ARPA-E, or the Advanced Research Projects Agency–Energy, is an agency within the United States Department of Energy tasked with funding the research and development of advanced energy technologies.[1] The goal of the agency is to improve U.S. economic prosperity, national security, and environmental well-being. ARPA-E typically funds short-term research projects with the potential for a transformative impact.[2] It is inspired by the Defense Advanced Research Projects Agency (DARPA).[3]
The program directors at ARPA-E serve limited terms, in an effort to reduce bureaucracy and bias.[1] Since January 2023, the director has been Evelyn Wang.[4]
ARPA-E was initially conceived in a report by the National Academies entitled Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. The report described a need for the US to stimulate innovation and develop clean, affordable, and reliable energy.[5] ARPA-E was officially created by the America COMPETES Act, authored by Congressman Bart Gordon,[6] within the United States Department of Energy (DOE) in 2007, though without a budget. The initial budget of about $400 million was a part of the economic stimulus bill of February 2009.[7] In early January 2011, the America COMPETES Reauthorization Act of 2010 made additional changes to ARPA-E's structure; this structure is codified in Title 42, Chapter 149, Subchapter XVII, § 16538 of the United States Code.
Among its main provisions, Section 16538 provides that ARPA-E shall achieve its goals through energy technology projects by doing the following:
Like DARPA does for military technology, ARPA-E is intended to fund high-risk, high-reward research involving government labs, private industry, and universities that might not otherwise be pursued.[8] ARPA-E has four objectives:
ARPA-E was created as part of the America COMPETES Act signed by President George W. Bush in August 2007. President Barack Obama announced the launch of ARPA-E on April 27, 2009 as part of an announcement about federal investment in research and development and science education. Soon after its launch, ARPA-E released its first Funding Opportunity Announcement, offering $151 million in total with individual awards ranging from $500,000 to $9 million. Applicants submitted eight-page "concept papers" that outlined the technical concept; some were invited to submit full applications.[9]
Arun Majumdar, former deputy director of the Lawrence Berkeley National Laboratory, was appointed the first director of ARPA-E in September 2009, over six months after the organization was first funded.[10] U.S. Secretary of Energy Steven Chu presided over the inaugural ARPA-E Energy Innovation Summit on March 1–3, 2010 in Washington, D.C.[11]
2006: The National Academies released the “Rising Above the Gathering Storm” report.
August 9, 2007: President George W. Bush signed into law the America COMPETES Act, which codified many of the recommendations in the National Academies report, thus creating ARPA-E.
April 27, 2009: President Barack Obama allocated $400 million in funding to ARPA-E from the American Recovery and Reinvestment Act of 2009.
September 18, 2009: President Barack Obama nominated Arun Majumdar as Director of ARPA-E.
October 22, 2009: The Senate confirmed Arun Majumdar as ARPA-E's first Director.
October 26, 2009: The Department of Energy awarded $151 million in Recovery Act funds for 37 energy research projects under ARPA-E's first Funding Opportunity Announcement.
December 7, 2009: U.S. Secretary of Energy Steven Chu announced ARPA-E's second round of funding opportunities in the areas of “Electrofuels”, “Innovative Materials & Processes for Advanced Carbon Capture Technologies (IMPACCT)”, and “Batteries for Electrical Energy Storage in Transportation (BEEST)”.
March 1–3, 2010: ARPA-E hosted the inaugural “Energy Innovation Summit”, which attracted over 1,700 participants.
March 2, 2010: U.S. Secretary of Energy Steven Chu announced ARPA-E's third round of funding opportunities in the areas of “Grid-Scale Rampable Intermittent Dispatchable Storage (GRIDS)”, “Agile Delivery of Electrical Power Technology (ADEPT)”, and “Building Energy Efficiency Through Innovative Thermodevices (BEET-IT)”.
April 29, 2010: Vice President Joe Biden announced 37 awarded projects under ARPA-E's second funding opportunity.
July 12, 2010: The Department of Energy awarded $92 million for 42 research projects under ARPA-E's third funding opportunity.
December 8, 2014: Ellen Williams was confirmed by the Senate as Director of ARPA-E.[12]
June 28, 2019: Lane Genatowski was confirmed by the Senate as Director of ARPA-E.[13]
December 22, 2022: Evelyn Wang was confirmed by the Senate as director of ARPA-E.
ARPA-E was created to fund energy technology projects that translate scientific discoveries and inventions into technological innovations, and accelerate technological advances in high-risk areas that industry is not likely to pursue independently. This goal is similar to the work of the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE), which advances clean energy projects according to established roadmaps.[14] However, ARPA-E also funds advanced technology in other spaces such as natural gas and grid technology.[15][16][17] ARPA-E does not fund incremental improvements to existing technologies or roadmaps established by existing DOE programs.[18]
ARPA-E programs are created through a process of debate surrounding the technical/scientific merits and challenges of potential research areas. Programs must satisfy both “technology push”—the technical merit of innovative platform technologies that can be applied to energy systems—and “market pull”—the potential market impact and cost-effectiveness of the technology.
The program creation process begins with a “deep dive” where an energy problem is explored to identify potential topics for program development. ARPA-E Program Directors then hold technical workshops to gather input from experts in various disciplines about current and upcoming technologies. To date, ARPA-E has hosted or co-hosted 13 technical workshops.
Following each workshop, the Program Director proposes a new program and defends the program against a set of criteria that justifies its creation. The Program Director then refines the program, incorporating feedback, and seeks approval from the Director. If successful, a new ARPA-E program is created, and a funding opportunity announcement (FOA) is released soliciting project proposals.
The ARPA-E peer review process is designed to help drive program success. During proposal review, ARPA-E solicits external feedback from leading experts in a particular field. ARPA-E reviewers evaluate applications over several weeks and then convene a review panel.
One notable facet of ARPA-E's evaluation process is the opportunity for the applicant to read reviewers’ comments and provide a rebuttal that the Agency reviews before making funding decisions. The applicant response period allows ARPA-E to avoid misunderstandings by asking clarifying questions that enable ARPA-E to make informed decisions.[5]
The U.S. Department of Energy and ARPA-E awarded $151 million in American Recovery and Reinvestment Act funds on October 26, 2009 for 37 energy research projects. It supported renewable energy technologies for solar cells, wind turbines, geothermal drilling, biofuels, and biomass energy crops. The grants also supported energy efficiency technologies, including power electronics and engine-generators for advanced vehicles, devices for waste heat recovery, smart glass and control systems for smart buildings, light-emitting diodes (LEDs), reverse-osmosis membranes for water desalination, catalysts to split water into hydrogen and oxygen, improved fuel cell membranes, and more energy-dense magnetic materials for electronic components. Six grants went to energy storage technologies, including an ultracapacitor, improved lithium-ion batteries, metal-air batteries that use ionic liquids, liquid sodium batteries, and liquid metal batteries.[19][20][21] Other awards went to projects that conducted research and development on a bioreactor with potential to produce gasoline directly from sunlight and carbon dioxide, and crystal growth technology to lower the cost of light-emitting diodes.[19][20][21]
The U.S. Secretary of Energy Steven Chu announced a second round of ARPA-E funding opportunities on December 7, 2009.[22] ARPA-E solicited projects that focused on three critical areas: Biofuels from Electricity (Electrofuels), Batteries for Electrical Energy Storage in Transportation (BEEST), and Innovative Materials and Processes for Advanced Carbon Capture Technologies (IMPACCT). On April 29, 2010, Vice President Biden announced the 37 awardees that ARPA-E had selected from over 540 initial concept papers.[23] The awards ranged from around $500,000 to $6 million and involved a variety of national laboratories, universities, and companies.[24]
Unlike the First Funding Opportunity, the Second Funding Opportunity designated project submissions by category. Of the selected projects, 14 focused on IMPACCT, 13 focused on Electrofuels, and 10 focused on BEEST. For example, Harvard Medical School submitted a project under Electrofuels entitled "Engineering a Bacterial Reverse Fuel Cell," which focuses on development of a bacterium that can convert carbon dioxide into gasoline. MIT received an award under BEEST for a proposal entitled "Semi-Solid Rechargeable Fuel Battery," a concept for producing lighter, smaller, and cheaper vehicle batteries. IMPACCT projects included the GE Global Research Center's "CO2 Capture Process Using Phase-Changing Absorbents," which focuses on a liquid that turns solid when exposed to carbon dioxide.[23]
On March 2, 2010, at the inaugural ARPA-E Energy Innovation Summit, U.S. Energy Secretary Steven Chu announced a third funding opportunity for ARPA-E projects. Like the second funding opportunity, ARPA-E solicited projects by category: Grid-Scale Rampable Intermittent Dispatchable Storage (GRIDS), Agile Delivery of Electrical Power Technology (ADEPT), and Building Energy Efficiency Through Innovative Thermodevices (BEET-IT). GRIDS welcomed projects that focused on widespread deployment of cost-effective grid-scale energy storage in two specific areas: 1) proof-of-concept storage component projects focused on validating new, over-the-horizon electrical energy storage concepts, and 2) advanced system prototypes that address critical shortcomings of existing grid-scale energy storage technologies. ADEPT focused on investing in materials for fundamental advances in soft magnetics, high-voltage switches, and reliable, high-density charge storage in three categories: 1) fully integrated, chip-scale power converters for applications including, but not limited to, compact, efficient drivers for solid-state lighting, distributed micro-inverters for photovoltaics, and single-chip power supplies for computers, 2) kilowatt-scale package-integrated power converters enabling applications such as low-cost, efficient inverters for grid-tied photovoltaics and variable-speed motors, and 3) lightweight, solid-state, medium-voltage energy conversion for high-power applications such as solid-state electrical substations and wind turbine generators. BEET-IT solicited projects regarding energy-efficient cooling technologies and air conditioners (AC) for buildings to save energy and reduce GHG emissions in the following areas: 1) cooling systems that use refrigerants with low global warming potential; 2) energy-efficient air conditioning (AC) systems for warm and humid climates with an increased coefficient of performance (COP); and 3) vapor compression AC systems for hot climates for re-circulating air loads with an increased COP.[25]
Secretary Chu announced the selection of 43 projects under GRIDS, ADEPT, and BEET-IT on July 12, 2010. The awards totaled $92 million and ranged from $400,000 to $5 million. The awards included 14 projects in ADEPT, 17 projects in BEET-IT, and 12 projects in GRIDS. Examples of awarded projects include a "Soluble Acid Lead Flow Battery" that pumps chemicals through a battery cell when electricity is needed (GRIDS), "Silicon Carbide Power Modules for Grid Scale Power Conversion" that uses advanced transistors to make the electrical grid more flexible and controllable (ADEPT), and an "Absorption-Osmosis Cooling Cycle," a new air conditioning system that uses water as a refrigerant, rather than chemicals (BEET-IT).[26]
ARPA-E's fourth round of funding was announced on April 20, 2011 and awarded projects in five technology areas: Plants Engineered To Replace Oil (PETRO), High Energy Advanced Thermal Storage (HEATS), Rare Earth Alternatives in Critical Technologies (REACT), Green Electricity Network Integration (GENI), and Solar Agile Delivery of Electrical Power Technology (Solar ADEPT). PETRO focused on projects that had systems to create biofuels from domestic sources such as tobacco and pine trees for half their current cost. REACT funded early-stage technology alternatives that reduced or eliminated the dependence on rare earth materials by developing substitutes in two key areas: electric vehicle motors and wind generators. HEATS funded projects that promoted advancement in thermal energy storage technology. GENI focused on funding software and hardware that could reliably control the grid network. Solar ADEPT accepted projects that integrated power electronics into solar panels and solar farms to extract and deliver energy more efficiently.
The awardees for the fourth funding opportunity were announced on September 29, 2011. The 60 projects received $156 million from the ARPA-E Fiscal Year 2011 budget. Examples of the awarded projects included a project that increases the production of turpentine, a natural liquid biofuel (PETRO); a project entitled "Manganese-Based Permanent Magnet," which reduces the cost of wind turbines and electric vehicles by developing a replacement for rare earth magnets based on an innovative composite using manganese material (REACT); a project entitled "HybriSol," which develops a heat battery to store energy from the sun (HEATS); a project that develops a new system that allows real-time, automated control over the transmission lines that make up the electric power grid (GENI); and a project that develops lightweight electronics to connect to photovoltaic solar panels to be installed on walls or rooftops.[27]
Since 2010, ARPA-E has hosted the Energy Innovation Summit. The 10th Summit was held July 8–10, 2019 in Denver, Colorado, and the 11th Summit was held March 17–19, 2021 at the Gaylord Convention Center, near Washington, D.C.[28]
Since its inception, ARPA-E has funded over 1,000 projects and attracted about $4.9 billion in private investment for 179 of these projects, with $2.6 billion invested in R&D by the US government. Published, peer-reviewed research articles are also a significant output, totaling 4,614. In addition, the program has generated 716 patents.[29]
|
https://en.wikipedia.org/wiki/Advanced_Research_Projects_Agency%E2%80%93Energy
|
An anti-pattern in software engineering, project management, and business processes is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive.[1][2] The term, coined in 1995 by computer programmer Andrew Koenig, was inspired by the book Design Patterns (which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective) and first published in his article in the Journal of Object-Oriented Programming.[3] A further paper in 1996 presented by Michael Ackroyd at the Object World West Conference also documented anti-patterns.[3]
It was, however, the 1998 book AntiPatterns that both popularized the idea and extended its scope beyond the field of software design to include software architecture and project management.[3] Other authors have extended it further since to encompass environmental, organizational, and cultural anti-patterns.[4]
According to the authors of Design Patterns, there are two key elements to an anti-pattern that distinguish it from a bad habit, bad practice, or bad idea:
A commonly used guide is a "rule of three" similar to that for patterns: to be an anti-pattern, it must have been witnessed occurring at least three times.[5]
Documenting anti-patterns can be an effective way to analyze a problem space and to capture expert knowledge.[6]
While some anti-pattern descriptions merely document the adverse consequences of the pattern, good anti-pattern documentation also provides an alternative, or a means to ameliorate the anti-pattern.[7]
In software engineering, anti-patterns include the big ball of mud (lack of) design, the god object (where a single class handles all control in a program rather than control being distributed across multiple classes), magic numbers (unique values with an unexplained meaning or multiple occurrences which could be replaced with a named constant), and poltergeists (ephemeral controller classes that only exist to invoke other methods on classes).[7]
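For instance, the magic-number anti-pattern and its usual remedy can be shown in a few lines of Python (a hypothetical snippet, not taken from the cited sources):

```python
# Magic-number anti-pattern: the meaning of 86400 is left for the reader to guess.
def is_stale(age_seconds):
    return age_seconds > 86400

# Remedy: replace the unexplained value with a named constant.
SECONDS_PER_DAY = 86400  # 24 h * 60 min * 60 s

def is_stale_refactored(age_seconds):
    return age_seconds > SECONDS_PER_DAY
```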
This indicates a software system that lacks a perceivable architecture. Although undesirable from a software engineering point of view, such systems are common in practice due to business pressures, developer turnover and code entropy.
The term was popularized in Brian Foote and Joseph Yoder's 1997 paper of the same name, which defines the term:
A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated.
The overall structure of the system may never have been well defined.
If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.
Foote and Yoder have credited Brian Marick as the originator of the "big ball of mud" term for this sort of architecture.[8]
Project management anti-patterns included in the AntiPatterns book include:
|
https://en.wikipedia.org/wiki/Anti-pattern
|
In computer networking, Energy-Efficient Ethernet (EEE) is a set of enhancements to twisted-pair, twinaxial, backplane, and optical fiber Ethernet physical-layer variants that reduce power consumption during periods of low data activity.[1] The intention is to reduce power consumption by at least half, while retaining full compatibility with existing equipment.[2]
The Institute of Electrical and Electronics Engineers (IEEE), through the IEEE 802.3az task force, developed the standard. The first study group had its call for interest in November 2006, and the official standards task force was authorized in May 2007.[3] The IEEE ratified the final standard in September 2010.[4] Some companies introduced technology to reduce the power required for Ethernet before the standard was ratified, using the name Green Ethernet.
Some energy-efficient switch integrated circuits were developed before the IEEE 802.3az Energy-Efficient Ethernet standard was finalized.[5][6]
In 2005, all the network interface controllers in the United States (in computers, switches, and routers) used an estimated 5.3 terawatt-hours of electricity.[7] According to a researcher at the Lawrence Berkeley Laboratory, Energy-Efficient Ethernet can potentially save an estimated US$450 million a year in energy costs in the US. Most of the savings would come from homes (US$200 million) and offices (US$170 million), and the remaining US$80 million from data centers.[8]
The power reduction is accomplished in a few ways. In Fast Ethernet and faster links, constant and significant energy is used by the physical layer as transmitters are active regardless of whether data is being sent. If they could be put into sleep mode when no data is being sent, that energy could be saved.[8] When the controlling software or firmware decides that no data needs to be sent, it can issue a low-power idle (LPI) request to the Ethernet controller's physical layer (PHY). The PHY will then send LPI symbols for a specified time onto the link, and then disable its transmitter. Refresh signals are sent periodically to maintain link signaling integrity. When there is data to transmit, a normal IDLE signal is sent for a predetermined period of time. The data link is considered to be always operational, as the receive signal circuit remains active even when the transmit path is in sleep mode.[9]
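As a purely illustrative toy model of the idle/wake behaviour just described, a Python sketch follows; the class, constant, and timing are invented for the example and are not part of IEEE 802.3az:

```python
import time

class ToyEeeLink:
    """Toy sketch: the transmitter sleeps during low-power idle and wakes before data."""

    WAKE_DELAY_S = 20e-6  # illustrative wake-up delay only, not a standard value

    def __init__(self):
        self.sleeping = False

    def low_power_idle(self):
        # Controller has nothing to send: signal LPI and switch the transmitter off.
        self.sleeping = True

    def send(self, frame):
        if self.sleeping:
            # Send normal IDLE for a predetermined period so the link can resynchronize.
            time.sleep(self.WAKE_DELAY_S)
            self.sleeping = False
        return len(frame)  # stand-in for actually transmitting the frame

link = ToyEeeLink()
link.low_power_idle()      # quiet period: the link drops into LPI
print(link.send(b"data"))  # wakes the transmitter, then sends 4 bytes
```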
Green Ethernet technology was a superset of the 802.3az standard. In addition to the link load power savings of Energy-Efficient Ethernet, Green Ethernet works in one of two ways. First, it detects link status, allowing each port on the switch to power down into a standby mode when a connected device, such as a computer, is not active. Second, it detects cable length and adjusts the power used for transmission accordingly. Standard switches provide enough power to send a signal up to 100 meters (330 ft).[10] However, this is often unnecessary in the SOHO environment, where 5 to 10 meters (16 to 33 ft) of cabling are typical between rooms. Moreover, small data centers can also benefit from this approach since the majority of cabling is confined to a single room with a few meters of cabling among servers and switches. In addition to the pure power-saving benefits of Green Ethernet, backing off the transmit power on shorter cable runs reduces alien crosstalk and improves the overall performance of the cabling system.
Green Ethernet also encompasses the use of more efficient circuitry in Ethernet chips, and the use of offload engines on Ethernet interface cards intended for network servers.[6] In April 2008, the term was used for switches, and, in July 2008, used with wireless routers that featured user-selectable off periods for Wi-Fi to further reduce energy consumption.[11]
Power savings of up to 80 percent were projected for Green Ethernet switches,[12] translating into a longer product life due to reduced heat.[13]
|
https://en.wikipedia.org/wiki/Energy-Efficient_Ethernet
|
In computing, cache replacement policies (also known as cache replacement algorithms or cache algorithms) are optimizing instructions or algorithms which a computer program or hardware-maintained structure can utilize to manage a cache of information. Caching improves performance by keeping recent or often-used data items in memory locations which are faster, or computationally cheaper to access, than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for new data.
The average memory reference time is[1]

T = m × Tm + Th + E

where

m = miss ratio = 1 − (hit ratio)
Tm = time to make a main-memory access when there is a miss (or, with a multi-level cache, the average memory reference time for the next-lower cache)
Th = latency: the time to reference the cache (should be the same for hits and misses)
E = secondary effects, such as queuing effects in multiprocessor systems
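As a quick worked example of the formula above, a minimal Python sketch follows; the cache parameters are made up purely for illustration:

```python
def average_reference_time(miss_ratio, miss_penalty, hit_latency, secondary=0.0):
    """Average memory reference time T = m * Tm + Th + E."""
    return miss_ratio * miss_penalty + hit_latency + secondary

# Hypothetical cache: 5% miss ratio, 100 ns miss penalty, 2 ns hit latency.
print(average_reference_time(0.05, 100e-9, 2e-9))  # 7e-09 seconds
```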
A cache has two primary figures of merit: latency and hit ratio. A number of secondary factors also affect cache performance.[1]
The hit ratio of a cache describes how often a searched-for item is found. More efficient replacement policies track more usage information to improve the hit rate for a given cache size.
The latency of a cache describes how long after requesting a desired item the cache can return that item when there is a hit. Faster replacement strategies typically keep track of less usage information—or, with a direct-mapped cache, no information—to reduce the time required to update the information. Each replacement strategy is a compromise between hit rate and latency.
Hit-rate measurements are typically performed on benchmark applications, and the hit ratio varies by application. Video and audio streaming applications often have a hit ratio near zero, because each bit of data in the stream is read once (a compulsory miss), used, and then never read or written again. Many cache algorithms (particularly LRU) allow streaming data to fill the cache, pushing out information which will soon be used again (cache pollution).[2] Other factors may be size, length of time to obtain, and expiration. Depending on cache size, no further caching algorithm to discard items may be needed. Algorithms also maintain cache coherence when several caches are used for the same data, such as multiple database servers updating a shared data file.
The most efficient caching algorithm would be to discard information which would not be needed for the longest time; this is known as Bélády's optimal algorithm, optimal replacement policy, or the clairvoyant algorithm. Since it is generally impossible to predict how far in the future information will be needed, this is unfeasible in practice. The practical minimum can be calculated after experimentation, and the effectiveness of a chosen cache algorithm can be compared.
When a page fault occurs, a set of pages is in memory. In the example, the sequence 5, 0, 1 is accessed by Frame 1, Frame 2, and Frame 3 respectively. When 2 is accessed, it replaces value 5 (which is in frame 1), predicting that value 5 will not be accessed in the near future. Because a general-purpose operating system cannot predict when 5 will be accessed, Bélády's algorithm cannot be implemented there.
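A minimal sketch of Bélády's clairvoyant policy in Python, assuming the whole future access sequence is known in advance (which is exactly what makes it impractical outside of offline analysis):

```python
def belady_misses(accesses, cache_size):
    """Simulate Bélády's optimal replacement and return the number of misses."""
    cache, misses = set(), 0
    for i, page in enumerate(accesses):
        if page in cache:
            continue
        misses += 1
        if len(cache) < cache_size:
            cache.add(page)
            continue
        future = accesses[i + 1:]
        # Evict the cached page whose next use lies farthest in the future
        # (pages never used again are the best victims of all).
        victim = max(cache, key=lambda p: future.index(p) if p in future else len(future))
        cache.remove(victim)
        cache.add(page)
    return misses

print(belady_misses([5, 0, 1, 2, 0, 3, 0, 4], cache_size=3))
```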
Random replacement selects an item and discards it to make space when necessary. This algorithm does not require keeping any access history. It has been used in ARM processors due to its simplicity,[3] and it allows efficient stochastic simulation.[4]
With this algorithm, the cache behaves like a FIFO queue; it evicts blocks in the order in which they were added, regardless of how often or how many times they were accessed before.
The cache behaves like a stack, and unlike a FIFO queue. The cache evicts the block added most recently first, regardless of how often or how many times it was accessed before.
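A small sketch contrasting the two queue-based policies just described, using Python's collections.deque; the function name and trace are invented for the example:

```python
from collections import deque

def count_misses(accesses, cache_size, evict_newest=False):
    """FIFO eviction by default; evict_newest=True gives the LIFO behaviour."""
    order, cache, misses = deque(), set(), 0
    for block in accesses:
        if block in cache:
            continue  # hits do not reorder anything under FIFO or LIFO
        misses += 1
        if len(cache) >= cache_size:
            victim = order.pop() if evict_newest else order.popleft()
            cache.remove(victim)
        order.append(block)
        cache.add(block)
    return misses

trace = ["a", "b", "c", "d", "a", "b"]
print(count_misses(trace, 3))                     # FIFO: 6 misses on this trace
print(count_misses(trace, 3, evict_newest=True))  # LIFO: 4 misses on this trace
```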
SIEVE is a simple eviction algorithm designed specifically for web caches, such as key-value caches and content delivery networks.
It uses the idea of lazy promotion and quick demotion.[5] Therefore, SIEVE does not update the global data structure at cache hits and delays the update until eviction time; meanwhile, it quickly evicts newly inserted objects because cache workloads tend to show high one-hit-wonder ratios, and most new objects are not worth keeping in the cache. SIEVE uses a single FIFO queue and a moving hand to select objects to evict. Objects in the cache have one bit of metadata indicating whether the object has been requested after being admitted into the cache. The eviction hand points to the tail of the queue at the beginning and moves toward the head over time. Compared with the CLOCK eviction algorithm, retained objects in SIEVE stay in their old positions. Therefore, new objects are always at the head, and the old objects are always at the tail. As the hand moves toward the head, new objects are quickly evicted (quick demotion), which is the key to the high efficiency of the SIEVE eviction algorithm. SIEVE is simpler than LRU, but achieves lower miss ratios than LRU, on par with state-of-the-art eviction algorithms. Moreover, on stationary skewed workloads, SIEVE is better than existing known algorithms including LFU.[6]
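A simplified Python sketch of the mechanism just described (FIFO insertion order, one "visited" bit per object, and a hand that moves from the tail toward the head); the class is illustrative and omits details such as stored values or a separate hash table:

```python
class SieveCache:
    """Simplified SIEVE: index 0 of `order` is the head (newest), the end is the tail."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = []       # FIFO order of keys
        self.visited = {}     # key -> one-bit "requested again" flag
        self.hand = None      # key the eviction hand currently points at

    def access(self, key):
        if key in self.visited:      # hit: lazy promotion, just set the bit
            self.visited[key] = True
            return True
        if len(self.order) >= self.capacity:
            self._evict()
        self.order.insert(0, key)    # new objects always enter at the head
        self.visited[key] = False
        return False

    def _evict(self):
        # Resume from where the hand stopped, or start at the tail (oldest object).
        i = self.order.index(self.hand) if self.hand in self.visited else len(self.order) - 1
        # Skip visited objects, clearing their bit; they keep their position.
        while self.visited[self.order[i]]:
            self.visited[self.order[i]] = False
            i = i - 1 if i > 0 else len(self.order) - 1   # wrap from head back to tail
        victim = self.order.pop(i)
        del self.visited[victim]
        # The hand moves one step toward the head for the next eviction.
        self.hand = self.order[i - 1] if i > 0 and self.order else None
```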
Discards least recently used items first. This algorithm requires keeping track of what was used and when, which is cumbersome. It requires "age bits" for cache lines, and tracks the least recently used cache line based on these age bits. When a cache line is used, the age of the other cache lines changes. LRU is a family of caching algorithms that includes 2Q by Theodore Johnson and Dennis Shasha[7] and LRU/K by Pat O'Neil, Betty O'Neil and Gerhard Weikum.[8] The access sequence for the example is A B C D E D F:
When A B C D is installed in the blocks with sequence numbers (increment 1 for each new access) and E is accessed, it is a miss and must be installed in a block. With the LRU algorithm, E will replace A because A has the lowest rank (A(0)). In the next-to-last step, D is accessed and the sequence number is updated. F is then accessed, replacing B – which had the lowest rank, (B(1)).
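A compact LRU sketch using Python's OrderedDict, which keeps entries in order and lets us move an entry to the most-recently-used end on every access:

```python
from collections import OrderedDict

class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # front = least recently used, back = most recent

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)             # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)      # evict the least recently used entry
        self.data[key] = value
```

Hardware LRU does the same bookkeeping with per-line age bits rather than a linked dictionary.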
Time-aware, least-recently-used (TLRU)[9] is a variant of LRU designed for when the contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a term: TTU (time to use), a timestamp of content (or a page) which stipulates the usability time for the content based on its locality and the content publisher. TTU provides more control to a local administrator in regulating network storage.
When content subject to TLRU arrives, a cache node calculates the local TTU based on the TTU assigned by the content publisher. The local TTU value is calculated with a locally-defined function. When the local TTU value is calculated, content replacement is performed on a subset of the total content of the cache node. TLRU ensures that less-popular and short-lived content is replaced with incoming content.
Unlike LRU, MRU discards the most-recently-used items first. At the 11th VLDB conference, Chou and DeWitt said: "When a file is being repeatedly scanned in a [looping sequential] reference pattern, MRU is the best replacement algorithm."[10] Researchers presenting at the 22nd VLDB conference noted that for random access patterns and repeated scans over large datasets (also known as cyclic access patterns), MRU cache algorithms have more hits than LRU due to their tendency to retain older data.[11] MRU algorithms are most useful in situations where the older an item is, the more likely it is to be accessed. The access sequence for the example is A B C D E C D B:
A B C D are placed in the cache, since there is space available. At the fifth access (E), the block which held D is replaced with E since this block was used most recently. At the next access (to D), C is replaced since it was the block accessed just before D.
An SLRU cache is divided into two segments: probationary and protected. Lines in each segment are ordered from most- to least-recently-accessed. Data from misses is added to the cache at the most-recently-accessed end of the probationary segment. Hits are removed from where they reside and added to the most-recently-accessed end of the protected segment; lines in the protected segment have been accessed at least twice. The protected segment is finite; migration of a line from the probationary segment to the protected segment may force the migration of the LRU line in the protected segment to the most-recently-used end of the probationary segment, giving this line another chance to be accessed before being replaced. The size limit of the protected segment is an SLRU parameter which varies according to I/O workload patterns. When data must be discarded from the cache, lines are obtained from the LRU end of the probationary segment.[12]
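A rough sketch of the two-segment structure described above, using one ordered dictionary per segment; the class name and sizes are illustrative:

```python
from collections import OrderedDict

class SlruCache:
    def __init__(self, probation_size, protected_size):
        self.probation = OrderedDict()   # lines seen once; front = LRU end
        self.protected = OrderedDict()   # lines seen at least twice
        self.probation_size = probation_size
        self.protected_size = protected_size

    def access(self, key):
        if key in self.protected:                  # hit in protected: refresh recency
            self.protected.move_to_end(key)
            return True
        if key in self.probation:                  # hit in probation: promote
            del self.probation[key]
            if len(self.protected) >= self.protected_size:
                demoted, _ = self.protected.popitem(last=False)
                self._insert_probation(demoted)    # demoted line gets another chance
            self.protected[key] = True
            return True
        self._insert_probation(key)                # miss: enter probation at MRU end
        return False

    def _insert_probation(self, key):
        if len(self.probation) >= self.probation_size:
            self.probation.popitem(last=False)     # discard from the probationary LRU end
        self.probation[key] = True
```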
LRU may be expensive in caches with higher associativity. Practical hardware usually employs an approximation to achieve similar performance at a lower hardware cost.
For CPU caches with large associativity (generally more than four ways), the implementation cost of LRU becomes prohibitive. In many CPU caches, an algorithm that almost always discards one of the least recently used items is sufficient; many CPU designers choose a PLRU algorithm, which only needs one bit per cache item to work. PLRU typically has a slightly worse miss ratio, slightly better latency, uses slightly less power than LRU, and has a lower overhead than LRU.
Bits work as a binary tree of one-bit pointers which point to a less-recently-used sub-tree. Following the pointer chain to the leaf node identifies the replacement candidate. With an access, all pointers in the chain from the accessed way's leaf node to the root node are set to point to a sub-tree which does not contain the accessed path. The access sequence in the example is A B C D E:
When there is access to a value (such as A) and it is not in the cache, it is loaded from memory and placed in the block where the arrows are pointing in the example. After that block is placed, the arrows are flipped to point the opposite way. A, B, C and D are placed; E replaces A as the cache fills because that was where the arrows were pointing, and the arrows which led to A flip to point in the opposite direction (to B, the block which will be replaced on the next cache miss).
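A sketch of tree-PLRU for a single four-way set, with a root bit and one bit per pair of ways (the bit-value convention here is one common choice, not the only one):

```python
class TreePlru4Way:
    """bits[0] = root, bits[1] = left pair (ways 0-1), bits[2] = right pair (ways 2-3).
    A bit of 0 points to the left subtree, 1 to the right; each bit points toward
    the (approximately) less recently used side."""

    def __init__(self):
        self.bits = [0, 0, 0]

    def victim(self):
        # Follow the pointers from the root to a leaf: that way is the candidate.
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def touch(self, way):
        # On an access, set the bits on the path to point away from the accessed way.
        if way in (0, 1):
            self.bits[0] = 1                    # root now points at the right pair
            self.bits[1] = 1 if way == 0 else 0
        else:
            self.bits[0] = 0                    # root now points at the left pair
            self.bits[2] = 1 if way == 2 else 0

plru = TreePlru4Way()
for way in (0, 1, 2, 3):
    plru.touch(way)
print(plru.victim())   # 0 - after touching ways 0..3 in order, way 0 is the candidate
```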
The LRU algorithm cannot be implemented in the critical path of computer systems, such as operating systems, due to its high overhead; Clock, an approximation of LRU, is commonly used instead. Clock-Pro is an approximation of LIRS for low-cost implementation in systems.[13] Clock-Pro has the basic Clock framework, with three advantages. It has three "clock hands" (unlike Clock's single "hand"), and can approximately measure the reuse distance of data accesses. Like LIRS, it can quickly evict one-time-access or low-locality data items. Clock-Pro is as complex as Clock, and is easy to implement at low cost. The buffer-cache replacement implementation in the 2017 version of Linux combines LRU and Clock-Pro.[14][15]
The LFU algorithm counts how often an item is needed; those used less often are discarded first. This is similar to LRU, except that how many times a block was accessed is stored instead of how recently. While running an access sequence, the block which was used the fewest times will be removed from the cache.
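A minimal LFU sketch that keeps one use count per block (ties are broken arbitrarily here; variants such as LFUDA below add aging to avoid stale high counts):

```python
class LfuCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}   # block -> number of accesses so far

    def access(self, block):
        if block in self.counts:
            self.counts[block] += 1
            return True                   # hit
        if len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)   # least frequently used
            del self.counts[victim]
        self.counts[block] = 1
        return False                      # miss
```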
The least frequent recently used (LFRU)[16] algorithm combines the benefits of LFU and LRU. LFRU is suitable for network cache applications such as ICN, CDNs, and distributed networks in general. In LFRU, the cache is divided into two partitions: privileged and unprivileged. The privileged partition is protected and, if content is popular, it is pushed into the privileged partition. In replacing the privileged partition, LFRU evicts content from the unprivileged partition, pushes content from the privileged to the unprivileged partition, and inserts new content into the privileged partition. LRU is used for the privileged partition and an approximated LFU (ALFU) algorithm for the unprivileged partition.
A variant, LFU with dynamic aging (LFUDA), uses dynamic aging to accommodate shifts in a set of popular objects; it adds a cache-age factor to the reference count when a new object is added to the cache or an existing object is re-referenced. LFUDA increments cache age when evicting blocks by setting it to the evicted object's key value, and the cache age is always less than or equal to the minimum key value in the cache.[17] If an object was frequently accessed in the past and becomes unpopular, it will remain in the cache for a long time (preventing newly- or less-popular objects from replacing it). Dynamic aging reduces the number of such objects, making them eligible for replacement, and LFUDA reduces cache pollution caused by LFU when a cache is small.
RRIP-style policies are the basis for other cache replacement policies, including Hawkeye.[18]
RRIP[19] is a flexible policy, proposed by Intel, which attempts to provide good scan resistance while allowing older cache lines that have not been reused to be evicted. All cache lines have a prediction value, the RRPV (re-reference prediction value), that should correlate with when the line is expected to be reused. The RRPV is usually high on insertion; if a line is not reused soon, it will be evicted to prevent scans (large amounts of data used only once) from filling the cache. When a cache line is reused the RRPV is set to zero, indicating that the line has been reused once and is likely to be reused again.
On a cache miss, the line with an RRPV equal to the maximum possible RRPV is evicted; with 3-bit values, a line with an RRPV of 2³ − 1 = 7 is evicted. If no lines have this value, all RRPVs in the set are increased by 1 until one reaches it. A tie-breaker is needed, and usually, it is the first line on the left. The increase is needed to ensure that older lines are aged properly and will be evicted if they are not reused.
SRRIP inserts lines with an RRPV value of maxRRPV; a line which has just been inserted will be the most likely to be evicted on a cache miss.
SRRIP performs well normally, but suffers when the working set is much larger than the cache size and causes cache thrashing. This is remedied by inserting lines with an RRPV value of maxRRPV most of the time, and inserting lines with an RRPV value of maxRRPV − 1 randomly with a low probability. This causes some lines to "stick" in the cache, and helps prevent thrashing. BRRIP degrades performance, however, on non-thrashing accesses. SRRIP performs best when the working set is smaller than the cache, and BRRIP performs best when the working set is larger than the cache.
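A sketch of one RRIP set following the description above: 3-bit RRPVs, hits reset the RRPV to zero, eviction ages lines until one reaches the maximum, and the insertion value distinguishes SRRIP from BRRIP. The class and parameter names are invented for the example:

```python
import random

MAX_RRPV = 7   # 3-bit re-reference prediction values: 0..7

class RripSet:
    def __init__(self, num_ways, brrip_probability=0.0):
        self.lines = {}                  # tag -> RRPV
        self.num_ways = num_ways
        self.brrip_probability = brrip_probability   # 0.0 behaves as plain SRRIP

    def access(self, tag):
        if tag in self.lines:
            self.lines[tag] = 0          # reused: likely to be reused again soon
            return True
        if len(self.lines) >= self.num_ways:
            self._evict()
        self.lines[tag] = self._insertion_rrpv()
        return False

    def _evict(self):
        # Age every line until some line reaches MAX_RRPV, then evict the first such line.
        while all(v < MAX_RRPV for v in self.lines.values()):
            for tag in self.lines:
                self.lines[tag] += 1
        victim = next(t for t, v in self.lines.items() if v == MAX_RRPV)
        del self.lines[victim]

    def _insertion_rrpv(self):
        if random.random() < self.brrip_probability:
            return MAX_RRPV - 1          # the occasional "sticky" BRRIP insertion
        return MAX_RRPV                  # the common case, as described above
```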
DRRIP[19] uses set dueling[20] to select whether to use SRRIP or BRRIP. It dedicates a few sets (typically 32) to use SRRIP and another few to use BRRIP, and uses a policy counter which monitors set performance to determine which policy will be used by the rest of the cache.
Bélády's algorithm is the optimal cache replacement policy, but it requires knowledge of the future to evict lines that will be reused farthest in the future. A number of replacement policies have been proposed which attempt to predict future reuse distances from past access patterns,[21] allowing them to approximate the optimal replacement policy. Some of the best-performing cache replacement policies attempt to imitate Bélády's algorithm.
Hawkeye[18] attempts to emulate Bélády's algorithm by using past accesses by a PC to predict whether the accesses it produces are cache-friendly (used later) or cache-averse (not used later). It samples a number of non-aligned cache sets, uses a history of length 8× the cache size, and emulates Bélády's algorithm on these accesses. This allows the policy to determine which lines should have been cached and which should not, predicting whether an instruction is cache-friendly or cache-averse. This data is then fed into an RRIP; accesses from cache-friendly instructions have a lower RRPV value (likely to be evicted later), and accesses from cache-averse instructions have a higher RRPV value (likely to be evicted sooner). The RRIP backend makes the eviction decisions. The sampled cache and OPT generator set the initial RRPV value of the inserted cache lines. Hawkeye won the CRC2 cache championship in 2017,[22] and Harmony[23] is an extension of Hawkeye which improves prefetching performance.
Mockingjay[24] tries to improve on Hawkeye in several ways. It drops the binary prediction, allowing it to make more fine-grained decisions about which cache lines to evict, and leaves the decision about which cache line to evict for when more information is available.
Mockingjay keeps a sampled cache of unique accesses, the PCs that produced them, and their timestamps. When a line in the sampled cache is accessed again, the time difference will be sent to the reuse distance predictor (RDP). The RDP uses temporal difference learning,[25] where the new RDP value will be increased or decreased by a small number to compensate for outliers; the number is calculated as w = min(1, timestamp difference / 16). If the value has not been initialized, the observed reuse distance is inserted directly. If the sampled cache is full and a line needs to be discarded, the RDP is instructed that the PC that last accessed it produces streaming accesses.
On an access or insertion, the estimated time of reuse (ETR) for this line is updated to reflect the predicted reuse distance. On a cache miss, the line with the highest ETR value is evicted. Mockingjay has results which are close to the optimal Bélády's algorithm.
A number of policies have attempted to use perceptrons, Markov chains or other types of machine learning to predict which line to evict.[26][27] Learning augmented algorithms also exist for cache replacement.[28][29]
LIRS is a page replacement algorithm with better performance than LRU and other, newer replacement algorithms. Reuse distance is a metric for dynamically ranking accessed pages to make a replacement decision.[30] LIRS addresses the limits of LRU by using recency to evaluate inter-reference recency (IRR) to make a replacement decision.
In the diagram, X indicates that a block is accessed at a particular time. If block A1 is accessed at time 1, its recency will be 0; this is the first-accessed block and the IRR will be 1, since it predicts that A1 will be accessed again in time 3. In time 2, since A4 is accessed, the recency will become 0 for A4 and 1 for A1; A4 is the most recently accessed object, and the IRR will become 4. At time 10, the LIRS algorithm will have two sets: an LIR set = {A1, A2} and an HIR set = {A3, A4, A5}. At time 10, if there is access to A4 a miss occurs; LIRS will evict A5 instead of A2 because of its greater recency.
Adaptive replacement cache (ARC) constantly balances between LRU and LFU to improve the combined result.[31] It improves SLRU by using information about recently-evicted cache items to adjust the size of the protected and probationary segments to make the best use of available cache space.[32]
Clock with adaptive replacement (CAR) combines the advantages of ARC and Clock. CAR performs comparably to ARC, and outperforms LRU and Clock. Like ARC, CAR is self-tuning and requires no user-specified parameters.
The multi-queue replacement (MQ) algorithm was developed to improve the performance of a second-level buffer cache, such as a server buffer cache, and was introduced in a paper by Zhou, Philbin, and Li.[33] The MQ cache contains m LRU queues: Q0, Q1, ..., Qm−1. The value of m represents a hierarchy based on the lifetime of all blocks in that queue.[34]
Pannier[35] is a container-based flash caching mechanism which identifies containers whose blocks have variable access patterns. Pannier has a priority-queue-based survival-queue structure to rank containers based on their survival time, which is proportional to live data in the container.
Static analysis determines which accesses are cache hits or misses to indicate the worst-case execution time of a program.[36] An approach to analyzing properties of LRU caches is to give each block in the cache an "age" (0 for the most recently used) and compute intervals for possible ages.[37] This analysis can be refined to distinguish cases where the same program point is accessible by paths that result in misses or hits.[38] An efficient analysis may be obtained by abstracting sets of cache states by antichains which are represented by compact binary decision diagrams.[39]
LRU static analysis does not extend to pseudo-LRU policies. According to computational complexity theory, static-analysis problems posed by pseudo-LRU and FIFO are in higher complexity classes than those for LRU.[40][41]
|
https://en.wikipedia.org/wiki/Cache_replacement_policies
|
The ITU-T Recommendation E.212 defines mobile country codes (MCC) as well as mobile network codes (MNC).
The mobile country code consists of three decimal digits and the mobile network code consists of two or three decimal digits (for example: MNC of 001 is not the same as MNC of 01). The first digit of the mobile country code identifies the geographic region as follows (the digits 1 and 8 are not used):
An MCC is used in combination with an MNC (a combination known as an "MCC/MNC tuple") to uniquely identify a mobile network operator (carrier) using the GSM (including GSM-R), UMTS, LTE, and 5G public land mobile networks. Some but not all CDMA, iDEN, and satellite mobile networks are identified with an MCC/MNC tuple as well. For WiMAX networks, a globally unique Broadband Operator ID can be derived from the MCC/MNC tuple.[1] TETRA networks use the mobile country code from ITU-T Recommendation E.212 together with a 14-bit binary mobile network code (T-MNC) where only values between 0 and 9999 are used.[2] However, a TETRA network may be assigned an E.212 network code as well.[3] Some network operators do not have their own radio access network at all. These are called mobile virtual network operators (MVNO) and are marked in the tables as such. Note that MVNOs without their own MCC/MNC (that is, they share the MCC/MNC of their host network) are not listed here.
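As a small illustration of the structure just described, the following Python sketch splits a PLMN identifier string into its MCC/MNC tuple; the helper and the example value are invented, and the caller must already know whether the network uses a 2- or 3-digit MNC:

```python
def split_plmn(plmn, mnc_digits):
    """Split a PLMN code such as '310260' into its (MCC, MNC) tuple.

    Both parts are kept as strings because leading zeros are significant:
    an MNC of '01' is not the same as an MNC of '001'.
    """
    if mnc_digits not in (2, 3) or len(plmn) != 3 + mnc_digits:
        raise ValueError("expected 3 MCC digits followed by 2 or 3 MNC digits")
    return plmn[:3], plmn[3:]

print(split_plmn("310260", mnc_digits=3))   # ('310', '260')
```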
The following tables attempt to provide a complete list of mobile network operators. Country information, including ISO 3166-1 alpha-2 country codes, is provided for completeness. Mostly for historical reasons, one E.212 MCC may correspond to multiple ISO country codes (e.g., MCC 362 corresponds to BQ, CW, and SX). Some operators also choose to use an MCC outside the geographic area that it was assigned to (e.g., Digicel uses the Jamaica MCC throughout the Caribbean). ITU-T updates an official list of mobile network codes in its Operational Bulletins, which are published twice a month.[4] ITU-T also publishes complete lists; as of January 2024, the list issued on 15 November 2023 was current, containing all MCC/MNC assignments made before that date.[5] The official list is often incomplete as national MNC authorities do not forward changes to the ITU in a timely manner. The official list does not provide additional details such as bands and technologies and may not list disputed territories such as Abkhazia or Kosovo.
|
https://en.wikipedia.org/wiki/Mobile_network_code
|
The Condor was an overnight fast freight train service operated by British Railways between London and Glasgow from 1959 until 1965, with all freight carried in containers. The name was derived from 'CONtainers DOoR-to-Door'.[1]
Following the 1955 Modernisation Plan, British Railways embarked on a series of modernisation plans in all areas of operation, including freight.
Faster freight services had been a goal as far back as the end of World War I, with fast, overnight services between major marshalling yards. 'Liner' or trunked services were scheduled long-haul freight services, between regional freight depots, usually run overnight. If a wagon load was in the marshalling yard that day, it could have a guaranteed next-day arrival at a similar yard, even travelling the length of the country. Part of the goal was to reduce marshalling for the railway company, who wished to concentrate freight marshalling at fewer, larger and better equipped marshalling yards. In 1928, the LNER had introduced the Green Arrow service.
By the 1950s, there were additional target goals: still a faster freight service to be more attractive than the growing competition from road haulage, but mostly a reduction in operating costs by reducing the manual effort needed in handling freight. A key part of this was to be containerisation, replacing the network of railway goods sheds and manual loading in and out of vans, by pre-loaded containers from the customer factories loaded onto railway wagons by mechanical cranes.[2] There would also be a centralisation of freight services: as well as the increasing development of and investment in marshalling yards,[i] as much freight as possible would become block trains, where a single rake of freight wagons shuttled continuously between two large depots, without needing to stop for shunting operations. Containers were key to this: road haulage would provide local flexibility to move the loads to and from the customer warehouses and the rail operation would concentrate on rapid transfers between a handful of large depots.
The 'container' to be used for this traffic was not the modern familiar stackable intermodal container or TEU, but a much earlier version, the railway conflat.[3][4] These were smaller, lighter, wooden containers which resembled a demounted railway wagon body, including the curved roof. They dated from the 1920s in design and were sized for lifting by the mobile cranes of the day. The conflat wagons were four-wheeled, vacuum-braked, and could carry either one Type B container or two smaller Type A.
The Condor was the exemplar service for this new containerised operation. A single route would operate, linking the manufacturing base of Glasgow with the consumers of central London. Return traffic was largely imported raw materials, supplied from London's docks. The route was from Hendon on the Midland Main Line in North London to Gushetfaulds freight depot, near Glasgow South Side railway station,[5] running via Leeds and the Settle and Carlisle line.[6]
Each Condor train was of 27 four-wheeled conflats, of a new design with roller bearing axles to allow the fastest running and without the risk of stopping for a 'hot box'.[4] Each could carry one or two containers, the containers carrying up to 8 long tons (9.0 short tons; 8.1 t). The Conflats for Condor were heavier at 35.5 long tons (39.8 short tons; 36.1 t) than earlier examples and were later given their own TOPS code of FC.[7][ii] The train's gross weight could be up to 550 long tons (620 short tons; 560 t). The cost of hiring a container in 1962 was £16 or £18, depending on size, and this included road pickup and delivery by British Road Services lorries, inside Greater London or within 10 miles (16 km) of Glasgow. As well as the fixed formation of 27 conflat wagons, a specific pool of containers was dedicated to the service. Every wagon on the train always ran carrying a container, regardless of direction. If 27 loads were not available to fill every container, the surplus would be carried empty - this was to ensure a good supply of empty containers at both ends of the service to enable rapid loading of goods inbound from customers.
The service ran daily, one train each way, and ran overnight to obtain the clearest running. Both left almost simultaneously, after 7 pm, and would arrive some time before 6 am. The 10-hour service required only a very brief, two-minute,[8] stop at Carlisle,[iii] for the change of a crew shift, rather than because of any limitation of the train.
The first Condor services were hauled by pairs of the newly built Metro-Vick Type 2 Co-Bo locomotives, later known as the Class 28.[10] These were 1,200 bhp (890 kW) locomotives, used in pairs. Pairs were needed as the dieselisation process was still new to Britain and the more powerful Type 4 locomotives were in short supply and in demand for passenger services. The Class 28 had a relatively high tractive effort for a Type 2 loco, of 50,000 lbf (220 kN) compared to 42,000 lbf (190 kN) for the Sulzer Type 2. They also had five driven axles, rather than four, giving good traction without wheelslip. The Metro-Vicks were fitted for multiple working, so although two locos were needed, there was only one crew.[iv] Their 'Red Circle' connection system of multiple working was not widely used on BR, compared to the contemporary 'Blue Star', and few other classes used it; hence the Metro-Vicks were used throughout. The Condor service was well-suited to the Metro-Vicks, as the night working allowed a relatively constant power output, with little other traffic to cause signal checks.
Their Crossley two-stroke engines were unreliable though, and prone to black smoke when throttling up.[11] A further, more unusual, problem with the Metro-Vicks was with their front windscreens. These wrapped around the corners of the cab, to give a better view to the sides, but the engine's vibration could be enough to make the glass panes fall out of their frames. When cracking problems with their crankcases became evident after a few years, the locomotives were withdrawn from service and the engines rebuilt by Crossley.
If a locomotive failed, it was replaced by another, and often this would be a Sulzer Type 2, as the Metro-Vicks were only stabled at the ends of the service, not in between.[12] In rare cases, a steam locomotive might be all that was available, usually a Black 5. In either case there could be no multiple working and an extra crew was required.
The first Condor services began in the spring of 1959. The service was not an immediate success. By August 1959, the formation had been cut in half, now being hauled by only a single locomotive. Traffic grew, though, and within a year it was running at full traffic capacity.[13]
In 1961, the unreliable Metro-Vicks were all withdrawn temporarily for their engines to be refurbished by the makers, Crossley, in the hope of avoiding their problems. A further problem had developed, that of crankcase cracking in one particular corner.[10] Derby-Sulzer Type 2s, later renumbered as the Class 24s, took over the Condor,[14] with LMS Black Fives serving as a stopgap until a sufficient number of Type 2s were available.
When the Class 28s returned, they had also had their distinctive wrap-around windshields replaced with flat glass, which no longer tended to fall out. The class was redeployed to the Barrow depot, where they worked out the rest of their short careers mostly on passenger services until they were all withdrawn by 1968.[v]
In 1963 an additional service from Birmingham to Glasgow was added.[1] This ran from Aston in Birmingham to Glasgow. Class 24s were the usual motive power from its introduction on 17 January 1963, when D5082 hauled the Down train and D5083 the up train, until replaced by the first Freightliner service in 1965.[15]
After the Class 28s and Class 24s, the Condor was hauled by a single Type 4 locomotive.
Condor was successful, and to some extent this individual service became a victim of its own success. Richard Beeching's 1963 report The Reshaping of British Railways is better known for the cuts it imposed on the branch line network, but it also advocated a shift in almost all freight traffic to replace wagonload traffic with container services.[16] However, these containers would be the newly popular stackable rectangular containers, rather than the older railway standard containers, as used by Condor.
In the mid-1960s, BR's emphasis shifted to the new Freightliner service. Beeching's plan was for a national network of 55 container depots and by 1968, 17 of these were in operation, including Gushetfaulds. Condor was withdrawn in 1965. Most of the early adopters were existing customers, sending bulk trainload cargoes, although now packed into containers. An important one was Ford, who used this to integrate car production across Europe, shipping bodyshells for final assembly across the Channel, by the Dover–Dunkerque train ferry. The introduction of the new TOPS computer system also allowed all operations to be tracked as registered freight, between all depots.
The headboard was a British Railways Type 6. It was unique in two aspects: the backplate was in two colours, and the text was in a 'stencil' typeface, with vertical breaks in each letter. The two colours were maroon (left) for the London Midland Region and pale blue for the Scottish Region.[17]
Railway artist Terence Cuneo produced a poster, Night Freight, for BR(M), showing a Metro-Vick-hauled Condor crossing a Black 5 steam loco, outside a coaling depot.[18][19][20]
|
https://en.wikipedia.org/wiki/Condor_(express_freight)
|
Hauntology (a portmanteau of haunting and ontology, also spectral studies, spectralities, or the spectral turn) is a range of ideas referring to the return or persistence of elements from the social or cultural past, as in the manner of a ghost. The term is a neologism first introduced by French philosopher Jacques Derrida in his 1993 book Spectres of Marx. It has since been invoked in fields such as visual arts, philosophy, electronic music, anthropology, criminology,[1] politics, fiction, and literary criticism.[2]
While Christine Brooke-Rose had previously punned "dehauntological" (on "deontological") in Amalgamemnon (1984),[3] Derrida initially used "hauntology" for his idea of the atemporal nature of Marxism and its tendency to "haunt Western society from beyond the grave".[4] It describes a situation of temporal and ontological disjunction in which presence, especially socially and culturally, is replaced by a deferred non-origin.[2] The concept is derived from deconstruction, in which any attempt to locate the origin of identity or history must inevitably find itself dependent on an always-already existing set of linguistic conditions.[5] Despite being the central focus of Spectres of Marx, the word hauntology appears only three times in the book, and there is little consistency in how other writers define the term.[6]
In the 2000s, the term was applied to musicians by theorists Simon Reynolds and Mark Fisher, who were said to explore ideas related to temporal disjunction, retrofuturism, cultural memory, and the persistence of the past.
Hauntology has been used as a critical lens in various forms of media and theory, including music, aesthetics, political theory, architecture, Africanfuturism, Afrofuturism, Neo-futurism, Metamodernism, anthropology, and psychoanalysis.[2][7] Due to the difficulty in understanding the concept, there is little consistency in how other writers define the term.[6]
Hauntingsandghost storieshave existed for millennia, and reached a heydayin the West during the 19th century.[8]Incultural studies,Terry Castle(inThe Apparitional Lesbian) andAnthony Vidler(inThe Architectural Uncanny) predate Derrida.[9]
"Hauntology" originates from Derrida's discussion ofKarl MarxinSpectres of Marx, specifically Marx's proclamation that "aspectreis haunting Europe—the spectre of communism" inThe Communist Manifesto. Derrida calls on Shakespeare'sHamlet, particularly a phrase spoken by the titular character: "the time is out of joint".[5]The word functions as a deliberate near-homophoneto "ontology" in Derrida's native French (cf."hantologie",[ɑ̃tɔlɔʒi]and"ontologie",[ɔ̃tɔlɔʒi]).[10]
Derrida's prior work ondeconstruction, on concepts oftraceanddifférancein particular, serves as the foundation of his formulation of hauntology,[2]fundamentally asserting that there is no temporal point of pure origin but only an "always-alreadyabsent present".[11]Derrida sees hauntology as not only more powerful than ontology, but that "it would harbor within itselfeschatologyandteleologythemselves".[12]His writing inSpectresis marked by a preoccupation with the "death" ofcommunismafter the1991 fall of the Soviet Union, in particular after theorists such asFrancis Fukuyamaasserted thatcapitalismhad conclusively triumphed over other political-economic systems and reached the"end of history".[5]
Despite being the central focus ofSpectres of Marx, the word hauntology appears only three times in the book.[6]Peter Buse and Andrew Scott, discussing Derrida's notion of hauntology, explain:
Ghosts arrive from the past and appear in the present. However, the ghost cannot be properly said to belong to the past .... Does then the 'historical' person who is identified with the ghost properly belong to the present? Surely not, as the idea of a return from death fractures all traditional conceptions of temporality. The temporality to which the ghost is subject is thereforeparadoxical, at once they 'return' and make their apparitional debut [...] any attempt to isolate the origin of language will find its inaugural moment already dependent upon a system of linguistic differences that have been installed prior to the 'originary' moment (11).[5]
In the 2000s, the term was taken up by critics in reference to paradoxes found inpostmodernity, particularly contemporary culture's persistent recycling ofretro aestheticsand incapacity to escape old social forms.[5]Writers such asMark FisherandSimon Reynoldsused the term to describe amusical aestheticpreoccupied with this temporal disjunction and thenostalgiafor "lost futures".[4]So-called "hauntological" musicians are described as exploring ideas related to temporal disjunction,retrofuturism,cultural memory, and the persistence of the past.[13][14][5]
Anthropology has seen a widespread usage of hauntology as amethodologyacrossethnography,archaeology, andpsychological anthropology. In 2019Ethos, the journal of theSociety for Psychological Anthropologydedicated a full issue to hauntology, titledHauntology in Psychological Anthropology, and numerous books and journal articles have since appeared on the topic. In a book titledThe Hauntology of Everyday Life, psychological anthropologist Sadeq Rahimi asserts, "the very experience ofeveryday lifeis built around a process that we can call hauntogenic, and whose major by-product is a steady stream of ghosts."[15]
Justin Armstrong, building on Derrida, proposes a "spectralethnography" that "sees beyond the boundaries of actually spoken language and direct human contact to the interplay between space, place, objects, andtemporality".[16]Jeff Ferrell and Theo Kidynis, building on Armstrong, have developed further ideas of "ghost ethnography".[17]
Anthropologists Martha and Bruce Lincoln make a distinction between primary hauntings, in which the haunted recognize the reality and autonomy of metaphysical entities in a relatively uncritical, literal manner; and secondary hauntings, which treat ghosts as "textual residues" of history, or as tropes for "collective intrapsychic states" such as trauma and grief. As a case study, they use the example of Ba Chúc's secondary haunting, in which the state-controlled museums display the skulls of the dead and memorabilia, as opposed to traditional Vietnamese burial customs. This is contrasted with the "primary haunting" of Ba Chúc, the paranormal activity said to occur at an execution site marked by a tree.[18]
Kit Bauserman notes that forliteraryandcritical theorists, the ghost is "pure metaphor" and "a fictional vessel that co-opts their social agenda", whereas ethnographers and anthropologists "come the closest to engaging ghosts as beings".[19]Some scholars have argued that the "neat distinction quickly breaks down in ethnographic analysis" and that "it is far from clear that the presence of ghosts as metaphysical entities is primary."[20]
|
https://en.wikipedia.org/wiki/Hauntology
|
-logyis asuffixin the English language, used with words originally adapted fromAncient Greekending in-λογία(-logía).[1]The earliest English examples were anglicizations of the French-logie, which was in turn inherited from theLatin-logia.[2]The suffix became productive in English from the 18th century, allowing the formation of new terms with no Latin or Greek precedent.
The English suffix has two separate main senses, reflecting two sources of the-λογίαsuffix in Greek:[3]
Philologyis an exception: while its meaning is closer to the first sense, the etymology of the word is similar to the second sense.[8]
In English names for fields of study, the suffix-logyis most frequently found preceded by the euphonic connective voweloso that the word ends in-ology.[9]In these Greek words, therootis always a noun and-o-is thecombining vowelfor all declensions of Greek nouns. However, when new names for fields of study are coined in modern English, the formations ending in-logyalmost always add an-o-, except when the root word ends in an "l" or a vowel, as in these exceptions:[10]analogy,dekalogy,disanalogy,genealogy,genethlialogy,hexalogy;herbalogy(a variant ofherbology),mammalogy,mineralogy,paralogy,petralogy(a variant ofpetrology);elogy;heptalogy;antilogy,festilogy;trilogy,tetralogy,pentalogy;palillogy,pyroballogy;dyslogy;eulogy; andbrachylogy.[7]Linguists sometimes jokingly refer tohaplologyashaplogy(subjecting the wordhaplologyto the process of haplology itself).
Permetonymy, words ending in-logyare sometimes used to describe a subject rather than the study of it (e.g.,technology). This usage is particularly widespread in medicine; for example,pathologyis often used simply to refer to "the disease" itself (e.g., "We haven't found the pathology yet") rather than "the study of a disease".
Books, journals, and treatises about a subject also often bear the name of this subject (e.g., the scientific journalEcology).
When appended to other English words, the suffix can also be used humorously to createnonce words(e.g.,beerologyas "the study of beer"). As with otherclassical compounds, adding the suffix to an initial word-stem derived from Greek orLatinmay be used to lend grandeur or the impression of scientific rigor to humble pursuits, as incosmetology("the study of beauty treatment") orcynology("the study of dog training").
The -logy or -ology suffix is commonly used to indicate a finite series of artworks such as books or films. For paintings, the "tych" suffix is more common (e.g. diptych, triptych). Examples include trilogy (three works) and tetralogy (four works).
Further terms like duology (two, mostly in genre fiction), quadrilogy (four), and octalogy (eight) have been coined but are rarely used; for a series of ten, "decalog" is sometimes used (e.g. in the Virgin Decalog) instead of "decalogy".
|
https://en.wikipedia.org/wiki/-ology
|
Pretty Good Privacy(PGP) is anencryption programthat providescryptographicprivacyandauthenticationfordata communication. PGP is used forsigning, encrypting, and decrypting texts,e-mails, files, directories, and whole disk partitions and to increase thesecurityof e-mail communications.Phil Zimmermanndeveloped PGP in 1991.[4]
PGP and similar software follow the OpenPGP standard (RFC 4880), anopen standardforencryptingand decryptingdata. Modern versions of PGP areinteroperablewithGnuPGand other OpenPGP-compliant systems.[5]
The OpenPGP standard has received criticism for its long-lived keys and steep learning curve,[6] as well as the Efail security vulnerability that previously arose when select e-mail programs used OpenPGP with S/MIME.[7][8] The new OpenPGP standard (RFC 9580) has also been criticised by the maintainer of GnuPG, Werner Koch, who in response created his own specification, LibrePGP.[9] This response was divisive, with some embracing his alternative specification,[10] and others considering it to be insecure.[11]
PGP encryption uses a serial combination ofhashing,data compression,symmetric-key cryptography, and finallypublic-key cryptography; each step uses one of several supportedalgorithms. Each public key is bound to a username or an e-mail address. The first version of this system was generally known as aweb of trustto contrast with theX.509system, which uses a hierarchical approach based oncertificate authorityand which was added to PGP implementations later. Current versions of PGP encryption include options through an automated key management server.
A public key fingerprint is a shorter version of a public key. From a fingerprint, someone can validate that they have the correct corresponding public key. A fingerprint such as C3A6 5E46 7B54 77DF 3C4C 9790 4D22 B3CA 5B32 FF66 can be printed on a business card.[12][13]
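As a rough illustration only (not an OpenPGP implementation), the sketch below shows the general shape of the computation: RFC 4880 defines the version 4 fingerprint as the SHA-1 hash of the octet 0x99, a two-octet length, and the serialized public-key packet body. Here the packet body is assumed to be supplied already as `public_key_packet`, and placeholder bytes stand in for real key material.

```python
import hashlib

def v4_fingerprint(public_key_packet: bytes) -> str:
    """Format an OpenPGP-style (v4) fingerprint from a serialized public-key packet body."""
    prefix = b"\x99" + len(public_key_packet).to_bytes(2, "big")
    digest = hashlib.sha1(prefix + public_key_packet).hexdigest().upper()
    # group the 40 hex digits into the familiar blocks of four for printing
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# placeholder bytes stand in for a real serialized key packet
print(v4_fingerprint(b"\x04" + b"\x00" * 50))
```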
As PGP evolves, versions that support newer features andalgorithmscan create encrypted messages that older PGP systems cannot decrypt, even with a valid private key. Therefore, it is essential that partners in PGP communication understand each other's capabilities or at least agree on PGP settings.[14]
PGP can be used to send messages confidentially.[15] For this, PGP uses a hybrid cryptosystem combining symmetric-key encryption and public-key encryption. The message is encrypted using a symmetric encryption algorithm, which requires a symmetric key generated by the sender. The symmetric key is used only once and is also called a session key. The message and its session key are sent to the receiver. To protect the session key during transmission, it is encrypted with the receiver's public key. Only the private key belonging to the receiver can decrypt the session key, which is then used to symmetrically decrypt the message.
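The hybrid idea can be sketched with the Python `cryptography` package: a one-time AES session key encrypts the message, and the receiver's RSA public key encrypts (wraps) the session key. This is only a conceptual analogue under those assumptions; real OpenPGP messages use their own packet formats, cipher modes, and algorithm choices.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# receiver's key pair (in PGP the public key would come from the receiver's certificate)
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"meet at noon"

# 1. a one-time session key encrypts the message symmetrically
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. the session key itself is encrypted with the receiver's public key
wrapped_key = receiver_public.encrypt(session_key, oaep)

# receiver: unwrap the session key with the private key, then decrypt the message
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message
```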
PGP supports message authentication and integrity checking. The latter is used to detect whether a message has been altered since it was completed (the message integrity property) and the former to determine whether it was actually sent by the person or entity claimed to be the sender (a digital signature). Because the content is encrypted, any change to the message will cause decryption with the appropriate key to fail. The sender uses PGP to create a digital signature for the message with one of several supported public-key algorithms. To do so, PGP computes a hash, or digest, from the plaintext and then creates the digital signature from that hash using the sender's private key.
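A minimal hash-then-sign sketch, again using the generic primitives of the Python `cryptography` package rather than OpenPGP's own signature packet format and padding conventions:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
plaintext = b"the body of the message"

# the library hashes the plaintext with SHA-256 and signs the digest with the private key
signature = sender_private.sign(plaintext, pss, hashes.SHA256())

# anyone holding the sender's public key can check origin and integrity;
# verify() raises InvalidSignature if either the message or the signature was altered
sender_private.public_key().verify(signature, plaintext, pss, hashes.SHA256())
```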
Both when encrypting messages and when verifying signatures, it is critical that the public key used to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not a reliable assurance of that association; deliberate (or accidental) impersonation is possible. From its first version, PGP has always included provisions for distributing user's public keys in an 'identity certification', which is also constructed cryptographically so that any tampering (or accidental garble) is readily detectable. However, merely making a certificate that is impossible to modify without being detected is insufficient; this can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person or entity claiming it. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third-party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence that can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key.
The web of trust protocol was first described by Phil Zimmermann in 1992, in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
The web of trust mechanism has advantages over a centrally managedpublic key infrastructurescheme such as that used byS/MIMEbut has not been universally used. Users have to be willing to accept certificates and check their validity manually or have to simply accept them. No satisfactory solution has been found for the underlying problem.
In the (more recent) OpenPGP specification,trust signaturescan be used to support creation ofcertificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities.
PGP versions have always included a way to cancel ('revoke') public key certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to thecertificate revocation listsof centralised PKI schemes. Recent PGP versions have also supported certificate expiration dates.
The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key/private key cryptosystems have the same problem, even if in slightly different guises, and no fully satisfactory solution is known. PGP's original scheme at least leaves the decision as to whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a centralcertificate authoritybe accepted as correct.
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic or computational means. Indeed, in 1995,cryptographerBruce Schneiercharacterized an early version as being "the closest you're likely to get to military-grade encryption."[16]Early versions of PGP have been found to have theoretical vulnerabilities and so current versions are recommended.[17]In addition to protectingdata in transitover a network, PGP encryption can also be used to protect data in long-term data storage such as disk files. These long-term storage options are also known as data at rest, i.e. data stored, not in transit.
The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by directcryptanalysiswith current equipment and techniques.
In the original version, theRSAalgorithm was used to encrypt session keys. RSA's security depends upon theone-way functionnature of mathematicalinteger factoring.[18]Similarly, the symmetric key algorithm used in PGP version 2 wasIDEA, which might at some point in the future be found to have previously undetected cryptanalytic flaws. Specific instances of current PGP or IDEA insecurities (if they exist) are not publicly known. As current versions of PGP have added additional encryption algorithms, their cryptographic vulnerability varies with the algorithm used. However, none of the algorithms in current use are publicly known to have cryptanalytic weaknesses.
New versions of PGP are released periodically and vulnerabilities fixed by developers as they come to light. Any agency wanting to read PGP messages would probably use easier means than standard cryptanalysis, e.g.rubber-hose cryptanalysisorblack-bag cryptanalysis(e.g. installing some form oftrojan horseorkeystroke loggingsoftware/hardware on the target computer to capture encryptedkeyringsand their passwords). TheFBIhas already used this attack against PGP[19][20]in its investigations. However, any such vulnerabilities apply not just to PGP but to any conventional encryption software.
In 2003, an incident involving seized Psion PDAs belonging to members of the Red Brigades indicated that neither the Italian police nor the FBI were able to decrypt PGP-encrypted files stored on them.[21][unreliable source?]
A second incident in December 2006, (seeIn re Boucher), involvingUS customs agentswho seized alaptop PCthat allegedly containedchild pornography, indicates that US government agencies find it "nearly impossible" to access PGP-encrypted files. Additionally, a magistrate judge ruling on the case in November 2007 has stated that forcing the suspect to reveal his PGP passphrase would violate hisFifth Amendmentrights i.e. a suspect's constitutional right not to incriminate himself.[22][23]The Fifth Amendment issue was opened again as the government appealed the case, after which a federal district judge ordered the defendant to provide the key.[24]
Evidence suggests that as of 2007[update],British policeinvestigators are unable to break PGP,[25]so instead have resorted to usingRIPAlegislation to demand the passwords/keys. In November 2009 a British citizen was convicted under RIPA legislation and jailed for nine months for refusing to provide police investigators with encryption keys to PGP-encrypted files.[26]
PGP as acryptosystemhas been criticized for complexity of the standard, implementation and very low usability of the user interface[27]including by recognized figures in cryptography research.[28][29]It uses an ineffective serialization format for storage of both keys and encrypted data, which resulted in signature-spamming attacks on public keys of prominent developers ofGNU Privacy Guard. Backwards compatibility of the OpenPGP standard results in usage of relatively weak default choices of cryptographic primitives (CAST5cipher,CFBmode, S2K password hashing).[30]The standard has been also criticized for leaking metadata, usage of long-term keys and lack offorward secrecy. Popular end-user implementations have suffered from various signature-striping, cipher downgrade and metadata leakage vulnerabilities which have been attributed to the complexity of the standard.[31]
Phil Zimmermanncreated the first version of PGP encryption in 1991. The name, "Pretty Good Privacy" was inspired by the name of agrocerystore, "Ralph's Pretty Good Grocery", featured in radio hostGarrison Keillor's fictional town,Lake Wobegon.[32]This first version included asymmetric-key algorithmthat Zimmermann had designed himself, namedBassOmaticafter aSaturday Night Livesketch. Zimmermann had been a long-timeanti-nuclear activist, and created PGP encryption so that similarly inclined people might securely useBBSsand securely store messages and files. No license fee was required for its non-commercial use, and the completesource codewas included with all copies.
In a posting of June 5, 2001, entitled "PGP Marks 10th Anniversary",[33]Zimmermann describes the circumstances surrounding his release of PGP:
It was on this day in 1991 that I sent the first release of PGP to a couple of my friends for uploading to the Internet. First, I sent it to Allan Hoeltje, who posted it to Peacenet, an ISP that specialized in grassroots political organizations, mainly in the peace movement. Peacenet was accessible to political activists all over the world. Then, I uploaded it to Kelly Goen, who proceeded to upload it to a Usenet newsgroup that specialized in distributing source code. At my request, he marked the Usenet posting as "US only". Kelly also uploaded it to many BBS systems around the country. I don't recall if the postings to the Internet began on June 5th or 6th.
It may be surprising to some that back in 1991, I did not yet know enough about Usenet newsgroups to realize that a "US only" tag was merely an advisory tag that had little real effect on how Usenet propagated newsgroup postings. I thought it actually controlled how Usenet routed the posting. But back then, I had no clue how to post anything on a newsgroup, and didn't even have a clear idea what a newsgroup was.
PGP found its way onto theInternetand rapidly acquired a considerable following around the world. Users and supporters included dissidents in totalitarian countries (some affecting letters to Zimmermann have been published, some of which have been included in testimony before the US Congress),civil libertariansin other parts of the world (see Zimmermann's published testimony in various hearings), and the 'free communications' activists who called themselvescypherpunks(who provided both publicity and distribution); decades later,CryptoPartyactivists did much the same viaTwitter.
Shortly after its release, PGP encryption found its way outside theUnited States, and in February 1993 Zimmermann became the formal target of a criminal investigation by the US Government for "munitionsexport without a license". At the time, cryptosystems using keys larger than40 bitswere considered munitions within the definition of theUS export regulations; PGP has never used keys smaller than 128 bits, so it qualified at that time. Penalties for violation, if found guilty, were substantial. After several years, the investigation of Zimmermann was closed without filing criminal charges against him or anyone else.
Zimmermann challenged these regulations in an imaginative way. In 1995, he published the entiresource codeof PGP in a hardback book,[34]viaMIT Press, which was distributed and sold widely. Anybody wishing to build their own copy of PGP could cut off the covers, separate the pages, and scan them using anOCRprogram (or conceivably enter it as atype-in programif OCR software was not available), creating a set of source code text files. One could then build the application using the freely availableGNU Compiler Collection. PGP would thus be available anywhere in the world. The claimed principle was simple: export ofmunitions—guns, bombs, planes, and software—was (and remains) restricted; but the export ofbooksis protected by theFirst Amendment. The question was never tested in court with respect to PGP. In cases addressing other encryption software, however, two federal appeals courts have established the rule that cryptographic software source code is speech protected by the First Amendment (theNinth Circuit Court of Appealsin theBernstein caseand theSixth Circuit Court of Appealsin theJunger case).
US export regulationsregarding cryptography remain in force, but were liberalized substantially throughout the late 1990s. Since 2000, compliance with the regulations is also much easier. PGP encryption no longer meets the definition of a non-exportable weapon, and can be exported internationally except to seven specific countries and a list of named groups and individuals[35](with whom substantially all US trade is prohibited under various US export controls).
The criminal investigation was dropped in 1996.[36]
During this turmoil, Zimmermann's team worked on a new version of PGP encryption called PGP 3. This new version was to have considerable security improvements, including a new certificate structure that fixed small security flaws in the PGP 2.x certificates as well as permitting a certificate to include separate keys for signing and encryption. Furthermore, the experience with patent and export problems led them to eschew patents entirely. PGP 3 introduced the use of theCAST-128(a.k.a. CAST5) symmetric key algorithm, and theDSAandElGamalasymmetric key algorithms, all of which were unencumbered by patents.
After the Federal criminal investigation ended in 1996, Zimmermann and his team started a company to produce new versions of PGP encryption. They merged with Viacrypt (to whom Zimmermann had sold commercial rights and who hadlicensedRSA directly fromRSADSI), which then changed its name to PGP Incorporated. The newly combined Viacrypt/PGP team started work on new versions of PGP encryption based on the PGP 3 system. Unlike PGP 2, which was an exclusivelycommand lineprogram, PGP 3 was designed from the start as asoftware libraryallowing users to work from a command line or inside aGUIenvironment. The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997.
In December 1997, PGP Inc. was acquired byNetwork Associates, Inc.("NAI"). Zimmermann and the PGP team became NAI employees. NAI was the first company to have a legal export strategy by publishing source code. Under NAI, the PGP team added disk encryption, desktop firewalls, intrusion detection, andIPsecVPNsto the PGP family. After the export regulation liberalizations of 2000 which no longer required publishing of source, NAI stopped releasing source code.[37]
In early 2001, Zimmermann left NAI. He served as Chief Cryptographer forHush Communications, who provide an OpenPGP-based e-mail service,Hushmail. He has also worked with Veridis and other companies. In October 2001, NAI announced that its PGP assets were for sale and that it was suspending further development of PGP encryption. The only remaining asset kept was the PGP E-Business Server (the original PGP Commandline version). In February 2002, NAI canceled all support for PGP products, with the exception of the renamed commandline product.[38][39]
NAI, now known asMcAfee, continued to sell and support the commandline product under the name McAfee E-Business Server until 2013.[40]In 2010,Intel CorporationacquiredMcAfee. In 2013, the McAfee E-Business Server was transferred to Software Diversified Services (SDS), which now sells, supports, and develops it under the name SDS E-Business Server.[40][38]
For the enterprise, Townsend Security currently[when?]offers a commercial version of PGP for theIBM iandIBM zmainframe platforms. Townsend Security partnered with Network Associates in 2000 to create a compatible version of PGP for the IBM i platform. Townsend Security again ported PGP in 2008, this time to the IBM z mainframe. This version of PGP relies on a free z/OS encryption facility, which utilizes hardware acceleration. SDS also offers a commercial version of PGP (SDS E-Business Server) for theIBM zmainframe.
In August 2002, several ex-PGP team members formed a new company,PGP Corporation, and bought the PGP assets (except for the command line version) from NAI. The new company was funded by Rob Theis of Doll Capital Management (DCM) and Terry Garnett of Venrock Associates. PGP Corporation supported existing PGP users and honored NAI's support contracts. Zimmermann served as a special advisor and consultant to PGP Corporation while continuing to run his own consulting company. In 2003, PGP Corporation created a new server-based product called PGP Universal. In mid-2004, PGP Corporation shipped its own command line version called PGP Command Line, which integrated with the other PGP Encryption Platform applications. In 2005, PGP Corporation made its first acquisition: theGermansoftware company Glück & Kanja Technology AG,[41]which became PGP Deutschland AG.[42]In 2010, PGP Corporation acquired Hamburg-based certificate authority TC TrustCenter and its parent company,ChosenSecurity, to form its PGP TrustCenter[43]division.[44]
After the 2002 purchase of NAI's PGP assets, PGP Corporation offered worldwide PGP technical support from its offices inDraper, Utah;Offenbach,Germany; andTokyo,Japan.
On April 29, 2010,Symantec Corp.announced that it would acquire PGP Corporation for $300 million with the intent of integrating it into its Enterprise Security Group.[45]This acquisition was finalized and announced to the public on June 7, 2010. The source code of PGP Desktop 10 is available for peer review.[46]
In May 2018, a bug named EFAIL was discovered in certain implementations of PGP which, from 2003, could reveal the plaintext contents of emails encrypted with it.[47][48] The chosen mitigation for this vulnerability in PGP Desktop is to mandate the use of SEIP-protected packets in the ciphertext, which can render old emails or other encrypted objects no longer decryptable after upgrading to the software version that has the mitigation.[49]
On August 9, 2019,Broadcom Inc.announced they would be acquiring the Enterprise Security software division of Symantec, which includes PGP Corporation.
While originally used primarily for encrypting the contents of e-mail messages and attachments from a desktop client, PGP products have been diversified since 2002 into a set of encryption applications that can be managed by an optional central policy server. PGP encryption applications include e-mails and attachments, digital signatures, full disk encryption, file and folder security, protection for IM sessions, batch file transfer encryption, and protection for files and folders stored on network servers and, more recently, encrypted or signed HTTP request/responses by means of a client-side (Enigform) and a server-side (mod openpgp) module. There is also a WordPress plugin available, called wp-enigform-authentication, that takes advantage of the session management features of Enigform with mod_openpgp.
The PGP Desktop 9.x family includes PGP Desktop Email, PGP Whole Disk Encryption, and PGP NetShare. Additionally, a number of Desktop bundles are also available. Depending on the application, the products feature desktop e-mail, digital signatures, IM security, whole disk encryption, file, and folder security, encryptedself-extracting archives, andsecure shreddingof deleted files. Capabilities are licensed in different ways depending on the features required.
The PGP Universal Server 2.x management console handles centralized deployment, security policy, policy enforcement, key management, and reporting. It is used for automated e-mail encryption in the gateway and manages PGP Desktop 9.x clients. In addition to its localkeyserver, PGP Universal Server works with the PGP public keyserver—called the PGP Global Directory—to find recipient keys. It has the capability of delivering e-mail securely when no recipient key is found via a secure HTTPS browser session.
With PGP Desktop 9.x managed by PGP Universal Server 2.x, first released in 2005, all PGP encryption applications are based on a new proxy-based architecture. These newer versions of PGP software eliminate the use of e-mail plug-ins and insulate the user from changes to other desktop applications. All desktop and server operations are now based on security policies and operate in an automated fashion. The PGP Universal server automates the creation, management, and expiration of keys, sharing these keys among all PGP encryption applications.
The Symantec PGP platform has now undergone a rename. PGP Desktop is now known as Symantec Encryption Desktop (SED), and the PGP Universal Server is now known as Symantec Encryption Management Server (SEMS). The current shipping versions are Symantec Encryption Desktop 10.3.0 (Windows and macOS platforms) and Symantec Encryption Server 3.3.2.
Also available are PGP Command-Line, which enables command line-based encryption and signing of information for storage, transfer, and backup, as well as the PGP Support Package for BlackBerry which enables RIM BlackBerry devices to enjoy sender-to-recipient messaging encryption.
New versions of PGP applications use both OpenPGP and S/MIME, allowing communications with any user of a NIST-specified standard.[50]
Within PGP Inc., there was still concern surrounding patent issues. RSADSI was challenging the continuation of the Viacrypt RSA license to the newly merged firm. The company adopted an informal internal standard that they called "Unencumbered PGP" which would "use no algorithm with licensing difficulties". Because of PGP encryption's importance worldwide, many wanted to write their own software that would interoperate with PGP 5. Zimmermann became convinced that anopen standardfor PGP encryption was critical for them and for the cryptographic community as a whole. In July 1997, PGP Inc. proposed to theIETFthat there be a standard called OpenPGP. They gave the IETF permission to use the name OpenPGP to describe this new standard as well as any program that supported the standard. The IETF accepted the proposal and started the OpenPGPWorking Group.
OpenPGP is on theInternet Standards Trackand is under active development. Many e-mail clients provide OpenPGP-compliant email security as described in RFC 3156. The current specification is RFC 9580 (July 2024), the successor to RFC 4880. RFC 9580 specifies a suite of required algorithms consisting ofX25519,Ed25519,SHA2-256andAES-128. In addition to these algorithms, the standard recommendsX448,Ed448,SHA2-384,SHA2-512andAES-256. Beyond these, many other algorithms are supported.
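For a sense of one of the required public-key algorithms, the sketch below signs and verifies with Ed25519 via the Python `cryptography` package; it exercises only the bare primitive, not OpenPGP's packet framing or key formats.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
message = b"hello, openpgp"

signature = signing_key.sign(message)
# verify() raises InvalidSignature if the message or signature has been tampered with
signing_key.public_key().verify(signature, message)
```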
OpenPGP's encryption can ensure the secure delivery of files and messages, as well as provide verification of who created or sent the message using a process called digital signing. Theopen sourceoffice suiteLibreOfficeimplemented document signing with OpenPGP as of version 5.4.0 on Linux.[52]Using OpenPGP for communication requires participation by both the sender and recipient. OpenPGP can also be used to secure sensitive files when they are stored in vulnerable places like mobile devices or in the cloud.[53]
In late 2023, a schism occurred in the OpenPGP world: IETF's OpenPGP working group decided to choose a "crypto-refresh" update strategy for the RFC 4880 specification, rather than a more gradual "4880bis" path preferred by Werner Koch, author of GnuPG. As a result, Koch took his draft, now abandoned by the workgroup, and forked it into a "LibrePGP" specification.[9]
The Free Software Foundation has developed its own OpenPGP-compliant software suite called GNU Privacy Guard, freely available together with all source code under the GNU General Public License; it is maintained separately from several graphical user interfaces that interact with the GnuPG library for encryption, decryption, and signing functions (see KGPG, Seahorse, MacGPG).[undue weight?–discuss] Several other vendors[specify] have also developed OpenPGP-compliant software.
The development of anopen sourceOpenPGP-compliant library, OpenPGP.js, written inJavaScriptand supported by theHorizon 2020 Framework Programmeof theEuropean Union,[54]has allowed web-based applications to use PGP encryption in the web browser.
PGP keys are supported inMozilla Thunderbird(Built-in in version 78 onwards on PC,[55]and with theOpenKeychainapp as of version 9 on Android[56]),GitHub,[57]andGitLab.[58]
With the advancement of cryptography, parts of PGP and OpenPGP have been criticized for being dated:
In October 2017, theROCA vulnerabilitywas announced, which affects RSA keys generated by buggy Infineon firmware used onYubikey4 tokens, often used with OpenPGP. Many published PGP keys were found to be susceptible.[60]Yubico offers free replacement of affected tokens.[61]
|
https://en.wikipedia.org/wiki/Pretty_Good_Privacy
|
Anobject databaseorobject-oriented databaseis adatabase management systemin which information is represented in the form ofobjectsas used inobject-oriented programming. Object databases are different fromrelational databaseswhich are table-oriented. A third type,object–relational databases, is a hybrid of both approaches.
Object databases have been considered since the early 1980s.[2]
Object-oriented database management systems (OODBMSs), also called ODBMSs (object database management systems), combine database capabilities with object-oriented programming language capabilities.
OODBMSs allow object-oriented programmers to develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Because the database is integrated with the programming language, the programmer can maintain consistency within one environment, in that both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects, by way of contrast, maintain a clearer division between the database model and the application.
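A minimal sketch of this "single model" idea, assuming the usual FileStorage/root/transaction pattern of ZODB, a Python object database that is not among the products discussed here: application objects are stored and navigated directly, with no mapping to tables.

```python
import persistent
import transaction
import ZODB, ZODB.FileStorage

class Account(persistent.Persistent):
    """An ordinary application class; subclassing Persistent lets the database track changes."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

db = ZODB.DB(ZODB.FileStorage.FileStorage("accounts.fs"))
conn = db.open()
root = conn.root()

root["alice"] = Account("alice", balance=100)   # the object is stored as-is, no rows or columns
transaction.commit()

acct = conn.root()["alice"]                     # retrieved later with the same class definition
acct.balance += 25                              # attribute assignment marks the object as changed
transaction.commit()
db.close()
```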
As the usage of web-based technology increases with the implementation of Intranets and extranets, companies have a vested interest in OODBMSs to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilizecomputer-aided design(CAD).[3]
Some object-oriented databases are designed to work well withobject-oriented programming languagessuch asDelphi,Ruby,Python,JavaScript,Perl,Java,C#,Visual Basic .NET,C++,Objective-CandSmalltalk; others such asJADEhave their own programming languages. OODBMSs use exactly the same model as object-oriented programming languages.
Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985.[4]Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporationor MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press.[5]
Early commercial products includedGemstone(Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). Additional commercial products entered the market in the late 1980s through the mid 1990s. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software),Objectivity/DB(Objectivity, Inc.),ObjectStore(Progress Software, acquired from eXcelon which was originallyObject Design, Incorporated), ONTOS (Ontos, Inc., name changed from Ontologic), O2[6](O2Technology, merged with several companies, acquired byInformix, which was in turn acquired byIBM), POET (nowFastObjectsfrom Versant which acquired Poet Software), Versant Object Database (VersantCorporation), VOSS (Logic Arts) andJADE(Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such asInterSystems Caché.
Object database management systems added the concept ofpersistenceto object programming languages. The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System forSmalltalk). For much of the 1990s,C++dominated the commercial object database management market. Vendors addedJavain the late 1990s and more recently,C#.
Starting in 2004, object databases saw a second growth period when open-source object databases emerged that were widely affordable and easy to use, because they are entirely written in OOP languages like Smalltalk, Java, or C#. Examples include Versant's db4o (db4objects), DTS/S1 from Obsidian Dynamics, and Perst (McObject), available under dual open-source and commercial licensing.
Object databases based on persistent programming acquired a niche in application areas such as
engineering andspatial databases,telecommunications, and scientific areas such ashigh energy physics[13]andmolecular biology.[14]
Another group of object databases focuses on embedded use in devices, packaged software, andreal-timesystems.
Most object databases also offer some kind ofquery language, allowing objects to be found using adeclarative programmingapproach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by theODMGwith theObject Query Language, OQL.
Access to data can be faster because an object can be retrieved directly without a search, by followingpointers.
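A conceptual sketch of that pointer-following access in plain Python; in an OODBMS the references below would be persistent object identifiers resolved by the database rather than in-memory pointers, but the navigation style is the same.

```python
class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []          # direct references to Order objects

class Order:
    def __init__(self, customer, total):
        self.customer = customer  # back-reference ("pointer") to the owning Customer
        self.total = total
        customer.orders.append(self)

alice = Customer("alice")
Order(alice, 40)
Order(alice, 60)

# follow references directly: no key lookup or join is needed
print(sum(o.total for o in alice.orders))   # 100
print(alice.orders[0].customer.name)        # 'alice' -- navigate back the same way
```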
Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions.
Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation.
Many object databases, for example Gemstone or VOSS, offer support forversioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support fortriggersand constraints which are the basis ofactive databases.
The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could retrieve a user's account object and efficiently provide extensive related information such as transactions and account entries.
TheObject Data Management Groupwas a consortium of object database and object–relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object–relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to theJava Community Processas a basis for theJava Data Objectsspecification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001.
Many object database ideas were also absorbed intoSQL:1999and have been implemented in varying degrees inobject–relational databaseproducts.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce additional object-oriented query APIs but rather use the OO programming language itself, i.e., Java and .NET, to express queries. As a result,Native Queriesemerged. Similarly, Microsoft announcedLanguage Integrated Query(LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9.
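A rough Python analogue of the native-query idea (LINQ itself is a C#/.NET feature): the filter is an ordinary host-language expression over objects, not a string in a separate query language such as OQL or SQL. A plain list stands in here for a persisted collection.

```python
from dataclasses import dataclass

@dataclass
class Account:
    owner: str
    balance: int

accounts = [Account("alice", 120), Account("bob", 80), Account("carol", 300)]

# the predicate is written directly in the host language and checked by its compiler/interpreter
large = [a for a in accounts if a.balance > 100]
print([a.owner for a in large])   # ['alice', 'carol']
```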
In February 2006, theObject Management Group(OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG planned to create a set of standards that would incorporate advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). The work of the ODBT WG was suspended in March 2009 when, subsequent to the economic turmoil in late 2008, the ODB vendors involved in this effort decided to focus their resources elsewhere.
In January 2007 theWorld Wide Web Consortiumgave final recommendation status to theXQuerylanguage. XQuery usesXMLas its data model. Some of the ideas developed originally for object databases found their way into XQuery, but XQuery is not intrinsically object-oriented. Because of the popularity of XML, XQuery engines compete with object databases as a vehicle for storage of data that is too complex or variable to hold conveniently in a relational database. XQuery also allows modules to be written to provide encapsulation features that have been provided by Object-Oriented systems.
XQuery v1andXPath v2and later are powerful and are available in both open source and libre (FOSS) software,[15][16][17]as well as in commercial systems. They are easy to learn and use, and very powerful and fast. They are not relational and XQuery is not based on SQL (although one of the people who designed XQuery also co-invented SQL). But they are also not object-oriented, in the programming sense: XQuery does not use encapsulation with hiding, implicit dispatch, and classes and methods. XQuery databases generally use XML and JSON as an interchange format, although other formats are used.
Since the early 2000sJSONhas gained community adoption and popularity in applications where developers are in control of the data format.JSONiq, a query-analog of XQuery for JSON (sharing XQuery's core expressions and operations), demonstrated the functional equivalence of the JSON and XML formats for data-oriented information. In this context, the main strategy of OODBMS maintainers was to retrofit JSON to their databases (by using it as the internal data type).
In January 2016, with the PostgreSQL 9.5 release,[18] PostgreSQL became the first FOSS OODBMS to offer an efficient JSON internal datatype (JSONB) with a complete set of functions and operations for all basic relational and non-relational manipulations.
An object database stores complex data and relationships between data directly, without mapping to relationalrowsandcolumns, and this makes them suitable for applications dealing with very complex data.[19]Objects have a many-to-many relationship and are accessed by the use of pointers. Pointers are linked to objects to establish relationships. Another benefit of an OODBMS is that it can be programmed with small procedural differences without affecting the entire system.[20]
|
https://en.wikipedia.org/wiki/Object_database
|
Inmathematics,racksandquandlesare sets withbinary operationssatisfying axioms analogous to theReidemeister movesused to manipulateknotdiagrams.
While mainly used to obtain invariants of knots, they can be viewed asalgebraicconstructions in their own right. In particular, the definition of a quandle axiomatizes the properties ofconjugationin agroup.
In 1942,Mituhisa Takasaki[ja]introduced an algebraic structure which he called akei(圭),[1][2]which would later come to be known as an involutive quandle.[3]His motivation was to find a nonassociative algebraic structure to capture the notion of areflectionin the context offinite geometry.[2][3]The idea was rediscovered and generalized in an unpublished 1959 correspondence betweenJohn ConwayandGavin Wraith, who at the time were undergraduate students at theUniversity of Cambridge. It is here that the modern definitions of quandles and of racks first appear. Wraith had become interested in these structures (which he initially dubbedsequentials) while at school.[4]Conway renamed themwracks, partly as a pun on his colleague's name, and partly because they arise as the remnants (or 'wrack and ruin') of agroupwhen one discards the multiplicative structure and considers only theconjugationstructure. The spelling 'rack' has now become prevalent.
These constructs surfaced again in the 1980s: in a 1982 paper byDavid Joyce[5](where the termquandle, an arbitrary nonsense word, was coined),[6]in a 1982 paper bySergei Matveev(under the namedistributivegroupoids)[7]and in a 1986 conference paper byEgbert Brieskorn(where they were calledautomorphicsets).[8]A detailed overview of racks and their applications in knot theory may be found in the paper byColin RourkeandRoger Fenn.[9]
A rack may be defined as a set $R$ with a binary operation $\triangleleft$ such that for every $a, b, c \in R$ the self-distributive law holds:

$a \triangleleft (b \triangleleft c) = (a \triangleleft b) \triangleleft (a \triangleleft c)$
and for every $a, b \in R$, there exists a unique $c \in R$ such that

$a \triangleleft c = b.$
This definition, while terse and commonly used, is suboptimal for certain purposes because it contains an existential quantifier which is not really necessary. To avoid this, we may write the unique $c \in R$ such that $a \triangleleft c = b$ as $b \triangleright a$. We then have

$a \triangleleft c = b \iff c = b \triangleright a,$
and thus

$a \triangleleft (b \triangleright a) = b,$
and

$(a \triangleleft b) \triangleright a = b.$
Using this idea, a rack may be equivalently defined as a set $R$ with two binary operations $\triangleleft$ and $\triangleright$ such that for all $a, b, c \in R$:

1. $a \triangleleft (b \triangleleft c) = (a \triangleleft b) \triangleleft (a \triangleleft c)$ (left self-distributive law)
2. $(c \triangleright b) \triangleright a = (c \triangleright a) \triangleright (b \triangleright a)$ (right self-distributive law)
3. $(a \triangleleft b) \triangleright a = b$
4. $a \triangleleft (b \triangleright a) = b$
It is convenient to say that the element $a \in R$ is acting from the left in the expression $a \triangleleft b$, and acting from the right in the expression $b \triangleright a$. The third and fourth rack axioms then say that these left and right actions are inverses of each other. Using this, we can eliminate either one of these actions from the definition of rack. If we eliminate the right action and keep the left one, we obtain the terse definition given initially.
Many different conventions are used in the literature on racks and quandles. For example, many authors prefer to work with just the right action. Furthermore, the use of the symbols $\triangleleft$ and $\triangleright$ is by no means universal: many authors use exponential notation
and
while many others write
Yet another equivalent definition of a rack is that it is a set where each element acts on the left and right as automorphisms of the rack, with the left action being the inverse of the right one. In this definition, the fact that each element acts as automorphisms encodes the left and right self-distributivity laws, and also these laws:

$a \triangleleft (b \triangleright c) = (a \triangleleft b) \triangleright (a \triangleleft c)$
$(b \triangleleft c) \triangleright a = (b \triangleright a) \triangleleft (c \triangleright a)$
which are consequences of the definition(s) given earlier.
A quandle is defined as an idempotent rack, $Q$, such that for all $a \in Q$

$a \triangleleft a = a,$
or equivalently

$a \triangleright a = a.$
Every group gives a quandle where the operations come from conjugation:

$a \triangleleft b = a b a^{-1}$
$b \triangleright a = a^{-1} b a$
In fact, every equational law satisfied byconjugationin a group follows from the quandle axioms. So, one can think of a quandle as what is left of a group when we forget multiplication, the identity, and inverses, and only remember the operation of conjugation.
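These axioms can be checked mechanically for a small example. The sketch below verifies idempotence, left self-distributivity, and unique solvability for the conjugation quandle of the symmetric group S3, with permutations represented as tuples.

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations act on the indices 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def tri(a, b):
    # quandle operation coming from conjugation: a ◃ b = a b a^{-1}
    return compose(compose(a, b), inverse(a))

G = list(permutations(range(3)))  # the symmetric group S3

assert all(tri(a, a) == a for a in G)                      # idempotence: a ◃ a = a
assert all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))  # left self-distributivity
           for a in G for b in G for c in G)
assert all(len({tri(a, c) for c in G}) == len(G) for a in G)  # a ◃ c = b has a unique solution c
print("conjugation on S3 satisfies the quandle axioms")
```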
Everytame knotinthree-dimensionalEuclidean spacehas a 'fundamental quandle'. To define this, one can note that thefundamental groupof the knot complement, orknot group, has a presentation (theWirtinger presentation) in which the relations only involve conjugation. So, this presentation can also be used as a presentation of a quandle. The fundamental quandle is a very powerful invariant of knots. In particular, if two knots haveisomorphicfundamental quandles then there is ahomeomorphismof three-dimensional Euclidean space, which may beorientation reversing, taking one knot to the other.
Less powerful but more easily computable invariants of knots may be obtained by counting the homomorphisms from the knot quandle to a fixed quandle $Q$. Since the Wirtinger presentation has one generator for each strand in a knot diagram, these invariants can be computed by counting ways of labelling each strand by an element of $Q$, subject to certain constraints. More sophisticated invariants of this sort can be constructed with the help of quandle cohomology.
The Alexander quandles are also important, since they can be used to compute the Alexander polynomial of a knot. Let $A$ be a module over the ring $\mathbb{Z}[t, t^{-1}]$ of Laurent polynomials in one variable. Then the Alexander quandle is $A$ made into a quandle with the left action given by

$a \triangleleft b = t b + (1 - t) a.$
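A quick numerical check of the quandle axioms for a small Alexander quandle, taking the module to be Z/5Z and t = 2 (an illustrative choice; any invertible t works):

```python
MOD, T = 5, 2   # the module Z/5Z with t = 2, so 1 - t ≡ 4 (mod 5)

def tri(a, b):
    # Alexander quandle operation: a ◃ b = t·b + (1 - t)·a
    return (T * b + (1 - T) * a) % MOD

elems = range(MOD)
assert all(tri(a, a) == a for a in elems)                          # idempotence
assert all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))          # left self-distributivity
           for a in elems for b in elems for c in elems)
assert all(len({tri(a, c) for c in elems}) == MOD for a in elems)  # unique solvability
print("Z/5 with t = 2 is an Alexander quandle")
```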
Racks are a useful generalization of quandles in topology, since while quandles can represent knots on a round linear object (such as rope or a thread), racks can represent ribbons, which may be twisted as well as knotted.
A quandle $Q$ is said to be involutory if for all $a, b \in Q$,

$a \triangleleft (a \triangleleft b) = b,$
or equivalently,

$(b \triangleright a) \triangleright a = b.$
Any symmetric space gives an involutory quandle, where $a \triangleleft b$ is the result of 'reflecting $b$ through $a$'.
|
https://en.wikipedia.org/wiki/Racks_and_quandles
|
Heterodox
Rational expectationsis an economic theory that seeks to infer themacroeconomicconsequences of individuals' decisions based on all available knowledge. It assumes that individuals' actions are based on the best available economic theory and information.
The concept of rational expectations was first introduced by John F. Muth in his 1961 paper "Rational Expectations and the Theory of Price Movements". Robert Lucas and Thomas Sargent further developed the theory in the 1970s and 1980s; their contributions became seminal works on the topic and were widely used in microeconomics.[1]
Significant Findings
Muth’swork introduces the concept of rational expectations and discusses its implications for economic theory. He argues that individuals are rational and use all available information to make unbiased, informed predictions about the future. This means that individuals do not make systematic errors in their predictions and that their predictions are not biased by past errors. Muth’s paper also discusses the implication of rational expectations for economic theory. One key implication is that government policies, such as changes in monetary or fiscal policy, may not be as effective if individuals’ expectations are not considered. For example, if individuals expect inflation to increase, they may anticipate that the central bank will raise interest rates to combat inflation, which could lead to higher borrowing costs and slower economic growth. Similarly, if individuals expect a recession, they may reduce their spending and investment, which could lead to aself-fulfilling prophecy.[2]
Lucas’ paper “Expectations and the Neutrality of Money” expands on Muth's work and sheds light on the relationship between rational expectations and monetary policy. The paper argues that when individuals hold rational expectations, changes in the money supply do not have real effects on the economy and the neutrality of money holds. Lucas presents a theoretical model that incorporates rational expectations into an analysis of the effects of changes in the money supply. The model suggests that individuals adjust their expectations in response to changes in the money supply, which eliminates the effect on real variables such as output and employment. He argues that a stable monetary policy that is consistent with individuals' rational expectations will be more effective in promoting economic stability than attempts to manipulate the money supply.[3]
In 1973, Thomas J. Sargent published the article "Rational Expectations, the Real Rate of Interest, and the Natural Rate of Unemployment", which was an important contribution to the development and application of the concept of rational expectations in economic theory and policy. By assuming individuals are forward-looking and rational, Sargent argues that rational expectations can help explain fluctuations in key economic variables such as the real interest rate and the natural rate of unemployment. He also suggests that the concept of the natural rate of unemployment can be used to help policymakers set macroeconomic policy. This concept suggests that there is a trade-off between unemployment and inflation in the short run, but in the long run, the economy will return to the natural rate of unemployment, which is determined by structural factors such as the skills of the labour force and the efficiency of the labour market. Sargent argues that policymakers should take this concept into account when setting macroeconomic policy, as policies that try to push unemployment below the natural rate will only lead to higher inflation in the long run.[4]
The key idea of rational expectations is that individuals make decisions based on all available information, including their own expectations about future events. This implies that individuals are rational and use all available information to make decisions. Another important idea is that individuals adjust their expectations in response to new information. In this way, individuals are assumed to be forward-looking and able to adapt to changing circumstances. They will learn from past trends and experiences to make their best guess of the future.[1]
It is assumed that individuals' predicted outcomes do not differ systematically from the market equilibrium, since individuals do not make systematic errors when predicting the future.
In an economic model, this is typically modelled by assuming that the expected value of a variable is equal to the expected value predicted by the model. For example, suppose that P is the equilibrium price in a simple market, determined by supply and demand. The theory of rational expectations implies that the actual price will only deviate from the expectation if there is an 'information shock' caused by information unforeseeable at the time expectations were formed. In other words, ex ante the price is anticipated to equal its rational expectation: $P = P^* + \epsilon$
where $P^*$ is the rational expectation and $\epsilon$ is the random error term, which has an expected value of zero and is independent of $P^*$.
If rational expectations are applied to Phillips curve analysis, the distinction between the long run and the short run disappears: there is no exploitable Phillips curve, that is, no trade-off between the inflation rate and the unemployment rate that policy can utilize.
The mathematical derivation is as follows:
The rational expectation is consistent with the objective mathematical expectation:
$\dot{P}_t = E\dot{P}_t + \varepsilon_t$
Mathematical derivation (1)
We denote the unemployment rate by $u_t$. Assuming that the actual process is known, the rate of inflation ($\dot{P}_t$) depends on previous monetary changes ($\dot{M}_{t-1}$) and changes in short-term variables such as $X$ (for example, oil prices):
(1) $\dot{P}_t = q\dot{M}_{t-1} + z\dot{X}_{t-1} + \varepsilon_t$
Taking expected values,
(2) $E\dot{P}_t = q\dot{M}_{t-1} + z\dot{X}_{t-1}$
On the other hand, inflation rate is related to unemployment by the Phillips curve:
(3) $\dot{P}_t = \alpha - \beta u_t + \gamma E_{t-1}(\dot{P}_t)$, with $\gamma = 1$
Equating (1) and (3):
(4) $\alpha - \beta u_t + q\dot{M}_{t-1} + z\dot{X}_{t-1} = q\dot{M}_{t-1} + z\dot{X}_{t-1} + \varepsilon_t$
Cancelling terms and rearrangement gives
(5) $u_t = \dfrac{\alpha - \varepsilon_t}{\beta}$
Thus, even in the short run, there is no exploitable trade-off between inflation and unemployment: completely unpredictable random shocks are the only reason the unemployment rate deviates from the natural rate.
Mathematical derivation (2)
Even if the actual rate of inflation is dependent on current monetary changes, the public can make rational expectations as long as they know how monetary policy is being decided:
(1) $\dot{P}_t = q\dot{M}_t + z\dot{X}_{t-1} + \varepsilon_t$
Denote the change due to monetary policy by $\mu_t$.
(2) $\dot{M}_t = g\dot{M}_{t-1} + \mu_t$
We then substitute (2) into (1):
(3) $\dot{P}_t = qg\dot{M}_{t-1} + z\dot{X}_{t-1} + q\mu_t + \varepsilon_t$
Taking the expected value at time $t-1$,
(4) $E_{t-1}\dot{P}_t = qg\dot{M}_{t-1} + z\dot{X}_{t-1}$
Using the Phillips curve relation, cancelling terms on both sides and rearrangement gives
(5) $u_t = \dfrac{\alpha - q\mu_t - \varepsilon_t}{\beta}$
The conclusion is essentially the same: random shocks that are completely unpredictable are the only thing that can cause the unemployment rate to deviate from the natural rate.
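A small numerical sketch of this conclusion (illustrative Python, not from the original text; parameter values and shock sizes are arbitrary): the unemployment rate responds only to the unanticipated components $\mu_t$ and $\varepsilon_t$, while the anticipated part of money growth leaves it at the natural rate.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
alpha, beta, q = 6.0, 2.0, 1.0        # arbitrary illustrative parameters

eps = rng.normal(0.0, 0.1, T)         # price-level surprises
mu = rng.normal(0.0, 0.1, T)          # unanticipated monetary policy shocks

# Equation (5) of the second derivation: u_t = (alpha - q*mu_t - eps_t) / beta
u = (alpha - q * mu - eps) / beta

natural_rate = alpha / beta
print(natural_rate)                    # 3.0
print(u.mean())                        # close to 3.0: deviations average out
print(np.corrcoef(u, mu)[0, 1])        # negative: only surprises move unemployment
```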
Rational expectations theories were developed in response to perceived flaws in theories based onadaptive expectations. Under adaptive expectations, expectations of the future value of an economic variable are based on past values. For example, it assumes that individuals predict inflation by looking at historical inflation data. Under adaptive expectations, if the economy suffers from a prolonged period of rising inflation, people are assumed to always underestimate inflation. Many economists suggested that it was an unrealistic and irrational assumption, as they believe that rational individuals will learn from past experiences and trends and adjust their predictions accordingly.
The rational expectations hypothesis has been used to support conclusions about economic policymaking. An example is the policy ineffectiveness proposition developed by Thomas Sargent and Neil Wallace. If the Federal Reserve attempts to lower unemployment through expansionary monetary policy, economic agents will anticipate the effects of the change of policy and raise their expectations of future inflation accordingly. This counteracts the expansionary effect of the increased money supply, so the policy raises the inflation rate without increasing employment.
If agents do not form rational expectations, or if prices are not completely flexible, then even discretionary, completely anticipated economic policy actions can trigger real changes.[5]
While the rational expectations theory has been widely influential in macroeconomic analysis, it has also been subject to criticism:
Unrealistic assumptions: The theory implies that individuals operate at a fixed point where their expectations about aggregate economic variables are, on average, correct. This is unlikely to be the case, due to the limited information available and human error.[6]
Limited empirical support: While there is some evidence that individuals do incorporate expectations into their decision-making, it is unclear whether they do so in the way predicted by the rational expectations theory.[6]
Misspecification of models: The rational expectations theory assumes that individuals have a common understanding of the model used to make predictions. However, if the model is misspecified, this can lead to incorrect predictions.[7]
Inability to explain certain phenomena:The theory is also criticized for its inability to explain certain phenomena, such as 'irrational' bubbles and crashes in financial markets.[8]
Lack of attention to distributional effects:Critics argue that the rational expectations theory focuses too much on aggregate outcomes and does not pay enough attention to the distributional effects of economic policies.[6]
|
https://en.wikipedia.org/wiki/Rational_expectations
|
Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory.[1] It is a popular algorithm for parameter estimation in machine learning.[2][3] The algorithm's target problem is to minimize $f(\mathbf{x})$ over unconstrained values of the real vector $\mathbf{x}$, where $f$ is a differentiable scalar function.
Like the original BFGS, L-BFGS uses an estimate of the inverse Hessian matrix to steer its search through variable space, but where BFGS stores a dense $n \times n$ approximation to the inverse Hessian ($n$ being the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse Hessian $H_k$, L-BFGS maintains a history of the past $m$ updates of the position $x$ and gradient $\nabla f(x)$, where generally the history size $m$ can be small (often $m < 10$). These updates are used to implicitly do operations requiring the $H_k$-vector product.
The algorithm starts with an initial estimate of the optimal value, $\mathbf{x}_0$, and proceeds iteratively to refine that estimate with a sequence of better estimates $\mathbf{x}_1, \mathbf{x}_2, \ldots$. The derivatives of the function, $g_k := \nabla f(\mathbf{x}_k)$, are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) of $f(\mathbf{x})$.
L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix–vector multiplication $d_k = -H_k g_k$ is carried out, where $d_k$ is the approximate Newton's direction, $g_k$ is the current gradient, and $H_k$ is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two-loop recursion."[4][5]
We take as given $x_k$, the position at the $k$-th iteration, and $g_k \equiv \nabla f(x_k)$, where $f$ is the function being minimized and all vectors are column vectors. We also assume that we have stored the last $m$ updates of the form $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.
We define $\rho_k = \frac{1}{y_k^\top s_k}$, and $H_k^0$ will be the 'initial' approximation of the inverse Hessian that our estimate at iteration $k$ begins with.
The algorithm is based on the BFGS recursion for the inverse Hessian, $H_{k+1} = (I - \rho_k s_k y_k^\top) H_k (I - \rho_k y_k s_k^\top) + \rho_k s_k s_k^\top$.
For a fixed $k$ we define a sequence of vectors $q_{k-m}, \ldots, q_k$ by $q_k := g_k$ and $q_i := (I - \rho_i y_i s_i^\top) q_{i+1}$. Then a recursive algorithm for calculating $q_i$ from $q_{i+1}$ is to define $\alpha_i := \rho_i s_i^\top q_{i+1}$ and $q_i = q_{i+1} - \alpha_i y_i$. We also define another sequence of vectors $z_{k-m}, \ldots, z_k$ by $z_i := H_i q_i$. There is another recursive algorithm for calculating these vectors: define $z_{k-m} = H_k^0 q_{k-m}$, and then recursively define $\beta_i := \rho_i y_i^\top z_i$ and $z_{i+1} = z_i + (\alpha_i - \beta_i) s_i$. The value of $z_k$ is then our ascent direction.
Thus we can compute the descent direction as follows:
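The original pseudocode listing is not reproduced in this text; as a stand-in, here is a minimal NumPy sketch of the two-loop recursion (the function name and argument conventions are choices made for this illustration, not part of the source): `s_list` and `y_list` hold the stored pairs $s_i$ and $y_i$, oldest first, and `gamma` plays the role of the scaling used for $H_k^0$.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list, gamma=1.0):
    """Compute the L-BFGS search direction d = -H_k * grad.

    s_list, y_list: stored update pairs s_i = x_{i+1} - x_i and
    y_i = g_{i+1} - g_i, ordered from oldest to newest.
    gamma: scaling of the initial inverse Hessian approximation H_k^0 = gamma * I.
    """
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []

    # First loop: newest to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    alphas.reverse()  # restore oldest-to-newest order

    # Apply the initial inverse Hessian approximation H_k^0 = gamma * I.
    z = gamma * q

    # Second loop: oldest to newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, alphas):
        beta = rho * np.dot(y, z)
        z += (alpha - beta) * s

    return -z  # descent direction for a minimization problem
```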
This formulation gives the search direction for the minimization problem, i.e., $z = -H_k g_k$. For maximization problems, one should thus take $-z$ instead. Note that the initial approximate inverse Hessian $H_k^0$ is chosen as a diagonal matrix or even a multiple of the identity matrix, since this is numerically efficient.
The initial matrix is commonly taken to be a scaled identity, $H_k^0 = \gamma_k I$ with $\gamma_k = \frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}}$; this scaling ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. A Wolfe line search is used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijo backtracking line search, which cannot guarantee that the curvature condition $y_k^\top s_k > 0$ will be satisfied by the chosen step, since a step length greater than 1 may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update when $y_k^\top s_k$ is negative or too close to zero, but this approach is not generally recommended, since the updates may be skipped so often that the Hessian approximation $H_k$ fails to capture important curvature information. Some solvers employ a so-called damped (L-)BFGS update, which modifies the quantities $s_k$ and $y_k$ in order to satisfy the curvature condition.
The two-loop recursion formula is widely used by unconstrained optimizers due to its efficiency in multiplying by the inverse Hessian. However, it does not allow for the explicit formation of either the direct or inverse Hessian and is incompatible with non-box constraints. An alternative approach is thecompact representation, which involves a low-rank representation for the direct and/or inverse Hessian.[6]This represents the Hessian as a sum of a diagonal matrix and a low-rank update. Such a representation enables the use of L-BFGS in constrained settings, for example, as part of the SQP method.
L-BFGS has been called "the algorithm of choice" for fittinglog-linear (MaxEnt) modelsandconditional random fieldswithℓ2{\displaystyle \ell _{2}}-regularization.[2][3]
Since BFGS (and hence L-BFGS) is designed to minimizesmoothfunctions withoutconstraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiablecomponents or constraints. A popular class of modifications are called active-set methods, based on the concept of theactive set. The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified.
TheL-BFGS-Balgorithm extends L-BFGS to handle simple box constraints (aka bound constraints) on variables; that is, constraints of the formli≤xi≤uiwherelianduiare per-variable constant lower and upper bounds, respectively (for eachxi, either or both bounds may be omitted).[7][8]The method works by identifying fixed and free variables at every step (using a simple gradient method), and then using the L-BFGS method on the free variables only to get higher accuracy, and then repeating the process.
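For instance, SciPy exposes an L-BFGS-B implementation through `scipy.optimize.minimize`; a small usage sketch follows (the objective, bounds, and option values are arbitrary illustrations, not from the source).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: a simple quadratic bowl with minimum at (3, -1).
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])

# Box constraints l_i <= x_i <= u_i; the bound on x[0] is deliberately active.
result = minimize(f, x0=np.zeros(2), jac=grad_f, method="L-BFGS-B",
                  bounds=[(0.0, 2.0), (None, None)],
                  options={"maxcor": 10})  # maxcor = number of stored updates m
print(result.x)  # approximately [2.0, -1.0]
```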
Orthant-wise limited-memory quasi-Newton (OWL-QN) is an L-BFGS variant for fitting $\ell_1$-regularized models, exploiting the inherent sparsity of such models.[3] It minimizes functions of the form $f(\vec{x}) = g(\vec{x}) + C\|\vec{x}\|_1$,
where $g$ is a differentiable convex loss function. The method is an active-set type method: at each iterate, it estimates the sign of each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable $\|\vec{x}\|_1$ term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process.
Schraudolphet al.present anonlineapproximation to both BFGS and L-BFGS.[9]Similar tostochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS has a global almost sure convergence[10]while the online approximation of BFGS (O-BFGS) is not necessarily convergent.[11]
Notable open source implementations include:
Notable non open source implementations include:
|
https://en.wikipedia.org/wiki/L-BFGS
|
FOAF(an acronym offriend of a friend) is amachine-readableontologydescribingpersons, their activities and their relations to other people and objects. Anyone can use FOAF to describe themselves. FOAF allows groups of people to describesocial networkswithout the need for a centralised database.
FOAF is a descriptive vocabulary expressed using theResource Description Framework(RDF) and theWeb Ontology Language(OWL). Computers may use these FOAF profiles to find, for example, all people living in Europe, or to list all people both you and a friend of yours know.[1][2]This is accomplished by defining relationships between people. Each profile has a unique identifier (such as the person'se-mail addresses, internationaltelephone number,Facebookaccount name, aJabber ID, or aURIof the homepage or weblog of the person), which is used when defining these relationships.
The FOAF project, which defines and extends the vocabulary of a FOAF profile, was started in 2000 by Libby Miller and Dan Brickley. It can be considered the firstSocial Semantic Webapplication,[citation needed]in that it combinesRDFtechnology with 'social web' concerns.[clarification needed]
Tim Berners-Lee, in a 2007 essay,[3]redefined thesemantic webconcept into theGiant Global Graph(GGG), where relationships transcend networks and documents. He considers the GGG to be on equal ground with theInternetand theWorld Wide Web, stating that "I express my network in a FOAF file, and that is a start of the revolution."
FOAF is one of the key components of theWebIDspecifications, in particular for the WebID+TLS protocol, which was formerly known as FOAF+SSL.
Although it is a relatively simple use-case and standard, FOAF has had limited adoption on the web. For example, theLive JournalandDeadJournalblogging sites support FOAF profiles for all their members,[4]My Operacommunity supported FOAF profiles for members as well as groups. FOAF support is present onIdenti.ca,FriendFeed,WordPressandTypePadservices.[5]
Yandexblog search platform supports search over FOAF profile information.[6]Prominent client-side FOAF support was available inSafari[7]web browser before RSS support was removed in Safari 6 and in the Semantic Radar[8]plugin forFirefoxbrowser.Semantic MediaWiki, thesemantic annotationandlinked dataextension ofMediaWikisupports mapping properties to external ontologies, including FOAF which is enabled by default.
There are also modules or plugins to support FOAF profiles or FOAF+SSL authorization for programming languages,[9][10]as well as forcontent management systems.[11]
The following FOAF profile (written inTurtleformat) states that James Wales is the name of the person described here. His e-mail address, homepage and depiction areweb resources, which means that each can be described using RDF as well. He has Wikimedia as an interest, and knows Angela Beesley (which is the name of a 'Person' resource).
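The Turtle listing itself is not reproduced in this text. As an illustration only, the following Python sketch uses the rdflib library to build a profile containing the statements described above and serialize it to Turtle; the e-mail address, homepage, depiction and namespace URIs are placeholders invented for this sketch, not values from the original example.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")   # placeholder namespace for this sketch

g = Graph()
me = EX["JamesWales"]                   # the person being described (placeholder URI)
angela = EX["AngelaBeesley"]            # a known person (placeholder URI)

g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("James Wales")))
g.add((me, FOAF.mbox, URIRef("mailto:jwales@example.org")))           # placeholder
g.add((me, FOAF.homepage, URIRef("http://example.org/~jwales/")))     # placeholder
g.add((me, FOAF.depiction, URIRef("http://example.org/jwales.jpg")))  # placeholder
g.add((me, FOAF.interest, URIRef("http://www.wikimedia.org/")))
g.add((angela, RDF.type, FOAF.Person))
g.add((angela, FOAF.name, Literal("Angela Beesley")))
g.add((me, FOAF.knows, angela))

print(g.serialize(format="turtle"))
```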
|
https://en.wikipedia.org/wiki/FOAF_(software)
|
Incryptographyandcomputer science, ahash treeorMerkle treeis atreein which every "leaf"nodeis labelled with thecryptographic hashof a data block, and every node that is not a leaf (called abranch,inner node, orinode) is labelled with the cryptographic hash of the labels of its child nodes. A hash tree allows efficient and secure verification of the contents of a largedata structure. A hash tree is a generalization of ahash listand ahash chain.
Demonstrating that a leaf node is a part of a given binary hash tree requires computing a number of hashes proportional to thelogarithmof the number of leaf nodes in the tree.[1]Conversely, in a hash list, the number is proportional to the number of leaf nodes itself. A Merkle tree is therefore an efficient example of acryptographic commitment scheme, in which the root of the tree is seen as a commitment and leaf nodes may be revealed and proven to be part of the original commitment.[2]
The concept of a hash tree is named afterRalph Merkle, who patented it in 1979.[3][4]
Hash trees can be used to verify any kind of data stored, handled and transferred in and between computers. They can help ensure that datablocksreceived from other peers in apeer-to-peer networkare received undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks.
Hash trees are used in:
Suggestions have been made to use hash trees intrusted computingsystems.[11]
The initial Bitcoin implementation of Merkle trees bySatoshi Nakamotoapplies the compression step of the hash function to an excessive degree, which is mitigated by using Fast Merkle Trees.[12]
A hash tree is atreeofhashesin which the leaves (i.e., leaf nodes, sometimes also called "leafs") are hashes of datablocksin, for instance, a file or set of files. Nodes farther up in the tree are the hashes of their respective children. For example, in the above picturehash 0is the result of hashing theconcatenationofhash 0-0andhash 0-1. That is,hash 0=hash(hash 0-0+hash 0-1) where "+" denotes concatenation.
Most hash tree implementations are binary (two child nodes under each node) but they can just as well use many more child nodes under each node.
Usually, acryptographic hash functionsuch asSHA-2is used for the hashing. If the hash tree only needs to protect against unintentional damage, unsecuredchecksumssuch asCRCscan be used.
In the top of a hash tree there is atop hash(orroot hashormaster hash). Before downloading a file on aP2P network, in most cases the top hash is acquired from a trusted source, for instance a friend or a web site that is known to have good recommendations of files to download. When the top hash is available, the hash tree can be received from any non-trusted source, like any peer in the P2P network. Then, the received hash tree is checked against the trusted top hash, and if the hash tree is damaged or fake, another hash tree from another source will be tried until the program finds one that matches the top hash.[13]
The main difference from ahash listis that one branch of the hash tree can be downloaded at a time and the integrity of each branch can be checked immediately, even though the whole tree is not available yet. For example, in the picture, the integrity ofdata block L2can be verified immediately if the tree already containshash 0-0andhash 1by hashing the data block and iteratively combining the result withhash 0-0and thenhash 1and finally comparing the result with thetop hash. Similarly, the integrity ofdata block L3can be verified if the tree already hashash 1-1andhash 0. This can be an advantage since it is efficient to split files up in very small data blocks so that only small blocks have to be re-downloaded if they get damaged. If the hashed file is big, such a hash list or hash chain becomes fairly big. But if it is a tree, one small branch can be downloaded quickly, the integrity of the branch can be checked, and then the downloading of data blocks can start.[citation needed]
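As an illustration of the scheme just described, here is a minimal Python sketch (using SHA-256 and the same four-leaf layout as the example above; block contents are arbitrary) that computes a root hash and then verifies data block L2 using only hash 0-0, hash 1, and the trusted top hash.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Four data blocks L1..L4, as in the example above (contents are arbitrary).
blocks = [b"L1", b"L2", b"L3", b"L4"]

# Leaf hashes.
hash_0_0, hash_0_1, hash_1_0, hash_1_1 = (h(b) for b in blocks)

# Inner nodes: each is the hash of the concatenation of its children.
hash_0 = h(hash_0_0 + hash_0_1)
hash_1 = h(hash_1_0 + hash_1_1)
top_hash = h(hash_0 + hash_1)

def verify_L2(block: bytes, sibling_0_0: bytes, sibling_1: bytes, trusted_top: bytes) -> bool:
    """Verify block L2 given only its audit path (hash 0-0 and hash 1)."""
    leaf = h(block)                       # recompute hash 0-1
    inner = h(sibling_0_0 + leaf)         # recompute hash 0
    return h(inner + sibling_1) == trusted_top

assert verify_L2(b"L2", hash_0_0, hash_1, top_hash)
```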
The Merkle hash root does not indicate the tree depth, enabling asecond-preimage attackin which an attacker creates a document other than the original that has the same Merkle hash root. For the example above, an attacker can create a new document containing two data blocks, where the first ishash 0-0+hash 0-1, and the second ishash 1-0+hash 1-1.[14][15]
One simple fix is defined inCertificate Transparency: when computing leaf node hashes, a 0x00 byte is prepended to the hash data, while 0x01 is prepended when computing internal node hashes.[13]Limiting the hash tree size is a prerequisite of someformal security proofs, and helps in making some proofs tighter. Some implementations limit the tree depth using hash tree depth prefixes before hashes, so any extracted hash chain is defined to be valid only if the prefix decreases at each step and is still positive when the leaf is reached.
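A short sketch of the Certificate Transparency fix mentioned above: leaf hashes and internal-node hashes are domain-separated by prepending 0x00 and 0x01 respectively, so an inner node can no longer be reinterpreted as a leaf.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix marks a leaf, as in Certificate Transparency.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix marks an internal node.
    return hashlib.sha256(b"\x01" + left + right).digest()

# A leaf whose content happens to equal the concatenation of two child hashes
# no longer collides with the corresponding internal node.
a, b = leaf_hash(b"block A"), leaf_hash(b"block B")
assert node_hash(a, b) != leaf_hash(a + b)
```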
The Tiger tree hash is a widely used form of hash tree. It uses a binary hash tree (two child nodes under each node), usually has a data block size of 1024bytesand uses theTiger hash.[16]
Tiger tree hashes are used inGnutella,[17]Gnutella2, andDirect ConnectP2Pfile sharing protocols[18]and infile sharingapplications such asPhex,[19]BearShare,LimeWire,Shareaza,DC++[20]andgtk-gnutella.[21]
|
https://en.wikipedia.org/wiki/Merkle_tree
|
Inanalytic philosophyandcomputer science,referential transparencyandreferential opacityare properties of linguistic constructions,[a]and by extension of languages. A linguistic construction is calledreferentially transparentwhen for any expression built from it,replacinga subexpression with another one thatdenotesthe same value[b]does not change the value of the expression.[1][2]Otherwise, it is calledreferentially opaque. Each expression built from a referentially opaque linguistic construction states something about a subexpression, whereas each expression built from a referentially transparent linguistic construction states something not about a subexpression, meaning that the subexpressions are ‘transparent’ to the expression, acting merely as ‘references’ to something else.[3]For example, the linguistic construction ‘_ was wise’ is referentially transparent (e.g.,Socrates was wiseis equivalent toThe founder of Western philosophy was wise) but ‘_ said _’ is referentially opaque (e.g.,Xenophon said ‘Socrates was wise’is not equivalent toXenophon said ‘The founder of Western philosophy was wise’).
Referential transparency, in programming languages, depends on semantic equivalences among denotations of expressions, or oncontextual equivalenceof expressions themselves. That is, referential transparency depends on the semantics of the language. So, bothdeclarative languagesandimperative languagescan have referentially transparent positions, referentially opaque positions, or (usually) both, according to the semantics they are given.
The importance of referentially transparent positions is that they allow theprogrammerand thecompilerto reason about program behavior as arewrite systemat those positions. This can help in provingcorrectness, simplifying analgorithm, assisting in modifying code without breaking it, oroptimizingcode by means ofmemoization,common subexpression elimination,lazy evaluation, orparallelization.
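For example (a small Python sketch, not from the original text): a referentially transparent function can safely be memoized or have repeated calls merged, while a referentially opaque one cannot, because its value depends on more than its arguments.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(x):
    # Referentially transparent: the result depends only on the argument,
    # so 'square(3)' can always be replaced by the value 9.
    return x * x

counter = 0

def next_id():
    # Referentially opaque: the result depends on hidden mutable state,
    # so two occurrences of 'next_id()' cannot be merged into one.
    global counter
    counter += 1
    return counter

assert square(3) + square(3) == 2 * square(3)   # substitution is safe
assert next_id() + next_id() != 2 * next_id()   # merging calls changes the result
```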
The concept originated inAlfred North WhiteheadandBertrand Russell'sPrincipia Mathematica(1910–1913):[3]
A proposition as the vehicle of truth or falsehood is a particular occurrence, while a proposition considered factually is a class of similar occurrences. It is the proposition considered factually that occurs in such statements as “Abelievesp“ and “pis aboutA.”
Of course it is possible to make statements about the particular fact “Socrates is Greek.” We may say how many centimetres long it is; we may say it is black; and so on. But these are not the statements that a philosopher or logician is tempted to make.
When an assertion occurs, it is made by means of a particular fact, which is an instance of the proposition asserted. But this particular fact is, so to speak, “transparent”; nothing is said about it, but by means of it something is said about something else. It is this “transparent” quality that belongs to propositions as they occur in truth-functions. This belongs topwhenpis asserted, but not when we say “pis true.”
It was adopted in analytic philosophy inWillard Van Orman Quine'sWord and Object(1960):[1]
When a singular term is used in a sentence purely to specify its object, and the sentence is true of the object, then certainly the sentence will stay true when any other singular term is substituted that designates the same object. Here we have a criterion for what may be calledpurely referential position: the position must be subject to thesubstitutivity of identity.
[…]
Referential transparency has to do with constructions (§ 11); modes of containment, more specifically, of singular terms or sentences in singular terms or sentences. I call a mode of containmentφreferentially transparent if, whenever an occurrence of a singular termtis purely referential in a term or sentenceψ(t), it is purely referential also in the containing term or sentenceφ(ψ(t)).
The term appeared in its contemporary computer science usage in the discussion ofvariablesinprogramming languagesinChristopher Strachey's seminal set of lecture notesFundamental Concepts in Programming Languages(1967):[2]
One of the most useful properties of expressions is that called by Quine [4]referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.
There are three fundamental properties concerning substitutivity in formal languages: referential transparency, definiteness, and unfoldability.[4]
Let’s denote syntactic equivalence with ≡ and semantic equivalence with =.
Apositionis defined by a sequence of natural numbers. The empty sequence is denoted by ε and the sequence constructor by ‘.’.
Example.— Position 2.1 in the expression (+ (∗ e1 e1) (∗ e2 e2)) is the place occupied by the first occurrence of e2.
Expression $e$ with expression $e'$ inserted at position $p$ is denoted by $e[e'/p]$ and defined recursively by $e[e'/\varepsilon] = e'$ and $(\Omega\, e_1 \ldots e_n)[e'/i.p] = (\Omega\, e_1 \ldots e_i[e'/p] \ldots e_n)$.
Example.— If e ≡ (+ (∗ e1 e1) (∗ e2 e2)) then e[e3/2.1] ≡ (+ (∗ e1 e1) (∗ e3 e2)).
Position $p$ is purely referential in expression $e$ if and only if, for all expressions $e_1$ and $e_2$, $e_1 = e_2$ implies $e[e_1/p] = e[e_2/p]$.
In other words, a position is purely referential in an expression if and only if it is subject to the substitutivity of equals.εis purely referential in all expressions.
Operator $\Omega$ is referentially transparent in place $i$ if and only if, whenever position $p$ is purely referential in expression $e_i$, position $i.p$ is purely referential in the containing expression $(\Omega\, e_1 \ldots e_n)$.
OtherwiseΩisreferentially opaquein placei.
An operator is referentially transparent if it is referentially transparent in all places. Otherwise it is referentially opaque.
A formal language is referentially transparent if all its operators are referentially transparent. Otherwise it is referentially opaque.
Example.— The ‘_ lives in _’ operator is referentially transparent:
Indeed, the second position is purely referential in the assertion because substitutingThe capital of the United KingdomforLondondoes not change the value of the assertion. The first position is also purely referential for the same substitutivity reason.
Example.— The ‘_ contains _’ and quote operators are referentially opaque:
Indeed, the first position is not purely referential in the statement because substitutingThe capital of the United KingdomforLondonchanges the value of the statement and the quotation. So in the first position, the ‘_ contains _’ and quote operators destroy the relation between an expression and the value that it denotes.
Example.— The ‘_ refers to _’ operator is referentially transparent, despite the referential opacity of the quote operator:
Indeed, the first position is purely referential in the statement, though it is not in the quotation, because substitutingThe capital of the United KingdomforLondondoes not change the value of the statement. So in the first position, the ‘_ refers to _’ operator restores the relation between an expression and the value that it denotes. The second position is also purely referential for the same substitutivity reason.
A formal language is definite if all the occurrences of a variable within its scope denote the same value.
Example.— Mathematics is definite:
Indeed, the two occurrences ofxdenote the same value.
A formal language is unfoldable if all expressions are β-reducible.
Example.— Thelambda calculusis unfoldable:
Indeed,((λx.x+ 1) 3) = (x+ 1)[3/x].
Referential transparency, definiteness, and unfoldability are independent.
Definiteness implies unfoldability only for deterministic languages.
Non-deterministic languages cannot have definiteness and unfoldability at the same time.
|
https://en.wikipedia.org/wiki/Referential_transparency
|
Bayesian statistics(/ˈbeɪziən/BAY-zee-ənor/ˈbeɪʒən/BAY-zhən)[1]is a theory in the field ofstatisticsbased on theBayesian interpretation of probability, whereprobabilityexpresses adegree of beliefin anevent. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of otherinterpretations of probability, such as thefrequentistinterpretation, which views probability as thelimitof the relative frequency of an event after many trials.[2]More concretely, analysis in Bayesian methods codifies prior knowledge in the form of aprior distribution.
Bayesian statistical methods useBayes' theoremto compute and update probabilities after obtaining new data. Bayes' theorem describes theconditional probabilityof an event based on data as well as prior information or beliefs about the event or conditions related to the event.[3][4]For example, inBayesian inference, Bayes' theorem can be used to estimate the parameters of aprobability distributionorstatistical model. Since Bayesian statistics treats probability as a degree of belief, Bayes' theorem can directly assign a probability distribution that quantifies the belief to the parameter or set of parameters.[2][3]
Bayesian statistics is named afterThomas Bayes, who formulated a specific case of Bayes' theorem ina paperpublished in 1763. In several papers spanning from the late 18th to the early 19th centuries,Pierre-Simon Laplacedeveloped the Bayesian interpretation of probability.[5]Laplace used methods now considered Bayesian to solve a number of statistical problems. While many Bayesian methods were developed by later authors, the term "Bayesian" was not commonly used to describe these methods until the 1950s. Throughout much of the 20th century, Bayesian methods were viewed unfavorably by many statisticians due to philosophical and practical considerations. Many of these methods required much computation, and most widely used approaches during that time were based on the frequentist interpretation. However, with the advent of powerful computers and newalgorithmslikeMarkov chain Monte Carlo, Bayesian methods have gained increasing prominence in statistics in the 21st century.[2][6]
Bayes's theorem is used in Bayesian methods to update probabilities, which are degrees of belief, after obtaining new data. Given two events $A$ and $B$, the conditional probability of $A$ given that $B$ is true is expressed as follows:[7]
$P(A\mid B) = \dfrac{P(B\mid A)\,P(A)}{P(B)}$
where $P(B) \neq 0$. Although Bayes's theorem is a fundamental result of probability theory, it has a specific interpretation in Bayesian statistics. In the above equation, $A$ usually represents a proposition (such as the statement that a coin lands on heads fifty percent of the time) and $B$ represents the evidence, or new data that is to be taken into account (such as the result of a series of coin flips). $P(A)$ is the prior probability of $A$, which expresses one's beliefs about $A$ before evidence is taken into account. The prior probability may also quantify prior knowledge or information about $A$. $P(B\mid A)$ is the likelihood function, which can be interpreted as the probability of the evidence $B$ given that $A$ is true. The likelihood quantifies the extent to which the evidence $B$ supports the proposition $A$. $P(A\mid B)$ is the posterior probability, the probability of the proposition $A$ after taking the evidence $B$ into account. Essentially, Bayes's theorem updates one's prior beliefs $P(A)$ after considering the new evidence $B$.[2]
The probability of the evidence $P(B)$ can be calculated using the law of total probability. If $\{A_1, A_2, \dots, A_n\}$ is a partition of the sample space, which is the set of all outcomes of an experiment, then[2][7]
$P(B) = P(B\mid A_1)P(A_1) + P(B\mid A_2)P(A_2) + \dots + P(B\mid A_n)P(A_n) = \sum_i P(B\mid A_i)P(A_i)$
When there are an infinite number of outcomes, it is necessary to integrate over all outcomes to calculate $P(B)$ using the law of total probability. Often, $P(B)$ is difficult to calculate, as the calculation would involve sums or integrals that would be time-consuming to evaluate, so often only the product of the prior and likelihood is considered, since the evidence does not change in the same analysis. The posterior is proportional to this product:[2]
$P(A\mid B) \propto P(B\mid A)\,P(A)$
The maximum a posteriori, which is the mode of the posterior and is often computed in Bayesian statistics using mathematical optimization methods, remains the same. The posterior can be approximated even without computing the exact value of $P(B)$, with methods such as Markov chain Monte Carlo or variational Bayesian methods.[2]
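As a small illustration of this proportionality (a Python sketch, not from the original text; the prior, grid resolution, and observed counts are arbitrary), the posterior over a coin's heads probability can be approximated on a grid by multiplying a prior by the likelihood of the observed flips and normalizing.

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 1001)      # candidate values of P(heads)
prior = np.ones_like(theta)              # uniform prior belief
prior /= prior.sum()

heads, tails = 7, 3                      # illustrative data: 7 heads in 10 flips
likelihood = theta**heads * (1.0 - theta)**tails

# The posterior is proportional to prior * likelihood; normalizing plays the role of P(B).
unnormalized = prior * likelihood
posterior = unnormalized / unnormalized.sum()

map_estimate = theta[np.argmax(posterior)]   # maximum a posteriori
print(map_estimate)                          # approximately 0.7
```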
The general set of statistical techniques can be divided into a number of activities, many of which have special Bayesian versions.
Bayesian inference refers tostatistical inferencewhere uncertainty in inferences is quantified using probability.[8]In classicalfrequentist inference, modelparametersand hypotheses are considered to be fixed. Probabilities are not assigned to parameters or hypotheses in frequentist inference. For example, it would not make sense in frequentist inference to directly assign a probability to an event that can only happen once, such as the result of the next flip of a fair coin. However, it would make sense to state that the proportion of headsapproaches one-halfas the number of coin flips increases.[9]
Statistical models specify a set of statistical assumptions and processes that represent how the sample data are generated. Statistical models have a number of parameters that can be modified. For example, a coin can be represented as samples from a Bernoulli distribution, which models two possible outcomes. The Bernoulli distribution has a single parameter equal to the probability of one outcome, which in most cases is the probability of landing on heads. Devising a good model for the data is central in Bayesian inference. In most cases, models only approximate the true process, and may not take into account certain factors influencing the data.[2] In Bayesian inference, probabilities can be assigned to model parameters. Parameters can be represented as random variables. Bayesian inference uses Bayes' theorem to update probabilities after more evidence is obtained or known.[2][10] Furthermore, Bayesian methods allow for placing priors on entire models and calculating their posterior probabilities using Bayes' theorem. These posterior probabilities are proportional to the product of the prior and the marginal likelihood, where the marginal likelihood is the integral of the sampling density over the prior distribution of the parameters. In complex models, marginal likelihoods are generally computed numerically.[11]
The formulation ofstatistical modelsusing Bayesian statistics has the identifying feature of requiring the specification ofprior distributionsfor any unknown parameters. Indeed, parameters of prior distributions may themselves have prior distributions, leading toBayesian hierarchical modeling,[12][13][14]also known as multi-level modeling. A special case isBayesian networks.
For conducting a Bayesian statistical analysis, best practices are discussed by van de Schoot et al.[15]
For reporting the results of a Bayesian statistical analysis, Bayesian analysis reporting guidelines (BARG) are provided in an open-access article byJohn K. Kruschke.[16]
TheBayesian design of experimentsincludes a concept called 'influence of prior beliefs'. This approach usessequential analysistechniques to include the outcome of earlier experiments in the design of the next experiment. This is achieved by updating 'beliefs' through the use of prior andposterior distribution. This allows the design of experiments to make good use of resources of all types. An example of this is themulti-armed bandit problem.
Exploratory analysis of Bayesian models is an adaptation or extension of theexploratory data analysisapproach to the needs and peculiarities of Bayesian modeling. In the words of Persi Diaconis:[17]
Exploratory data analysis seeks to reveal structure, or simple descriptions in data. We look at numbers or graphs and try to find patterns. We pursue leads suggested by background information, imagination, patterns perceived, and experience with other data analyses
Theinference processgenerates a posterior distribution, which has a central role in Bayesian statistics, together with other distributions like the posterior predictive distribution and the prior predictive distribution. The correct visualization, analysis, and interpretation of these distributions is key to properly answer the questions that motivate the inference process.[18]
When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself:
All these tasks are part of the Exploratory analysis of Bayesian models approach and successfully performing them is central to the iterative and interactive modeling process. These tasks require both numerical and visual summaries.[19][20][21]
|
https://en.wikipedia.org/wiki/Bayesian_statistics
|
Inlinguistics, agraphemeis the smallest functional unit of awriting system.[1]The wordgraphemeis derived fromAncient Greekgráphō('write'), and the suffix-emeby analogy withphonemeand otheremic units. The study of graphemes is calledgraphemics. The concept of graphemes is abstract and similar to the notion incomputingof acharacter. (A specific geometric shape that represents any particular grapheme in a giventypefaceis called aglyph.)
There are two main opposing grapheme concepts.[2]
In the so-calledreferential conception, graphemes are interpreted as the smallest units of writing that correspond with sounds (more accuratelyphonemes). In this concept, theshin the written English wordshakewould be a grapheme because it represents the phoneme/ʃ/. This referential concept is linked to thedependency hypothesisthat claims that writing merely depicts speech.
By contrast, theanalogical conceptdefines graphemes analogously to phonemes, i.e. via writtenminimal pairssuch asshakevs.snake. In this example,handnare graphemes because they distinguish two words. This analogical concept is associated with the autonomy hypothesis which holds that writing is a system in its own right and should be studied independently from speech. Both concepts have weaknesses.[3]
Some models adhere to both concepts simultaneously by including two individual units,[4]which are given names such asgraphemic graphemefor the grapheme according to the analogical conception (hinshake), andphonological-fit graphemefor the grapheme according to the referential concept (shinshake).[5]
In newer concepts, in which the grapheme is interpretedsemioticallyas a dyadiclinguistic sign,[6]it is defined as a minimal unit of writing that is both lexically distinctive and corresponds with a linguistic unit (phoneme,syllable, ormorpheme).[7]
Graphemes are often notated withinangle brackets: e.g.⟨a⟩.[8]This is analogous to the slash notation/a/used forphonemes. Analogous to thesquare bracketnotation[a]used forphones,glyphsare sometimes denoted with vertical lines, e.g.|ɑ|.[9]
In the same way that thesurface formsofphonemesare speech sounds orphones(and different phones representing the same phoneme are calledallophones), the surface forms of graphemes areglyphs(sometimesgraphs), namely concrete written representations of symbols (and different glyphs representing the same grapheme are calledallographs).
Thus, a grapheme can be regarded as anabstractionof a collection of glyphs that are all functionally equivalent.
For example, in written English (or other languages using theLatin alphabet), there are two different physical representations of thelowercaseLatin letter "a": "a" and "ɑ". Since, however, the substitution of either of them for the other cannot change the meaning of a word, they are considered to be allographs of the same grapheme, which can be written⟨a⟩. Similarly, the grapheme corresponding to "Arabic numeral zero" has a unique semantic identity and Unicode valueU+0030but exhibits variation in the form ofslashed zero. Italic and bold face forms are also allographic, as is the variation seen inserif(as inTimes New Roman) versussans-serif(as inHelvetica) forms.
There is some disagreement as to whether capital and lower case letters are allographs or distinct graphemes. Capitals are generally found in certain triggering contexts that do not change the meaning of a word: a proper name, for example, or at the beginning of a sentence, or all caps in a newspaper headline. In other contexts, capitalization can determine meaning: compare, for examplePolishandpolish: the former is a language, the latter is for shining shoes.
Some linguists considerdigraphslike the⟨sh⟩inshipto be distinct graphemes, but these are generally analyzed as sequences of graphemes. Non-stylisticligatures, however, such as⟨æ⟩, are distinct graphemes, as are various letters with distinctivediacritics, such as⟨ç⟩.
Identical glyphs may not always represent the same grapheme. For example, the three letters⟨A⟩,⟨А⟩and⟨Α⟩appear identical but each has a different meaning: in order, they are the Latin letterA, the Cyrillic letterAzǔ/Азъand the Greek letterAlpha. Each has its owncode pointin Unicode:U+0041ALATIN CAPITAL LETTER A,U+0410АCYRILLIC CAPITAL LETTER AandU+0391ΑGREEK CAPITAL LETTER ALPHA.
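A short Python sketch (not from the original text) showing that the three visually identical letters carry distinct identities at the code-point level:

```python
import unicodedata

for ch in "AАΑ":   # Latin A, Cyrillic А, Greek Α
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Expected output:
# U+0041  LATIN CAPITAL LETTER A
# U+0410  CYRILLIC CAPITAL LETTER A
# U+0391  GREEK CAPITAL LETTER ALPHA
```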
The principal types of graphemes arelogograms(more accurately termed morphograms[10]), which represent words ormorphemes(for exampleChinese characters, theampersand"&" representing the wordand,Arabic numerals);syllabiccharacters, representingsyllables(as in Japanesekana); andalphabeticletters, corresponding roughly tophonemes(see next section). For a full discussion of the different types, seeWriting system § Functional classification.
There are additional graphemic components used in writing, such aspunctuation marks,mathematical symbols,word dividerssuch as the space, and othertypographic symbols. Ancientlogographic scriptsoften used silentdeterminativesto disambiguate the meaning of a neighboring (non-silent) word.
As mentioned in the previous section, in languages that usealphabeticwriting systems, many of the graphemes stand in principle for thephonemes(significant sounds) of the language. In practice, however, theorthographiesof such languages entail at least a certain amount of deviation from the ideal of exact grapheme–phoneme correspondence. A phoneme may be represented by amultigraph(sequence of more than one grapheme), as thedigraphshrepresents a single sound in English (and sometimes a single grapheme may represent more than one phoneme, as with the Russian letterяor the Spanish c). Some graphemes may not represent any sound at all (like thebin Englishdebtor thehin all Spanish words containing the said letter), and often the rules of correspondence between graphemes and phonemes become complex or irregular, particularly as a result of historicalsound changesthat are not necessarily reflected in spelling. "Shallow" orthographies such as those of standardSpanishandFinnishhave relatively regular (though not always one-to-one) correspondence between graphemes and phonemes, while those of French and English have much less regular correspondence, and are known asdeep orthographies.
Multigraphs representing a single phoneme are normally treated as combinations of separate letters, not as graphemes in their own right. However, in some languages a multigraph may be treated as a single unit for the purposes ofcollation; for example, in aCzechdictionary, the section for words that start with⟨ch⟩comes after that for⟨h⟩.[11]For more examples, seeAlphabetical order § Language-specific conventions.
|
https://en.wikipedia.org/wiki/Grapheme
|
Inphilosophy,systems theory,science, andart,emergenceoccurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole.
Emergence plays a central role in theories ofintegrative levelsand ofcomplex systems. For instance, the phenomenon oflifeas studied inbiologyis an emergent property ofchemistryandphysics.
In philosophy, theories that emphasize emergent properties have been calledemergentism.[1]
Philosophers often understand emergence as a claim about theetiologyof asystem's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole.Nicolai Hartmann(1882–1950), one of the first modern philosophers to write on emergence, termed this acategorial novum(new category).[2]
This concept of emergence dates from at least the time ofAristotle.[3]Many scientists and philosophers[4]have written on the concept, includingJohn Stuart Mill(Composition of Causes, 1843)[5]andJulian Huxley[6](1887–1975).
The philosopherG. H. Lewescoined the term "emergent" in 1875, distinguishing it from the merely "resultant":
Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these arehomogeneousandcommensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.[7][8]
Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division isWeak Emergence, by philosopherMark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flock of starlings in flight or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated, analysed or reduced.[9]
David Chalmerswrites that emergence often causes confusion in philosophy and science due to a failure to demarcate strong and weak emergence, which are "quite different concepts".[10]
Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties aresupervenientrather than metaphysically primitive.[9]
Weak emergence describes new properties arising in systems as a result of the interactions at a fundamental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of areductionistanalysis. As a consequence the emerging properties arescale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fullydeterministic.[citation needed]
Bedaunotes that weak emergence is not a universal metaphysical solvent, as the hypothesis thatconsciousnessis weakly emergent would not resolve the traditionalphilosophical questionsabout the physicality of consciousness. However, Bedau concludes that adopting this view would provide a precise notion that emergence is involved in consciousness, and second, the notion of weak emergence is metaphysically benign.[9]
Strong emergence describes the direct causal action of a high-level system on its components; qualities produced this way areirreducibleto the system's constituent parts.[11]The whole is other than the sum of its parts. It is argued then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts.[9]Physics lacks well-established examples of strong emergence, unless it is interpreted as the impossibilityin practiceto explain the whole in terms of the parts. Practical impossibility may be a more useful distinction than one in principle, since it is easier to determine and quantify, and does not imply the use of mysterious forces, but simply reflects the limits of our capability.[12]
One of the reasons for the importance of distinguishing these two concepts with respect to their difference concerns the relationship of purported emergent properties to science. Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:
Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.[9]
The concern that strong emergence does so entail is that such a consequence must be incompatible with metaphysical principles such as theprinciple of sufficient reasonor the Latin dictumex nihilo nihil fit, often translated as "nothing comes from nothing".[13]
Strong emergence can be criticized for leading to causaloverdetermination. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗?Jaegwon Kimsays:
In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood asnomological(law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause...If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.[14]
If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape-route that a strong emergentist could take would be to denydownward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would callphysicalisminto question, and thus be unpalatable for some philosophers and physicists.
Carroll and Parola propose a taxonomy that classifies emergent phenomena by how the macro-description relates to the underlying micro-dynamics.[15]
Crutchfield regards the properties of complexity and organization of any system assubjectivequalitiesdetermined by the observer.
Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer's notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer's chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.[16]
The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. the movement of molecules or elementary particles) and concludes that the system has a low entropy.[17]On the other hand, chaotic, unpredictable behaviour can also be seen as subjectively emergent, even though at a microscopic scale the movement of the constituent parts can be fully deterministic.
In physics, emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.[18][19]
An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity.[20]
According to Robert Laughlin,[11]for many-particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details.
Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak, or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence.[21]Theoretical physicist Philip W. Anderson states it this way:
The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.[22]
Meanwhile, others have worked towards developing analytical evidence of strong emergence. Renormalization methods in theoretical physics enable physicists to study critical phenomena that are not tractable as the combination of their parts.[23]In 2009, Gu et al. presented a class of infinite physical systems that exhibits non-computable macroscopic properties.[24][25]More precisely, if one could compute certain macroscopic properties of these systems from the microscopic description of these systems, then one would be able to solve computational problems known to be undecidable in computer science. These results concern infinite systems, finite systems being considered computable. However, macroscopic concepts which only apply in the limit of infinite systems, such as phase transitions and the renormalization group, are important for understanding and modeling real, finite physical systems. Gu et al. concluded that
Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.[24]
Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure.[26]An early argument (1904–05) for the emergence of social formations can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism.[27]More recently, the emergence of a new social system has been linked with the emergence of order from nonlinear relationships among multiple interacting units, where the interacting units are individual thoughts, consciousness, and actions.[28]In the case of the global economic system, under capitalism, growth, accumulation and innovation can be considered emergent processes: not only do technological processes sustain growth, but growth becomes the source of further innovations in a recursive, self-expanding spiral. In this sense, the exponential trend of the growth curve reveals the presence of a long-term positive feedback among growth, accumulation, and innovation, and the emergence of new structures and institutions connected to the multi-scale process of growth.[29]This is reflected in the work of Karl Polanyi, who traces the process by which labor and nature are converted into commodities in the passage from an economic system based on agriculture to one based on industry.[30]This shift, along with the idea of the self-regulating market, set the stage not only for another economy but also for another society. The principle of emergence is also invoked when thinking about alternatives to the current growth-based economic system in the face of social and ecological limits. Both degrowth and social ecological economics have argued in favor of a co-evolutionary perspective for theorizing about transformations that overcome the dependence of human wellbeing on economic growth.[31][32]
Economic trends and patterns which emerge are studied intensively by economists.[33]Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010[34]). In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.[35]
Looking at emergence in the context of social and systems change invites us to reframe our thinking on parts and wholes and their interrelation. Unlike machines, living systems at all levels of recursion - be it a sentient body, a tree, a family, an organisation, the education system, the economy, the health system, the political system, etc. - are continuously creating themselves. They are continually growing and changing along with their surrounding elements, and therefore are more than the sum of their parts. As Peter Senge and co-authors put forward in the book Presence: Exploring Profound Change in People, Organizations and Society, "as long as our thinking is governed by habit - notably industrial, "machine age" concepts such as control, predictability, standardization, and "faster is better" - we will continue to recreate institutions as they have been, despite their disharmony with the larger world, and the need for all living systems to evolve."[36]While change is predictably constant, it is unpredictable in direction and often occurs at second and nth orders of systemic relationality.[37]Understanding emergence and what creates the conditions for different forms of emergence to occur, whether insidious or nourishing of vitality, is essential in the search for deep transformations.
The works of Nora Bateson and her colleagues at the International Bateson Institute delve into this. Since 2012, they have been researching questions such as: what makes a living system ready to change? Can unforeseen ready-ness for change be nourished? Here being ready is not thought of as being prepared, but rather as nourishing the flexibility we do not yet know will be needed. These inquiries challenge the common view that a theory of change is produced from an identified preferred goal or outcome. As explained in their paper An essay on ready-ing: Tending the prelude to change:[37]"While linear managing or controlling of the direction of change may appear desirable, tending to how the system becomes ready allows for pathways of possibility previously unimagined." This brings a new lens to the field of emergence in social and systems change, as it looks to tending the pre-emergent process. Warm Data Labs are the fruit of their praxis: spaces for transcontextual mutual learning in which aphanipoetic phenomena unfold.[38]Having hosted hundreds of Warm Data processes with thousands of participants, they have found that these spaces of shared poly-learning across contexts lead to a realm of potential change, a necessarily obscured zone of wild interaction of unseen, unsaid, unknown flexibility.[37]It is such flexibility that nourishes the ready-ing living systems require to respond to complex situations in new ways and to change. In other words, this readying process preludes what will emerge. When exploring questions of social change, it is important to ask what is submerging in the current social imaginary and perhaps, rather than focus all our resources and energy on driving direct order responses, to nourish flexibility within ourselves and the systems we are a part of.
Another approach that engages with the concept of emergence for social change is Theory U, where "deep emergence" is the result of self-transcending knowledge after a successful journey along the U through layers of awareness.[39]This practice nourishes transformation at the inner-being level, which enables new ways of being, seeing and relating to emerge. The concept of emergence has also been employed in the field of facilitation. In Emergent Strategy, adrienne maree brown defines emergent strategies as "ways for humans to practice complexity and grow the future through relatively simple interactions".[40]
In linguistics, the concept of emergence has been applied in the domain of stylometry to explain the interrelation between the syntactical structures of the text and the author's style (Slautina, Marusenko, 2014).[41]It has also been argued that the structure and regularity of language grammar, or at least language change, is an emergent phenomenon.[42]While each speaker merely tries to reach their own communicative goals, they use language in a particular way. If enough speakers behave in that way, language is changed.[43]In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech society, can be seen as a system emerging from long-time participation in communicative problem-solving in various social circumstances.[44]
The bulk conductive response of binary (RC) electrical networks with random arrangements, known as the Universal dielectric response (UDR), can be seen as an emergent property of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems.[45]Internet traffic can also exhibit some seemingly emergent properties. In the congestion control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high traffic flows, which can be considered as a phase transition.[46]Some artificially intelligent (AI) computer applications simulate emergent behavior.[47]One example is Boids, which mimics the swarming behavior of birds.[48]
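Emergent flocking of this kind is easy to reproduce in simulation. The following is a minimal, illustrative Boids-style sketch in Python; the function name boids_step, the interaction radius, and the rule weights are arbitrary choices for demonstration, not parameters of the original Boids program. Each agent reacts only to neighbours within a small radius, yet the whole population tends to align, and that alignment is the emergent, system-level pattern.

```python
import numpy as np

def boids_step(pos, vel, radius=1.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=0.1):
    """One update of a minimal Boids-style model: purely local rules
    (separation, alignment, cohesion) with no global coordinator."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)          # neighbours of agent i
        if near.any():
            separation = -offsets[near].sum(axis=0)          # steer away from close neighbours
            alignment = vel[near].mean(axis=0) - vel[i]      # match neighbours' average velocity
            cohesion = pos[near].mean(axis=0) - pos[i]       # drift toward the local centre of mass
            new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(50, 2))
vel = rng.normal(0.0, 1.0, size=(50, 2))
for _ in range(200):
    pos, vel = boids_step(pos, vel)

# Crude order parameter: close to 1.0 when all velocities point the same way.
print(np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean())
```

None of the three local rules mentions a flock, yet after a few hundred steps the order parameter typically rises well above its random-start value, which is the sense of (weak) emergence discussed above.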
In religion, emergence grounds expressions of religious naturalism and syntheism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in The Sacred Emergence of Nature by Ursula Goodenough & Terrence Deacon and Beyond Reductionism: Reinventing the Sacred by Stuart Kauffman, both from 2006, as well as Syntheism – Creating God in The Internet Age by Alexander Bard & Jan Söderqvist from 2014 and Emergentism: A Religion of Complexity for the Metamodern World by Brendan Graham Dempsey (2022).[citation needed]
Michael J. Pearce has used emergence to describe the experience of works of art in relation to contemporary neuroscience.[49]Practicing artist Leonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles.[50]
|
https://en.wikipedia.org/wiki/Emergence
|
In mathematics, Dirichlet convolution (or divisor convolution) is a binary operation defined for arithmetic functions; it is important in number theory. It was developed by Peter Gustav Lejeune Dirichlet.
If f,g:N→C{\displaystyle f,g:\mathbb {N} \to \mathbb {C} } are two arithmetic functions, their Dirichlet convolution f∗g{\displaystyle f*g} is a new arithmetic function defined by:

(f∗g)(n)=∑d∣nf(d)g(n/d)=∑ab=nf(a)g(b),{\displaystyle (f*g)(n)\ =\ \sum _{d\mid n}f(d)\,g\!\left({\frac {n}{d}}\right)\ =\ \sum _{ab=n}f(a)\,g(b),}
where the sum extends over all positivedivisorsd{\displaystyle d}ofn{\displaystyle n}, or equivalently over all distinct pairs(a,b){\displaystyle (a,b)}of positive integers whose product isn{\displaystyle n}.
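The definition translates directly into code. Below is a small Python sketch; the helper name dirichlet_convolution and the example functions are illustrative, not part of any standard library.

```python
def dirichlet_convolution(f, g, n):
    """(f * g)(n) = sum of f(d) * g(n // d) over all positive divisors d of n."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# Example: the sum-of-divisors function sigma arises as the convolution of the
# constant-one function with the identity function, sigma = 1 * Id.
one = lambda n: 1
identity = lambda n: n
sigma = lambda n: dirichlet_convolution(one, identity, n)
print([sigma(n) for n in range(1, 11)])   # [1, 3, 4, 7, 6, 12, 8, 15, 13, 18]
```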
This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients:

(∑n≥1f(n)/ns)(∑n≥1g(n)/ns)=∑n≥1(f∗g)(n)/ns.{\displaystyle \left(\sum _{n\geq 1}{\frac {f(n)}{n^{s}}}\right)\left(\sum _{n\geq 1}{\frac {g(n)}{n^{s}}}\right)\ =\ \sum _{n\geq 1}{\frac {(f*g)(n)}{n^{s}}}.}
The set of arithmetic functions forms acommutative ring, theDirichlet ring, with addition given bypointwise additionand multiplication by Dirichlet convolution. The multiplicative identity is theunit functionε{\displaystyle \varepsilon }defined byε(n)=1{\displaystyle \varepsilon (n)=1}ifn=1{\displaystyle n=1}and0{\displaystyle 0}otherwise. Theunits(invertible elements) of this ring are the arithmetic functionsf{\displaystyle f}withf(1)≠0{\displaystyle f(1)\neq 0}.
Specifically, Dirichlet convolution is associative,[1]

(f∗g)∗h=f∗(g∗h),{\displaystyle (f*g)*h=f*(g*h),}

distributive over addition,

f∗(g+h)=f∗g+f∗h,{\displaystyle f*(g+h)=f*g+f*h,}

commutative,

f∗g=g∗f,{\displaystyle f*g=g*f,}

and has an identity element,

f∗ε=ε∗f=f.{\displaystyle f*\varepsilon =\varepsilon *f=f.}
Furthermore, for each functionf{\displaystyle f}havingf(1)≠0{\displaystyle f(1)\neq 0}, there exists another arithmetic functionf−1{\displaystyle f^{-1}}satisfyingf∗f−1=ε{\displaystyle f*f^{-1}=\varepsilon }, called theDirichlet inverseoff{\displaystyle f}.
The Dirichlet convolution of two multiplicative functions is again multiplicative, and every multiplicative function that is not constantly zero has a Dirichlet inverse which is also multiplicative. In other words, multiplicative functions form a subgroup of the group of invertible elements of the Dirichlet ring. Beware however that the sum of two multiplicative functions is not multiplicative (since(f+g)(1)=f(1)+g(1)=2≠1{\displaystyle (f+g)(1)=f(1)+g(1)=2\neq 1}), so the subset of multiplicative functions is not a subring of the Dirichlet ring. The article on multiplicative functions lists several convolution relations among important multiplicative functions.
Another operation on arithmetic functions is pointwise multiplication:fg{\displaystyle fg}is defined by(fg)(n)=f(n)g(n){\displaystyle (fg)(n)=f(n)g(n)}. Given acompletely multiplicative functionh{\displaystyle h}, pointwise multiplication byh{\displaystyle h}distributes over Dirichlet convolution:(f∗g)h=(fh)∗(gh){\displaystyle (f*g)h=(fh)*(gh)}.[2]The convolution of two completely multiplicative functions is multiplicative, but not necessarily completely multiplicative.
In these formulas, we use the followingarithmetical functions:
The following relations hold:
This last identity shows that the prime-counting function is given by the summatory function

π(x)=∑n≤xω(n)M(x/n),{\displaystyle \pi (x)=\sum _{n\leq x}\omega (n)\,M\!\left({\frac {x}{n}}\right),}

where M(x){\displaystyle M(x)} is the Mertens function and ω{\displaystyle \omega } is the distinct prime factor counting function from above. This expansion follows from the identity for the sums over Dirichlet convolutions given on the divisor sum identities page (a standard trick for these sums).[3]
Given an arithmetic function f{\displaystyle f}, its Dirichlet inverse g=f−1{\displaystyle g=f^{-1}} may be calculated recursively: the value of g(n){\displaystyle g(n)} is expressed in terms of g(m){\displaystyle g(m)} for m<n{\displaystyle m<n}.
For n=1{\displaystyle n=1}: (f∗g)(1)=f(1)g(1)=ε(1)=1{\displaystyle (f*g)(1)=f(1)g(1)=\varepsilon (1)=1}, so g(1)=1/f(1){\displaystyle g(1)=1/f(1)}. This implies that f{\displaystyle f} does not have a Dirichlet inverse if f(1)=0{\displaystyle f(1)=0}.

For n=2{\displaystyle n=2}: (f∗g)(2)=f(1)g(2)+f(2)g(1)=ε(2)=0{\displaystyle (f*g)(2)=f(1)g(2)+f(2)g(1)=\varepsilon (2)=0}, so g(2)=−f(2)g(1)/f(1){\displaystyle g(2)=-f(2)g(1)/f(1)}.

For n=3{\displaystyle n=3}: (f∗g)(3)=f(1)g(3)+f(3)g(1)=ε(3)=0{\displaystyle (f*g)(3)=f(1)g(3)+f(3)g(1)=\varepsilon (3)=0}, so g(3)=−f(3)g(1)/f(1){\displaystyle g(3)=-f(3)g(1)/f(1)}.

For n=4{\displaystyle n=4}: (f∗g)(4)=f(1)g(4)+f(2)g(2)+f(4)g(1)=ε(4)=0{\displaystyle (f*g)(4)=f(1)g(4)+f(2)g(2)+f(4)g(1)=\varepsilon (4)=0}, so g(4)=−(f(4)g(1)+f(2)g(2))/f(1){\displaystyle g(4)=-(f(4)g(1)+f(2)g(2))/f(1)}.

and in general for n>1{\displaystyle n>1},

g(n)=−1f(1)∑d∣n,d<nf(n/d)g(d).{\displaystyle g(n)={\frac {-1}{f(1)}}\sum _{\substack {d\mid n\\d<n}}f\!\left({\frac {n}{d}}\right)g(d).}
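The recursion can be implemented in a few lines. A sketch in Python using exact rational arithmetic follows; the function name dirichlet_inverse is illustrative, and the example uses the standard fact that the Dirichlet inverse of the constant-one function is the Möbius function.

```python
from fractions import Fraction

def dirichlet_inverse(f, N):
    """Tabulate g = f^{-1} on 1..N from the recursion
    g(1) = 1/f(1),  g(n) = -(1/f(1)) * sum of f(n/d) g(d) over divisors d < n."""
    g = {1: Fraction(1) / f(1)}
    for n in range(2, N + 1):
        s = sum(f(n // d) * g[d] for d in range(1, n) if n % d == 0)
        g[n] = -s / f(1)
    return g

one = lambda n: Fraction(1)
mu = dirichlet_inverse(one, 12)              # inverse of 1 is the Moebius function
print([int(mu[n]) for n in range(1, 13)])    # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0]
```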
The following properties of the Dirichlet inverse hold:[4]
An exact, non-recursive formula for the Dirichlet inverse of anyarithmetic functionfis given inDivisor sum identities. A morepartition theoreticexpression for the Dirichlet inverse offis given by
The following formula provides a compact way of expressing the Dirichlet inverse of an invertible arithmetic functionf:
f−1=∑k=0+∞(f(1)ε−f)∗kf(1)k+1{\displaystyle f^{-1}=\sum _{k=0}^{+\infty }{\frac {(f(1)\varepsilon -f)^{*k}}{f(1)^{k+1}}}}
where the expression (f(1)ε−f)∗k{\displaystyle (f(1)\varepsilon -f)^{*k}} stands for the arithmetic function f(1)ε−f{\displaystyle f(1)\varepsilon -f} convolved with itself k times. Notice that, for a fixed positive integer n{\displaystyle n}, if k>Ω(n){\displaystyle k>\Omega (n)} then (f(1)ε−f)∗k(n)=0{\displaystyle (f(1)\varepsilon -f)^{*k}(n)=0}. This is because f(1)ε(1)−f(1)=0{\displaystyle f(1)\varepsilon (1)-f(1)=0} and every way of expressing n as a product of k positive integers must include a 1, so the series on the right-hand side converges for every fixed positive integer n.
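Because the terms vanish for k > Ω(n), the series is directly computable. The following Python sketch evaluates it on small arguments; all helper names are illustrative, and for f the constant-one function the result should again be the Möbius function.

```python
from fractions import Fraction

def conv_power(f, k, n):
    """k-fold Dirichlet convolution of f with itself, evaluated at n (k = 0 gives epsilon)."""
    if k == 0:
        return Fraction(1) if n == 1 else Fraction(0)
    return sum(conv_power(f, k - 1, d) * f(n // d) for d in range(1, n + 1) if n % d == 0)

def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, p = 0, 2
    while n > 1:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    return count

def inverse_via_series(f, n):
    """Dirichlet inverse at n from the series, truncated at k = Omega(n)."""
    g = lambda m: f(1) * (Fraction(1) if m == 1 else Fraction(0)) - f(m)   # f(1)*eps - f
    return sum(conv_power(g, k, n) / f(1) ** (k + 1) for k in range(big_omega(n) + 1))

one = lambda m: Fraction(1)
print([int(inverse_via_series(one, n)) for n in range(1, 11)])   # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```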
If f{\displaystyle f} is an arithmetic function, the Dirichlet series generating function is defined by

DG(f;s)=∑n=1∞f(n)/ns{\displaystyle DG(f;s)=\sum _{n=1}^{\infty }{\frac {f(n)}{n^{s}}}}
for those complex arguments s for which the series converges (if there are any). The multiplication of Dirichlet series is compatible with Dirichlet convolution in the following sense:

DG(f;s)DG(g;s)=DG(f∗g;s){\displaystyle DG(f;s)\,DG(g;s)=DG(f*g;s)}
for all s for which both series on the left hand side converge, with at least one of them converging absolutely (note that simple convergence of both series on the left hand side does not imply convergence of the right hand side!). This is akin to the convolution theorem if one thinks of Dirichlet series as a Fourier transform.
The restriction of the divisors in the convolution to unitary, bi-unitary or infinitary divisors defines similar commutative operations which share many features with the Dirichlet convolution (existence of a Möbius inversion, persistence of multiplicativity, definitions of totients, Euler-type product formulas over associated primes, etc.).
Dirichlet convolution is a special case of the convolution multiplication for theincidence algebraof aposet, in this case the poset of positive integers ordered by divisibility.
The Dirichlet hyperbola method computes the summation of a convolution in terms of its functions and their summation functions.
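As a small illustration of the idea, the classical case f = g = 1 gives the divisor-summatory function, which the hyperbola method evaluates with only about √x terms. A Python sketch (the function name is illustrative; the two computations agree exactly because the hyperbola identity is exact):

```python
import math

def divisor_summatory(x):
    """Sum of d(n) for n <= x via the Dirichlet hyperbola method with f = g = 1:
    count lattice points under the hyperbola a*b <= x, using symmetry about a = b."""
    r = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, r + 1)) - r * r

x = 100
brute_force = sum(sum(1 for d in range(1, n + 1) if n % d == 0) for n in range(1, x + 1))
print(divisor_summatory(x), brute_force)   # both 482
```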
|
https://en.wikipedia.org/wiki/Dirichlet_convolution
|
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:[1][2]
The formal mathematical (standard form) statement of the design optimization problem is[3]
minimizef(x)subjecttohi(x)=0,i=1,…,m1gj(x)≤0,j=1,…,m2andx∈X⊆Rn{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h_{i}(x)=0,\quad i=1,\dots ,m_{1}\\&&&g_{j}(x)\leq 0,\quad j=1,\dots ,m_{2}\\&\operatorname {and} &&x\in X\subseteq R^{n}\end{aligned}}}
where
The problem formulation stated above is a convention called the negative null form, since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem.
We can introduce the vector-valued functions
h=(h1,h2,…,hm1)andg=(g1,g2,…,gm2){\displaystyle {\begin{aligned}&&&{h=(h_{1},h_{2},\dots ,h_{m1})}\\\operatorname {and} \\&&&{g=(g_{1},g_{2},\dots ,g_{m2})}\end{aligned}}}
to rewrite the above statement in the compact expression
minimizef(x)subjecttoh(x)=0,g(x)≤0,x∈X⊆Rn{\displaystyle {\begin{aligned}&{\operatorname {minimize} }&&f(x)\\&\operatorname {subject\;to} &&h(x)=0,\quad g(x)\leq 0,\quad x\in X\subseteq R^{n}\\\end{aligned}}}
We call h,g{\displaystyle h,g} the set or system of (functional) constraints and X{\displaystyle X} the set constraint.
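As a sketch of how such a formulation is handed to a numerical solver, the following uses SciPy's general-purpose minimize routine on a small made-up problem (the objective and constraints are invented for illustration only). One practical detail: SciPy's inequality constraints use the convention c(x) ≥ 0, so an inequality written in negative null form, g(x) ≤ 0, must be passed as −g(x) ≥ 0.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem in negative null form:
#   minimize   f(x) = x1^2 + x2^2
#   subject to h(x) = x1 + x2 - 1  = 0     (equality constraint)
#              g(x) = 0.25 - x1   <= 0     (inequality constraint, i.e. x1 >= 0.25)
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
g = lambda x: 0.25 - x[0]

constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},   # sign flipped for SciPy's c(x) >= 0 convention
]
result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x, result.fun)   # approximately x = [0.5, 0.5], f(x) = 0.5
```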
Design optimization applies the methods of mathematical optimization to design problem formulations, and the term is sometimes used interchangeably with engineering optimization. When the objective function f is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum.
Optimization Checklist[2]
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design.
Practical design optimization problems are typically solved numerically, and many optimization software packages exist in academic and commercial forms.[4]There are several domain-specific applications of design optimization posing their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
One modern application of design optimization is structural design optimization (SDO), used in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives. These advancements aim to optimize the configuration and dimensions of structures to augment strength, minimize material usage, reduce costs, enhance energy efficiency, improve sustainability, and optimize several other performance criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human errors, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM) technology, machine learning (ML), and artificial intelligence (AI), as well as integrating finite element analysis (FEA) with simulation tools.[5]
|
https://en.wikipedia.org/wiki/Design_Optimization
|
Inlinear algebra, aneigenvector(/ˈaɪɡən-/EYE-gən-) orcharacteristic vectoris avectorthat has itsdirectionunchanged (or reversed) by a givenlinear transformation. More precisely, an eigenvectorv{\displaystyle \mathbf {v} }of a linear transformationT{\displaystyle T}isscaled by a constant factorλ{\displaystyle \lambda }when the linear transformation is applied to it:Tv=λv{\displaystyle T\mathbf {v} =\lambda \mathbf {v} }. The correspondingeigenvalue,characteristic value, orcharacteristic rootis the multiplying factorλ{\displaystyle \lambda }(possibly negative).
Geometrically, vectorsare multi-dimensionalquantities with magnitude and direction, often pictured as arrows. A linear transformationrotates,stretches, orshearsthe vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.[1]
The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, fromgeologytoquantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is thesteady stateof the system.
For ann×n{\displaystyle n{\times }n}matrixAand a nonzero vectorv{\displaystyle \mathbf {v} }of lengthn{\displaystyle n}, if multiplyingAbyv{\displaystyle \mathbf {v} }(denotedAv{\displaystyle A\mathbf {v} }) simply scalesv{\displaystyle \mathbf {v} }by a factorλ, whereλis ascalar, thenv{\displaystyle \mathbf {v} }is called an eigenvector ofA, andλis the corresponding eigenvalue. This relationship can be expressed as:Av=λv{\displaystyle A\mathbf {v} =\lambda \mathbf {v} }.[2]
Given ann-dimensional vector spaceand a choice ofbasis, there is a direct correspondence between linear transformations from the vector space into itself andn-by-nsquare matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language ofmatrices.[3][4]
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'.[5][6]Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvectorvof a linear transformationTis a nonzero vector that, whenTis applied to it, does not change direction. ApplyingTto the eigenvector only scales the eigenvector by the scalar valueλ, called an eigenvalue. This condition can be written as the equationT(v)=λv,{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}referred to as theeigenvalue equationoreigenequation. In general,λmay be anyscalar. For example,λmay be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero orcomplex.
The example here, based on theMona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called ashear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Pointsalongthe horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be adifferential operatorlikeddx{\displaystyle {\tfrac {d}{dx}}}, in which case the eigenvectors are functions calledeigenfunctionsthat are scaled by that differential operator, such asddxeλx=λeλx.{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}Alternatively, the linear transformation could take the form of annbynmatrix, in which case the eigenvectors arenby 1 matrices. If the linear transformation is expressed in the form of annbynmatrixA, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplicationAv=λv,{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}where the eigenvectorvis annby 1 matrix. For a matrix, eigenvalues and eigenvectors can be used todecompose the matrix—for example bydiagonalizingit.
Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefixeigen-is applied liberally when naming them:
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes.[a]Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[10]

In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[11]Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.[b]

Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur).[12]Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[11]This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.[13]

Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[11]and Alfred Clebsch found the corresponding result for skew-symmetric matrices.[13]Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.[11]

In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[14]Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[15]

At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[16]He was the first to use the German word eigen, which means "own",[6]to denote eigenvalues and eigenvectors in 1904,[c]though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.[17]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[18]and Vera Kublanovskaya[19]in 1961.[20][21]
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[22][23]Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices,[3][4]which is especially common in numerical and computational applications.[24]
Considern-dimensional vectors that are formed as a list ofnscalars, such as the three-dimensional vectorsx=[1−34]andy=[−2060−80].{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.}
These vectors are said to bescalar multiplesof each other, orparallelorcollinear, if there is a scalarλsuch thatx=λy.{\displaystyle \mathbf {x} =\lambda \mathbf {y} .}
In this case,λ=−120{\displaystyle \lambda =-{\frac {1}{20}}}.
Now consider the linear transformation ofn-dimensional vectors defined by annbynmatrixA,Av=w,{\displaystyle A\mathbf {v} =\mathbf {w} ,}or[A11A12⋯A1nA21A22⋯A2n⋮⋮⋱⋮An1An2⋯Ann][v1v2⋮vn]=[w1w2⋮wn]{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}}where, for each row,wi=Ai1v1+Ai2v2+⋯+Ainvn=∑j=1nAijvj.{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.}
If it occurs that v and w are scalar multiples, that is if

Av=w=λv,(1){\displaystyle A\mathbf {v} =\mathbf {w} =\lambda \mathbf {v} ,\qquad (1)}
thenvis aneigenvectorof the linear transformationAand the scale factorλis theeigenvaluecorresponding to that eigenvector. Equation (1) is theeigenvalue equationfor the matrixA.
Equation (1) can be stated equivalently as

(A−λI)v=0,(2){\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,\qquad (2)}

where I is the n by n identity matrix and 0 is the zero vector.
Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A−λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation

det(A−λI)=0.(3){\displaystyle \det(A-\lambda I)=0.\qquad (3)}
Using theLeibniz formula for determinants, the left-hand side of equation (3) is apolynomialfunction of the variableλand thedegreeof this polynomial isn, the order of the matrixA. Itscoefficientsdepend on the entries ofA, except that its term of degreenis always (−1)nλn. This polynomial is called thecharacteristic polynomialofA. Equation (3) is called thecharacteristic equationor thesecular equationofA.
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,

det(A−λI)=(λ1−λ)(λ2−λ)⋯(λn−λ),(4){\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),\qquad (4)}
where eachλimay be real but in general is a complex number. The numbersλ1,λ2, ...,λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues ofA.
As a brief example, which is described in more detail in the examples section later, consider the matrixA=[2112].{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
Taking the determinant of(A−λI), the characteristic polynomial ofAisdet(A−λI)=|2−λ112−λ|=3−4λ+λ2.{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.}
Setting the characteristic polynomial equal to zero, it has roots atλ=1andλ=3, which are the two eigenvalues ofA. The eigenvectors corresponding to each eigenvalue can be found by solving for the components ofvin the equation(A−λI)v=0{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} }.In this example, the eigenvectors are any nonzero scalar multiples ofvλ=1=[1−1],vλ=3=[11].{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.}
If the entries of the matrixAare all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may beirrational numberseven if all the entries ofAarerational numbersor even if they are all integers. However, if the entries ofAare allalgebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs ofcomplex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by theintermediate value theoremat least one of the roots is real. Therefore, anyreal matrixwith odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.
The spectrum of a matrix is the list of its eigenvalues, repeated according to multiplicity; in an alternative notation, it is the set of eigenvalues with their multiplicities.
An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as thespectral radiusof the matrix.
Letλibe an eigenvalue of annbynmatrixA. Thealgebraic multiplicityμA(λi) of the eigenvalue is itsmultiplicity as a rootof the characteristic polynomial, that is, the largest integerksuch that (λ−λi)kdivides evenlythat polynomial.[9][25][26]
Suppose a matrixAhas dimensionnandd≤ndistinct eigenvalues. Whereas equation (4) factors the characteristic polynomial ofAinto the product ofnlinear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product ofdterms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,det(A−λI)=(λ1−λ)μA(λ1)(λ2−λ)μA(λ2)⋯(λd−λ)μA(λd).{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.}
Ifd=nthen the right-hand side is the product ofnlinear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimensionnas1≤μA(λi)≤n,μA=∑i=1dμA(λi)=n.{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}}
IfμA(λi) = 1, thenλiis said to be asimple eigenvalue.[26]IfμA(λi) equals the geometric multiplicity ofλi,γA(λi), defined in the next section, thenλiis said to be asemisimple eigenvalue.
Given a particular eigenvalueλof thenbynmatrixA, define thesetEto be all vectorsvthat satisfy equation (2),E={v:(A−λI)v=0}.{\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.}
On one hand, this set is precisely thekernelor nullspace of the matrix(A−λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector ofAassociated withλ. So, the setEis theunionof the zero vector with the set of all eigenvectors ofAassociated withλ, andEequals the nullspace of(A−λI).Eis called theeigenspaceorcharacteristic spaceofAassociated withλ.[27][9]In generalλis a complex number and the eigenvectors are complexnby 1 matrices. A property of the nullspace is that it is alinear subspace, soEis a linear subspace ofCn{\displaystyle \mathbb {C} ^{n}}.
Because the eigenspaceEis a linear subspace, it isclosedunder addition. That is, if two vectorsuandvbelong to the setE, writtenu,v∈E, then(u+v) ∈Eor equivalentlyA(u+v) =λ(u+v). This can be checked using thedistributive propertyof matrix multiplication. Similarly, becauseEis a linear subspace, it is closed under scalar multiplication. That is, ifv∈Eandαis a complex number,(αv) ∈Eor equivalentlyA(αv) =λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers iscommutative. As long asu+vandαvare not zero, they are also eigenvectors ofAassociated withλ.
The dimension of the eigenspaceEassociated withλ, or equivalently the maximum number of linearly independent eigenvectors associated withλ, is referred to as the eigenvalue'sgeometric multiplicityγA(λ){\displaystyle \gamma _{A}(\lambda )}. BecauseEis also the nullspace of(A−λI), the geometric multiplicity ofλis the dimension of the nullspace of(A−λI),also called thenullityof(A−λI),which relates to the dimension and rank of(A−λI)asγA(λ)=n−rank(A−λI).{\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).}
Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceedn.1≤γA(λ)≤μA(λ)≤n{\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n}
To prove the inequalityγA(λ)≤μA(λ){\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )}, consider how the definition of geometric multiplicity implies the existence ofγA(λ){\displaystyle \gamma _{A}(\lambda )}orthonormaleigenvectorsv1,…,vγA(λ){\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}}, such thatAvk=λvk{\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}}. We can therefore find a (unitary) matrixVwhose firstγA(λ){\displaystyle \gamma _{A}(\lambda )}columns are these eigenvectors, and whose remaining columns can be any orthonormal set ofn−γA(λ){\displaystyle n-\gamma _{A}(\lambda )}vectors orthogonal to these eigenvectors ofA. ThenVhas full rank and is therefore invertible. EvaluatingD:=VTAV{\displaystyle D:=V^{T}AV}, we get a matrix whose top left block is the diagonal matrixλIγA(λ){\displaystyle \lambda I_{\gamma _{A}(\lambda )}}. This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding−ξV{\displaystyle -\xi V}on both sides, we get(A−ξI)V=V(D−ξI){\displaystyle (A-\xi I)V=V(D-\xi I)}sinceIcommutes withV. In other words,A−ξI{\displaystyle A-\xi I}is similar toD−ξI{\displaystyle D-\xi I}, anddet(A−ξI)=det(D−ξI){\displaystyle \det(A-\xi I)=\det(D-\xi I)}. But from the definition ofD, we know thatdet(D−ξI){\displaystyle \det(D-\xi I)}contains a factor(ξ−λ)γA(λ){\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}}, which means that the algebraic multiplicity ofλ{\displaystyle \lambda }must satisfyμA(λ)≥γA(λ){\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )}.
Suppose A has d≤n{\displaystyle d\leq n} distinct eigenvalues λ1,…,λd{\displaystyle \lambda _{1},\ldots ,\lambda _{d}}, where the geometric multiplicity of λi{\displaystyle \lambda _{i}} is γA(λi){\displaystyle \gamma _{A}(\lambda _{i})}. The total geometric multiplicity of A,γA=∑i=1dγA(λi),d≤γA≤n,{\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}}is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA=n{\displaystyle \gamma _{A}=n}, then A has n linearly independent eigenvectors, so they form a basis of the whole space and A is diagonalizable (see the eigendecomposition below).
LetA{\displaystyle A}be an arbitraryn×n{\displaystyle n\times n}matrix of complex numbers with eigenvaluesλ1,…,λn{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}. Each eigenvalue appearsμA(λi){\displaystyle \mu _{A}(\lambda _{i})}times in this list, whereμA(λi){\displaystyle \mu _{A}(\lambda _{i})}is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to aright eigenvector, namely acolumnvector thatrightmultiplies then×n{\displaystyle n\times n}matrixA{\displaystyle A}in the defining equation, equation (1),Av=λv.{\displaystyle A\mathbf {v} =\lambda \mathbf {v} .}
The eigenvalue and eigenvector problem can also be defined forrowvectors thatleftmultiply matrixA{\displaystyle A}. In this formulation, the defining equation isuA=κu,{\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,}
whereκ{\displaystyle \kappa }is a scalar andu{\displaystyle u}is a1×n{\displaystyle 1\times n}matrix. Any row vectoru{\displaystyle u}satisfying this equation is called aleft eigenvectorofA{\displaystyle A}andκ{\displaystyle \kappa }is its associated eigenvalue. Taking the transpose of this equation,ATuT=κuT.{\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.}
Comparing this equation to equation (1), it follows immediately that a left eigenvector ofA{\displaystyle A}is the same as the transpose of a right eigenvector ofAT{\displaystyle A^{\textsf {T}}}, with the same eigenvalue. Furthermore, since the characteristic polynomial ofAT{\displaystyle A^{\textsf {T}}}is the same as the characteristic polynomial ofA{\displaystyle A}, the left and right eigenvectors ofA{\displaystyle A}are associated with the same eigenvalues.
Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

Q=[v1v2⋯vn].{\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.}

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

AQ=[λ1v1λ2v2⋯λnvn].{\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.}

With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then

AQ=QΛ.{\displaystyle AQ=Q\Lambda .}

Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1,

A=QΛQ−1,{\displaystyle A=Q\Lambda Q^{-1},}

or by instead left multiplying both sides by Q−1,

Q−1AQ=Λ.{\displaystyle Q^{-1}AQ=\Lambda .}
Acan therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called theeigendecompositionand it is asimilarity transformation. Such a matrixAis said to besimilarto the diagonal matrix Λ ordiagonalizable. The matrixQis the change of basis matrix of the similarity transformation. Essentially, the matricesAand Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.
Conversely, suppose a matrixAis diagonalizable. LetPbe a non-singular square matrix such thatP−1APis some diagonal matrixD. Left multiplying both byP,AP=PD. Each column ofPmust therefore be an eigenvector ofAwhose eigenvalue is the corresponding diagonal element ofD. Since the columns ofPmust be linearly independent forPto be invertible, there existnlinearly independent eigenvectors ofA. It then follows that the eigenvectors ofAform a basis if and only ifAis diagonalizable.
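Numerically, the decomposition can be obtained with a standard eigensolver. A short NumPy sketch on an arbitrary diagonalizable (here symmetric) example matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

eigenvalues, Q = np.linalg.eig(A)    # columns of Q are eigenvectors of A
Lam = np.diag(eigenvalues)           # diagonal matrix of eigenvalues

print(np.allclose(A @ Q, Q @ Lam))                   # AQ = QΛ
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))    # A = QΛQ^{-1}
print(np.allclose(np.linalg.inv(Q) @ A @ Q, Lam))    # Q^{-1}AQ = Λ
```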
A matrix that is not diagonalizable is said to bedefective. For defective matrices, the notion of eigenvectors generalizes togeneralized eigenvectorsand the diagonal matrix of eigenvalues generalizes to theJordan normal form. Over an algebraically closed field, any matrixAhas aJordan normal formand therefore admits a basis of generalized eigenvectors and a decomposition intogeneralized eigenspaces.
In theHermitiancase, eigenvalues can be given a variational characterization. The largest eigenvalue ofH{\displaystyle H}is the maximum value of thequadratic formxTHx/xTx{\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value ofx{\displaystyle \mathbf {x} }that realizes that maximum is an eigenvector.
Consider the matrixA=[2112].{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectorsvof this transformation satisfy equation (1), and the values ofλfor which the determinant of the matrix (A−λI) equals zero are the eigenvalues.
Taking the determinant to find characteristic polynomial ofA,det(A−λI)=|[2112]−λ[1001]|=|2−λ112−λ|=3−4λ+λ2=(λ−3)(λ−1).{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}}
Setting the characteristic polynomial equal to zero, it has roots atλ=1andλ=3, which are the two eigenvalues ofA.
Forλ=1, equation (2) becomes,(A−I)vλ=1=[1111][v1v2]=[00]{\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}}1v1+1v2=0{\displaystyle 1v_{1}+1v_{2}=0}
Any nonzero vector withv1= −v2solves this equation. Therefore,vλ=1=[v1−v1]=[1−1]{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}}is an eigenvector ofAcorresponding toλ= 1, as is any scalar multiple of this vector.
Forλ=3, equation (2) becomes(A−3I)vλ=3=[−111−1][v1v2]=[00]−1v1+1v2=0;1v1−1v2=0{\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}}
Any nonzero vector withv1=v2solves this equation. Therefore,vλ=3=[v1v1]=[11]{\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}}
is an eigenvector ofAcorresponding toλ= 3, as is any scalar multiple of this vector.
Thus, the vectorsvλ=1andvλ=3are eigenvectors ofAassociated with the eigenvaluesλ=1andλ=3, respectively.
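The same example can be checked numerically. Note that np.linalg.eig returns unit-length eigenvectors, i.e. particular scalar multiples of the vectors above, and the ordering of the eigenvalues may vary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)    # eigenvectors are the columns
print(eigenvalues)                              # e.g. [3. 1.]
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, v, np.allclose(A @ v, lam * v))  # each column satisfies A v = λ v
```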
Consider the matrixA=[200034049].{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.}
The characteristic polynomial ofAisdet(A−λI)=|[200034049]−λ[100010001]|=|2−λ0003−λ4049−λ|,=(2−λ)[(3−λ)(9−λ)−16]=−λ3+14λ2−35λ+22.{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}}
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues ofA. These eigenvalues correspond to the eigenvectors[100]T{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}},[0−21]T{\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}},and[012]T{\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}},or any nonzero multiple thereof.
Consider thecyclic permutation matrixA=[010001100].{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.}
This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 −λ3, whose roots areλ1=1λ2=−12+i32λ3=λ2∗=−12−i32{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}wherei{\displaystyle i}is animaginary unitwithi2=−1{\displaystyle i^{2}=-1}.
For the real eigenvalueλ1= 1, any vector with three equal nonzero entries is an eigenvector. For example,A[555]=[555]=1⋅[555].{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.}
For the complex conjugate pair of imaginary eigenvalues,λ2λ3=1,λ22=λ3,λ32=λ2.{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.}
ThenA[1λ2λ3]=[λ2λ31]=λ2⋅[1λ2λ3],{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}andA[1λ3λ2]=[λ3λ21]=λ3⋅[1λ3λ2].{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.}
Therefore, the other two eigenvectors ofAare complex and arevλ2=[1λ2λ3]T{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}}andvλ3=[1λ3λ2]T{\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}}with eigenvaluesλ2andλ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,vλ2=vλ3∗.{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.}
Matrices with entries only along the main diagonal are calleddiagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrixA=[100020003].{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.}
The characteristic polynomial ofAisdet(A−λI)=(1−λ)(2−λ)(3−λ),{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the rootsλ1= 1,λ2= 2, andλ3= 3. These roots are the diagonal elements as well as the eigenvalues ofA.
Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,vλ1=[100],vλ2=[010],vλ3=[001],{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
A matrix whose elements above the main diagonal are all zero is called alowertriangular matrix, while a matrix whose elements below the main diagonal are all zero is called anupper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.
Consider the lower triangular matrix,A=[100120233].{\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.}
The characteristic polynomial ofAisdet(A−λI)=(1−λ)(2−λ)(3−λ),{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the rootsλ1= 1,λ2= 2, andλ3= 3. These roots are the diagonal elements as well as the eigenvalues ofA.
These eigenvalues correspond to the eigenvectors,vλ1=[1−112],vλ2=[01−3],vλ3=[001],{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
As in the previous example, the lower triangular matrixA=[2000120001300013],{\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},}has a characteristic polynomial that is the product of its diagonal elements,det(A−λI)=|2−λ00012−λ00013−λ00013−λ|=(2−λ)2(3−λ)2.{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.}
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. Thealgebraic multiplicityof each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues isμA= 4 =n, the order of the characteristic polynomial and the dimension ofA.
On the other hand, thegeometric multiplicityof the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector[01−11]T{\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}}and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector[0001]T{\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}. The total geometric multiplicityγAis 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
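The geometric multiplicities in this example can be confirmed directly from the rank formula γA(λ) = n − rank(A − λI) given earlier. A small NumPy check:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]
for lam in (2.0, 3.0):
    geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(lam, geometric)   # each eigenvalue has geometric multiplicity 1 (algebraic multiplicity 2)
```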
For aHermitian matrix, the norm squared of thejth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the correspondingminor matrix,|vi,j|2=∏k(λi−λk(Mj))∏k≠i(λi−λk),{\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},}whereMj{\textstyle M_{j}}is thesubmatrixformed by removing thejth row and column from the original matrix.[33][34][35]This identity also extends todiagonalizable matrices, and has been rediscovered many times in the literature.[34][36]
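The identity is easy to spot-check numerically when the eigenvalues are distinct. A sketch with an arbitrary real symmetric test matrix (NumPy's eigh returns orthonormal eigenvectors, so no extra normalisation is needed):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
eigvals, eigvecs = np.linalg.eigh(A)          # columns of eigvecs are orthonormal eigenvectors
for i in range(n):
    for j in range(n):
        M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)   # remove j-th row and column
        minor_eigvals = np.linalg.eigvalsh(M_j)
        numerator = np.prod(eigvals[i] - minor_eigvals)
        denominator = np.prod([eigvals[i] - eigvals[k] for k in range(n) if k != i])
        assert np.isclose(abs(eigvecs[j, i]) ** 2, numerator / denominator)
print("eigenvector-eigenvalue identity verified")
```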
The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation Df(t)=λf(t){\displaystyle Df(t)=\lambda f(t)}
The functions that satisfy this equation are eigenvectors ofDand are commonly calledeigenfunctions.
Consider the derivative operator {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).}
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function {\displaystyle f(t)=f(0)e^{\lambda t},} is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
The maineigenfunctionarticle gives other examples.
The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, {\displaystyle T:V\to V.}
We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .}
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[37][38]
Given an eigenvalue λ, consider the set {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},}
which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.[39]
By definition of a linear transformation, {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}}
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}}
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[40] If that subspace has dimension 1, it is sometimes called an eigenline.[41]
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[9][26][42] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[d]
Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence.
The simplest difference equations have the form {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.}
The solution of this equation for x in terms of t is found by using its characteristic equation {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,}
which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.}
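As a concrete illustration (a sketch not taken from the article), the Fibonacci recurrence x_t = x_{t−1} + x_{t−2} can be stacked into the companion-matrix form described above, and the eigenvalues of that matrix are the characteristic roots used in the solution.

```python
import numpy as np

# Fibonacci recurrence: x_t = 1*x_{t-1} + 1*x_{t-2}, so a = (1, 1) and k = 2.
a = [1.0, 1.0]
k = len(a)

# Companion matrix of the stacked first-order system [x_t, x_{t-1}]^T.
C = np.zeros((k, k))
C[0, :] = a                 # first row holds the recurrence coefficients
C[1:, :-1] = np.eye(k - 1)  # sub-diagonal shifts the lagged values down

roots = np.linalg.eigvals(C)
print(roots)   # approximately [1.618..., -0.618...]: the golden ratio and its conjugate
```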
A similar procedure is used for solving a differential equation of the form
The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating-point.
The eigenvalues of a matrix {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix.
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy.[43] However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial).[43] Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an {\displaystyle n\times n} matrix is a sum of {\displaystyle n!} different products.[e]
Explicit algebraic formulas for the roots of a polynomial exist only if the degree {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree {\displaystyle n} is the characteristic polynomial of some companion matrix of order {\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
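The following sketch (illustrative, using a hypothetical random test matrix) contrasts the two routes in floating-point: forming the characteristic polynomial with numpy.poly and then calling numpy.roots, versus calling a dedicated eigenvalue routine. For a small, well-conditioned matrix the two agree closely, but it is the polynomial route that degrades as the order grows, for the reasons given above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))           # an arbitrary test matrix

# Route 1: coefficients of the characteristic polynomial, then its roots.
coeffs = np.poly(A)                        # det(lambda*I - A) coefficients
roots_via_poly = np.sort_complex(np.roots(coeffs))

# Route 2: a dedicated eigenvalue algorithm (QR-based under the hood).
eigs = np.sort_complex(np.linalg.eigvals(A))

print(np.max(np.abs(roots_via_poly - eigs)))  # small here, but grows badly with matrix order
```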
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix {\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}}
we can find its eigenvectors by solving the equation {\displaystyle Av=6v}, that is {\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}}
This matrix equation is equivalent to two linear equations {\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.} that is {\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.}
Both equations reduce to the single linear equation {\displaystyle y=2x}. Therefore, any vector of the form {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle a}, is an eigenvector of {\displaystyle A} with eigenvalue {\displaystyle \lambda =6}.
The matrix {\displaystyle A} above has another eigenvalue {\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of {\displaystyle 3x+y=0}, that is, any vector of the form {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle b}.
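A brief sketch (not from the article) carries out the same computation numerically: once λ = 6 is known, an eigenvector is any nonzero vector in the null space of A − 6I, which SciPy can return directly.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# Eigenvectors for lambda = 6 are the nonzero solutions of (A - 6I)v = 0.
v = null_space(A - 6 * np.eye(2))[:, 0]
print(v / v[0])                 # approximately [1., 2.], i.e. y = 2x

# Same idea for the other eigenvalue, lambda = 1.
w = null_space(A - 1 * np.eye(2))[:, 0]
print(w / w[0])                 # approximately [1., -3.], i.e. y = -3x
```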
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by {\displaystyle (A-\mu I)^{-1}}; this causes it to converge to an eigenvector of the eigenvalue closest to {\displaystyle \mu \in \mathbb {C} }.
If {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of {\displaystyle A}, then the corresponding eigenvalue can be computed as {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }},}
where {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of {\displaystyle \mathbf {v} }.
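A minimal power-iteration sketch (illustrative only; production libraries use more robust variants) shows both ideas from the last two paragraphs: repeated multiplication drives an arbitrary starting vector toward the dominant eigenvector, and the Rayleigh quotient v*Av / v*v then estimates the corresponding eigenvalue.

```python
import numpy as np

def power_iteration(A, num_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)        # normalize to keep entries a reasonable size
    # Rayleigh quotient: eigenvalue estimate from an (approximate) eigenvector.
    lam = (v.conj() @ A @ v) / (v.conj() @ v)
    return lam, v

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])
lam, v = power_iteration(A)
print(lam)          # approximately 6.0, the dominant eigenvalue
print(v / v[0])     # approximately [1., 2.]
```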
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961.[43] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[43]
Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes.
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
The characteristic equation for a rotation is a quadratic equation with discriminant {\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, {\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
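The following sketch (synthetic data, purely illustrative) performs PCA exactly as described: eigendecomposition of the sample covariance matrix gives the principal components (eigenvectors) and the variance explained by each (eigenvalues).

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: 500 observations of 3 correlated variables.
X = rng.standard_normal((500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [1.0, 1.0, 0.0],
                                              [0.5, 0.2, 0.3]])

X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)         # sample covariance matrix (PSD)

# eigh is appropriate for symmetric matrices; eigenvalues come back ascending.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
print(explained)                               # fraction of variance per principal component
scores = X_centered @ eigenvectors             # data expressed in the principal-component basis
```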
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix {\displaystyle A}, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where {\displaystyle D} is a diagonal matrix with {\displaystyle D_{ii}} equal to the degree of vertex {\displaystyle v_{i}}, and in {\displaystyle D^{-1/2}}, the {\displaystyle i}th diagonal entry is {\textstyle 1/{\sqrt {\deg(v_{i})}}}. The {\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to the {\displaystyle k}th largest or {\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
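A short sketch (with a made-up three-state transition matrix) shows the connection: the stationary distribution is a left eigenvector of the row-stochastic transition matrix for the dominant eigenvalue 1, normalized so that its entries sum to one.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix: P[i, j] = Pr(next state j | current state i).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

# The stationary distribution pi satisfies pi P = pi, i.e. pi is a left eigenvector
# for eigenvalue 1; equivalently a (right) eigenvector of P transposed.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(eigenvectors[:, i])
pi = pi / pi.sum()                 # normalize to a probability distribution
print(pi)                          # steady-state probabilities; pi @ P reproduces pi
```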
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by {\displaystyle m{\ddot {x}}+kx=0} or {\displaystyle m{\ddot {x}}=-kx}
That is, acceleration is proportional to position (i.e., we expect {\displaystyle x} to be sinusoidal in time).
In {\displaystyle n} dimensions, {\displaystyle m} becomes a mass matrix and {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem {\displaystyle kx=\omega ^{2}mx} where {\displaystyle \omega ^{2}} is the eigenvalue and {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of {\displaystyle k} alone. Furthermore, damped vibration, governed by {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, {\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.}
This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
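A small sketch (with hypothetical mass and stiffness matrices for a two-degree-of-freedom system) solves the generalized eigenvalue problem k x = ω² m x with SciPy; the square roots of the eigenvalues are the natural angular frequencies and the eigenvectors are the mode shapes.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF system: two unit masses coupled by unit-stiffness springs.
m = np.array([[1.0, 0.0],
              [0.0, 1.0]])
k = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# Generalized symmetric eigenvalue problem k x = w^2 m x.
omega_squared, modes = eigh(k, m)
print(np.sqrt(omega_squared))   # natural angular frequencies, here 1.0 and sqrt(3)
print(modes)                    # columns are the corresponding mode shapes
```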
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, but the solution neatly generalizes to scalar-valued vibration problems.
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
An example of an eigenvalue equation where the transformation {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: {\displaystyle H\psi _{E}=E\psi _{E}}
where {\displaystyle H}, the Hamiltonian, is a second-order differential operator and {\displaystyle \psi _{E}}, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue {\displaystyle E}, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which {\displaystyle \psi _{E}} and {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by {\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is: {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle }
where {\displaystyle |\Psi _{E}\rangle } is an eigenstate of {\displaystyle H} and {\displaystyle E} represents the eigenvalue. {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation {\displaystyle H} to {\displaystyle |\Psi _{E}\rangle }.
Light,acoustic waves, andmicrowavesare randomlyscatterednumerous times when traversing a staticdisordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrixt{\displaystyle \mathbf {t} }.[44][45]The eigenvectors of the transmission operatort†t{\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues,τ{\displaystyle \tau }, oft†t{\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution withτmax=1{\displaystyle \tau _{\max }=1}andτmin=0{\displaystyle \tau _{\min }=0}.[45]Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.[46]
Inquantum mechanics, and in particular inatomicandmolecular physics, within theHartree–Focktheory, theatomicandmolecular orbitalscan be defined by the eigenvectors of theFock operator. The corresponding eigenvalues are interpreted asionization potentialsviaKoopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by aniterationprocedure, called in this caseself-consistent fieldmethod. Inquantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonalbasis set. This particular representation is ageneralized eigenvalue problemcalledRoothaan equations.
Ingeology, especially in the study ofglacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of aclast'sfabriccan be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as astereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram,.[47][48]A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. A type of stereographic projection is Wulff Net, which is commonly used incrystallographyto createstereograms.[49]
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are orderedv1,v2,v3{\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}}by their eigenvaluesE1≥E2≥E3{\displaystyle E_{1}\geq E_{2}\geq E_{3}};[50]v1{\displaystyle \mathbf {v} _{1}}then is the primary orientation/dip of clast,v2{\displaystyle \mathbf {v} _{2}}is the secondary andv3{\displaystyle \mathbf {v} _{3}}is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on acompass roseof360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values ofE1{\displaystyle E_{1}},E2{\displaystyle E_{2}}, andE3{\displaystyle E_{3}}are dictated by the nature of the sediment's fabric. IfE1=E2=E3{\displaystyle E_{1}=E_{2}=E_{3}}, the fabric is said to be isotropic. IfE1=E2>E3{\displaystyle E_{1}=E_{2}>E_{3}}, the fabric is said to be planar. IfE1>E2>E3{\displaystyle E_{1}>E_{2}>E_{3}}, the fabric is said to be linear.[51]
The basic reproduction number (R0{\displaystyle R_{0}}) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, thenR0{\displaystyle R_{0}}is the average number of people that one typical infectious person will infect. The generation time of an infection is the time,tG{\displaystyle t_{G}}, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after timetG{\displaystyle t_{G}}has passed. The valueR0{\displaystyle R_{0}}is then the largest eigenvalue of the next generation matrix.[52][53]
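A brief sketch (with a hypothetical two-group next generation matrix) computes R0 as the largest eigenvalue, as stated above; here each entry K[i, j] would represent the expected number of infections in group i caused by one infected individual in group j.

```python
import numpy as np

# Hypothetical next generation matrix for a two-group population.
K = np.array([[1.2, 0.5],
              [0.4, 0.8]])

eigenvalues = np.linalg.eigvals(K)
R0 = max(abs(eigenvalues))        # basic reproduction number = spectral radius of K
print(R0)
```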
Inimage processing, processed images of faces can be seen as vectors whose components are thebrightnessesof eachpixel.[54]The dimension of this vector space is the number of pixels. The eigenvectors of thecovariance matrixassociated with a large set of normalized pictures of faces are calledeigenfaces; this is an example ofprincipal component analysis. They are very useful for expressing any face image as alinear combinationof some of them. In thefacial recognitionbranch ofbiometrics, eigenfaces provide a means of applyingdata compressionto faces foridentificationpurposes. Research related to eigen vision systems determining hand gestures has also been made.
Similar to this concept,eigenvoicesrepresent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
|
https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
|
Data Integrity Field (DIF) is an approach to protect data integrity in computer data storage from data corruption. It was proposed in 2003 by the T10 subcommittee of the International Committee for Information Technology Standards.[1] A similar approach for data integrity was added in 2016 to the NVMe 1.2.1 specification.[2]
Packet-based storage transport protocols have CRC protection on command and data payloads. Interconnect buses have parity protection. Memory systems have parity detection/correction schemes. I/O protocol controllers at the transport/interconnect boundaries have internal data path protection.
Data availability in storage systems is frequently measured simply in terms of the reliability of the hardware components and the effects of redundant hardware. But the reliability of the software, its ability to detect errors, and its ability to correctly report or apply corrective actions to a failure have a significant bearing on the overall storage system availability.
The data exchange usually takes place between the host CPU and storage disk. There may be a storage data controller in between these two. The controller could beRAIDcontroller or simple storage switches.
DIF included extending the disk sector from its traditional 512 bytes to 520 bytes, by adding eight additional protection bytes.[1] This extended sector is defined for Small Computer System Interface (SCSI) devices, which is in turn used in many enterprise storage technologies, such as Fibre Channel.[3] Oracle Corporation included support for DIF in the Linux kernel.[4][5]
An evolution of this technology called T10 Protection Information was introduced in 2011.[6][7]
|
https://en.wikipedia.org/wiki/Data_Integrity_Field
|
Quantum networksform an important element ofquantum computingandquantum communicationsystems. Quantum networks facilitate the transmission of information in the form of quantum bits, also calledqubits, between physically separated quantum processors. A quantum processor is a machine able to performquantum circuitson a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems.
Networkedquantum computingor distributed quantum computing[1][2]works by linking multiple quantum processors through a quantum network by sending qubits in between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form acomputer clusterin classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently quantum processors are only separated by short distances.
In the realm ofquantum communication, one wants to sendqubitsfrom one quantumprocessorto another over long distances.[3]This way, local quantum networks can be intra connected into a quantuminternet. A quantum internet[1]supports many applications, which derive their power from the fact that by creatingquantum entangledqubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such asquantum key distributioninquantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time. This is in contrast toquantum computingwhere interesting applications can be realized only if the (combined) quantum processors can easily simulate more qubits than a classical computer (around 60[4]). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed.
The basic structure of a quantum network and more generally a quantum internet is analogous to a classical network. First, we have end nodes on which applications are ultimately run. These end nodes are quantum processors of at least one qubit. Some applications of a quantum internet require quantum processors of several qubits as well as a quantum memory at the end nodes.
Second, to transport qubits from one node to another, we need communication lines. For the purpose of quantum communication, standardtelecomfibers can be used. For networked quantum computing, in which quantum processors are linked at short distances, different wavelengths are chosen depending on the exact hardware platform of the quantum processor.
Third, to make maximum use of communication infrastructure, one requiresoptical switchescapable of delivering qubits to the intended quantum processor. These switches need to preservequantum coherence, which makes them more challenging to realize than standard optical switches.
Finally, one requires a quantumrepeaterto transport qubits over long distances. Repeaters appear in between end nodes.[5]Since qubits cannot be copied (No-cloning theorem), classical signal amplification is not possible. By necessity, a quantum repeater works in a fundamentally different way than a classical repeater.
End nodes can both receive and emit information.[5]Telecommunication lasers andparametric down-conversioncombined with photodetectors can be used forquantum key distribution. In this case, the end nodes can in many cases be very simple devices consisting only ofbeamsplittersand photodetectors.
However, for many protocols more sophisticated end nodes are desirable. These systems provide advanced processing capabilities and can also be used as quantum repeaters. Their chief advantage is that they can store and retransmit quantum information without disrupting the underlyingquantum state. The quantum state being stored can either be the relative spin of an electron in a magnetic field or the energy state of an electron.[5]They can also performquantum logic gates.
One way of realizing such end nodes is by using color centers in diamond, such as thenitrogen-vacancy center. This system forms a small quantum processor featuring severalqubits. NV centers can be utilized at room temperatures.[5]Small scale quantum algorithms and quantum error correction[6]has already been demonstrated in this system, as well as the ability to entangle two[7]and three[8]quantum processors, and perform deterministicquantum teleportation.[9]
Another possible platform are quantum processors based onion traps, which utilize radio-frequency magnetic fields and lasers.[5]In a multispecies trapped-ion node network, photons entangled with a parent atom are used to entangle different nodes.[10]Also, cavity quantum electrodynamics (Cavity QED) is one possible method of doing this. In Cavity QED, photonic quantum states can be transferred to and from atomic quantum states stored in single atoms contained in optical cavities. This allows for the transfer of quantum states between single atoms usingoptical fiberin addition to the creation of remoteentanglementbetween distant atoms.[5][11][12]
Over long distances, the primary method of operating quantum networks is to use optical networks and photon-basedqubits. This is due to optical networks having a reduced chance ofdecoherence. Optical networks have the advantage of being able to re-use existingoptical fiber. Alternately, free space networks can be implemented that transmit quantum information through the atmosphere or through a vacuum.[13]
Optical networks using existingtelecommunication fibercan be implemented using hardware similar to existing telecommunication equipment. This fiber can be either single-mode or multi-mode, with single-mode allowing for more precise communication.[5]At the sender, asingle photonsource can be created by heavily attenuating a standard telecommunication laser such that the mean number ofphotonsper pulse is less than 1. For receiving, anavalanche photodetectorcan be used. Various methods of phase orpolarizationcontrol can be used such asinterferometersandbeam splitters. In the case ofentanglementbased protocols, entangled photons can be generated throughspontaneous parametric down-conversion. In both cases, the telecom fiber can be multiplexed to send non-quantum timing and control signals.
In 2020 a team of researchers affiliated with several institutions in China has succeeded in sending entangled quantum memories over a 50-kilometer coiled fiber cable.[14]
Free space quantum networks operate similar to fiber optic networks but rely on line of sight between the communicating parties instead of using a fiber optic connection. Free space networks can typically support higher transmission rates than fiber optic networks and do not have to account forpolarizationscrambling caused byoptical fiber.[15]However, over long distances, free space communication is subject to an increased chance of environmental disturbance on thephotons.[5]
Free space communication is also possible from a satellite to the ground. A quantum satellite capable ofentanglementdistribution over a distance of 1,203 km[16]has been demonstrated. The experimental exchange of single photons from a global navigation satellite system at a slant distance of 20,000 km has also been reported.[17]These satellites can play an important role in linking smaller ground-based networks over larger distances. In free-space networks, atmospheric conditions such as turbulence, scattering, and absorption present challenges that affect the fidelity of transmitted quantum states. To mitigate these effects, researchers employ adaptive optics, advanced modulation schemes, and error correction techniques.[18]The resilience of QKD protocols against eavesdropping plays a crucial role in ensuring the security of the transmitted data. Specifically, protocols like BB84 and decoy-state schemes have been adapted for free-space environments to improve robustness against potential security vulnerabilities.
Long-distance communication is hindered by the effects of signal loss anddecoherenceinherent to most transport mediums such as optical fiber. In classical communication, amplifiers can be used to boost the signal during transmission, but in a quantum network amplifiers cannot be used sincequbitscannot be copied – known as theno-cloning theorem. That is, to implement an amplifier, the complete state of the flying qubit would need to be determined, something which is both unwanted and impossible.
An intermediary step which allows the testing of communication infrastructure are trusted repeaters. Importantly, a trusted repeater cannot be used to transmitqubitsover long distances. Instead, a trusted repeater can only be used to performquantum key distributionwith the additional assumption that the repeater is trusted. Consider two end nodes A and B, and a trusted repeater R in the middle. A and R now performquantum key distributionto generate a keykAR{\displaystyle k_{AR}}. Similarly, R and B runquantum key distributionto generate a keykRB{\displaystyle k_{RB}}. A and B can now obtain a keykAB{\displaystyle k_{AB}}between themselves as follows: A sendskAB{\displaystyle k_{AB}}to R encrypted with the keykAR{\displaystyle k_{AR}}. R decrypts to obtainkAB{\displaystyle k_{AB}}. R then re-encryptskAB{\displaystyle k_{AB}}using the keykRB{\displaystyle k_{RB}}and sends it to B. B decrypts to obtainkAB{\displaystyle k_{AB}}. A and B now share the keykAB{\displaystyle k_{AB}}. The key is secure from an outside eavesdropper, but clearly the repeater R also knowskAB{\displaystyle k_{AB}}. This means that any subsequent communication between A and B does not provide end to end security, but is only secure as long as A and B trust the repeater R.
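The classical key-relay step of a trusted repeater can be sketched with a one-time-pad style XOR (the quantum key distribution itself is not modeled here; the link keys are simply assumed to exist). The sketch also makes the limitation explicit: the repeater R necessarily learns k_AB.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

KEY_LEN = 32
# Keys assumed to have been produced by QKD on the two links (not modeled here).
k_AR = secrets.token_bytes(KEY_LEN)   # shared by A and the repeater R
k_RB = secrets.token_bytes(KEY_LEN)   # shared by R and B

# A picks the end-to-end key and sends it to R, one-time-pad encrypted with k_AR.
k_AB = secrets.token_bytes(KEY_LEN)
to_R = xor(k_AB, k_AR)

# R decrypts (and therefore learns k_AB), then re-encrypts with k_RB for B.
k_AB_at_R = xor(to_R, k_AR)
to_B = xor(k_AB_at_R, k_RB)

# B decrypts and now shares k_AB with A -- but only as long as R is trusted.
k_AB_at_B = xor(to_B, k_RB)
assert k_AB_at_B == k_AB and k_AB_at_R == k_AB
```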
A true quantum repeater allows the end to end generation of quantum entanglement, and thus – by usingquantum teleportation– the end to end transmission ofqubits. Inquantum key distributionprotocols one can test for such entanglement. This means that when making encryption keys, the sender and receiver are secure even if they do not trust the quantum repeater. Any other application of a quantum internet also requires the end to end transmission of qubits, and thus a quantum repeater.
Quantum repeaters allow entanglement and can be established at distant nodes without physically sending an entangled qubit the entire distance.[19]
In this case, the quantum network consists of many short distance links of perhaps tens or hundreds of kilometers. In the simplest case of a single repeater, two pairs of entangled qubits are established:|A⟩{\displaystyle |A\rangle }and|Ra⟩{\displaystyle |R_{a}\rangle }located at the sender and the repeater, and a second pair|Rb⟩{\displaystyle |R_{b}\rangle }and|B⟩{\displaystyle |B\rangle }located at the repeater and the receiver. These initial entangled qubits can be easily created, for example throughparametric down conversion, with one qubit physically transmitted to an adjacent node. At this point, the repeater can perform aBell measurementon the qubits|Ra⟩{\displaystyle |R_{a}\rangle }and|Rb⟩{\displaystyle |R_{b}\rangle }thus teleporting the quantum state of|Ra⟩{\displaystyle |R_{a}\rangle }onto|B⟩{\displaystyle |B\rangle }. This has the effect of "swapping" the entanglement such that|A⟩{\displaystyle |A\rangle }and|B⟩{\displaystyle |B\rangle }are now entangled at a distance twice that of the initial entangled pairs. It can be seen that a network of such repeaters can be used linearly or in a hierarchical fashion to establish entanglement over great distances.[20][21]
Hardware platforms suitable as end nodes above can also function as quantum repeaters. However, there are also hardware platforms specific only[22]to the task of acting as a repeater, without the capabilities of performing quantum gates.
Error correction can be used in quantum repeaters. Due to technological limitations, however, the applicability is limited to very short distances as quantum error correction schemes capable of protectingqubitsover long distances would require an extremely large amount of qubits and hence extremely large quantum computers.
Errors in communication can be broadly classified into two types: Loss errors (due tooptical fiber/environment) and operation errors (such asdepolarization, dephasing etc.). While redundancy can be used to detect and correct classical errors, redundant qubits cannot be created due to the no-cloning theorem. As a result, other types of error correction must be introduced such as theShor codeor one of a number of more general and efficient codes. All of these codes work by distributing the quantum information across multiple entangled qubits so that operation errors as well as loss errors can be corrected.[23]
In addition to quantum error correction, classical error correction can be employed by quantum networks in special cases such as quantum key distribution. In these cases, the goal of the quantum communication is to securely transmit a string of classical bits. Traditional error correction codes such asHamming codescan be applied to the bit string before encoding and transmission on the quantum network.
Quantum decoherencecan occur when one qubit from a maximally entangled bell state is transmitted across a quantum network. Entanglement purification allows for the creation of nearly maximally entangled qubits from a large number of arbitrary weakly entangled qubits, and thus provides additional protection against errors. Entanglement purification (also known asEntanglement distillation) has already been demonstrated inNitrogen-vacancy centersin diamond.[24]
A quantum internet supports numerous applications, enabled byquantum entanglement. In general, quantum entanglement is well suited for tasks that require coordination, synchronization or privacy.
Examples of such applications includequantum key distribution,[25][26]clock stabilization,[27]protocols for distributed system problems such as leader election orByzantine agreement,[5]extending the baseline oftelescopes,[28][29]as well as position verification,[30][31]secure identification and two-party cryptography in thenoisy-storage model. A quantum internet also enables secure access to a quantum computer[32]in the cloud. Specifically, a quantum internet enables very simple quantum devices to connect to a remote quantum computer in such a way that computations can be performed there without the quantum computer finding out what this computation actually is (the input and output quantum states can not be measured without destroying the computation, but the circuit composition used for the calculation will be known).
When it comes to communicating in any form the largest issue has always been keeping these communications private.[33]Quantum networks would allow for information to be created, stored and transmitted, potentially achieving "a level of privacy, security and computational clout that is impossible to achieve with today’s Internet."[34]
By applying a quantum operator that the user selects to a system of information, the information can then be sent to the receiver without a chance of an eavesdropper being able to accurately record the sent information without either the sender or receiver knowing. Unlike classical information that is transmitted in bits and assigned either a 0 or 1 value, the quantum information used in quantum networks uses quantum bits (qubits), which can have both 0 and 1 value at the same time, being in a state of superposition.[34][35] This works because if a listener tries to listen in, they will change the information in an unintended way by listening, thereby tipping their hand to the people they are attacking. Secondly, without the proper quantum operator to decode the information, they will corrupt the sent information without being able to use it themselves. Furthermore, qubits can be encoded in a variety of materials, including in the polarization of photons or the spin states of electrons.[34]
One example of a prototype quantum communication network is the eight-user city-scale quantum network described in a paper published in September 2020. The network located in Bristol used already deployed fibre-infrastructure and worked without active switching or trusted nodes.[36][37]
In 2022, researchers at the University of Science and Technology of China and the Jinan Institute of Quantum Technology demonstrated quantum entanglement between two memory devices located 12.5 km apart within an urban environment.[38]
In the same year, physicists at the Delft University of Technology in the Netherlands took a significant step toward the network of the future by using a technique called quantum teleportation to send data between three physical locations, which was previously only possible with two.[39]
In 2024, researchers in the U.K and Germany achieved a first by producing, storing, and retrieving quantum information. This milestone involved interfacing a quantum dot light source and a quantum memory system, paving the way for practical applications despite challenges like quantum information loss over long distances.[40]
In February 2025, researchers fromOxford Universityexperimentally demonstrated the distribution of quantum computations between two photonically interconnected trapped-ion modules. Each module contained dedicated network and circuit qubits, and they were separated by approximately two meters. The team achieved deterministic teleportation of a controlled-Z gate between two circuit qubits located in separate modules, attaining an 86% fidelity. This experiment also marked the first implementation of a distributed quantum algorithm comprising multiple non-local two-qubit gates, specificallyGrover's search algorithm, which was executed with a 71% success rate. These advancements represented significant progress toward scalable quantum computing and the development of a quantum internet.[41]
In 2021, researchers at the Max Planck Institute of Quantum Optics in Germany reported a first prototype ofquantum logic gatesfor distributed quantum computers.[42][43]
A research team at the Max-Planck-Institute of Quantum Optics in Garching, Germany is finding success in transporting quantum data between flying and stable qubits via infrared spectrum matching. This requires a sophisticated, super-cooled yttrium silicate crystal to sandwich erbium in a mirrored environment to achieve resonance matching of infrared wavelengths found in fiber optic networks. The team successfully demonstrated the device works without data loss.[44]
In 2021, researchers in China reported the successful transmission of entangled photons between drones, used as nodes for the development of mobile quantum networks or flexible network extensions. This could be the first work in which entangled particles were sent between two moving devices.[45][46] The application of quantum communications to improving 6G mobile networks for joint detection and data transfer with quantum entanglement has also been researched,[47][48] with possible advantages such as security and energy efficiency.[49]
Several test networks have been deployed that are tailored to the task ofquantum key distributioneither at short distances (but connecting many users), or over larger distances by relying on trusted repeaters. These networks do not yet allow for the end to end transmission ofqubitsor the end to end creation of entanglement between far away nodes.
|
https://en.wikipedia.org/wiki/Quantum_network
|
Subitizingis the rapid, accurate, and effortless ability to perceive small quantities of items in aset, typically when there are four or fewer items, without relying on linguistic or arithmetic processes. The term refers to the sensation of instantly knowing how many objects are in the visual scene when their number falls within the subitizing range.[1]
Sets larger than about four to five items cannot be subitized unless the items appear in a pattern with which the person is familiar (such as the six dots on one face of a die). Large, familiar sets might becountedone-by-one (or the person might calculate the number through a rapid calculation if they can mentally group the elements into a few small sets). A person could alsoestimatethe number of a large set—a skill similar to, but different from, subitizing. The term subitizing was coined in 1949 by E. L. Kaufman et al.,[1]and is derived from the Latin adjectivesubitus(meaning "sudden").
The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid,[2]accurate,[3]and confident.[4]However, once there are more than four items to count, judgments are made with decreasing accuracy and confidence.[1]In addition, response times rise in a dramatic fashion, with an extra 250–350ms added for each additional item within the display beyond about four.[5]
While the increase in response time for each additional element within a display is 250–350ms per item outside the subitizing range, there is still a significant, albeit smaller, increase of 40–100ms per item within the subitizing range.[2]A similar pattern of reaction times is found in young children, although with steeper slopes for both the subitizing range and the enumeration range.[6]This suggests there is no span ofapprehensionas such, if this is defined as the number of items which can be immediately apprehended by cognitive processes, since there is an extra cost associated with each additional item enumerated. However, the relative differences in costs associated with enumerating items within the subitizing range are small, whether measured in terms of accuracy, confidence, orspeed of response. Furthermore, the values of all measures appear to differ markedly inside and outside the subitizing range.[1]So, while there may be no span of apprehension, there appear to be real differences in the ways in which a small number of elements is processed by the visual system (i.e. approximately four or fewer items), compared with larger numbers of elements (i.e. approximately more than four items).
A 2006 study demonstrated that subitizing and counting are not restricted to visual perception, but also extend to tactile perception, when observers had to name the number of stimulated fingertips.[7]A 2008 study also demonstrated subitizing and counting in auditory perception.[8]Even though the existence of subitizing in tactile perception has been questioned,[9]this effect has been replicated many times and can be therefore considered as robust.[10][11][12]The subitizing effect has also been obtained in tactile perception with congenitally blind adults.[13]Together, these findings support the idea that subitizing is a general perceptual mechanism extending to auditory and tactile processing.
As the derivation of the term "subitizing" suggests, the feeling associated with making a number judgment within the subitizing range is one of immediately being aware of the displayed elements.[3]When the number of objects presented exceeds the subitizing range, this feeling is lost, and observers commonly report an impression of shifting their viewpoint around the display, until all the elements presented have been counted.[1]The ability of observers to count the number of items within a display can be limited, either by the rapid presentation and subsequent masking of items,[14]or by requiring observers to respond quickly.[1]Both procedures have little, if any, effect on enumeration within the subitizing range. These techniques may restrict the ability of observers to count items by limiting the degree to which observers can shift their "zone of attention"[15]successively to different elements within the display.
Atkinson, Campbell, and Francis[16]demonstrated that visualafterimagescould be employed in order to achieve similar results. Using a flashgun to illuminate a line of white disks, they were able to generate intense afterimages in dark-adapted observers. Observers were required to verbally report how many disks had been presented, both at 10s and at 60s after the flashgun exposure. Observers reported being able to see all the disks presented for at least 10s, and being able to perceive at least some of the disks after 60s. Unlike simply displaying the images for 10 and 60 second intervals, when presented in the form of afterimages, eye movement cannot be employed for the purpose of counting: when the subjects move their eyes, the images also move. Despite a long period of time to enumerate the number of disks presented when the number of disks presented fell outside the subitizing range (i.e., 5–12 disks), observers made consistent enumeration errors in both the 10s and 60s conditions. In contrast, no errors occurred within the subitizing range (i.e., 1–4 disks), in either the 10s or 60s conditions.[17]
The work on theenumerationof afterimages[16][17]supports the view that different cognitive processes operate for the enumeration of elements inside and outside the subitizing range, and as such raises the possibility that subitizing and counting involve different brain circuits. However,functional imagingresearch has been interpreted both to support different[18]and shared processes.[19]
Evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia, one of the key components of Bálint's syndrome.[20] Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position.[20] Despite these dramatic symptoms, such patients are able to correctly recognize individual objects.[21] Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times.[22]
However, people with simultanagnosia have no difficulty enumerating objects within the subitizing range.[23]The disorder is associated with bilateral damage to theparietal lobe, an area of the brain linked with spatial shifts of attention.[18]These neuropsychological results are consistent with the view that the process of counting, but not that of subitizing, requires active shifts of attention. However, recent research has questioned this conclusion by finding that attention also affects subitizing.[24]
A further source of research on the neural processes of subitizing compared to counting comes frompositron emission tomography(PET) research on normal observers. Such research compares the brain activity associated with enumeration processes inside (i.e., 1–4 items) for subitizing, and outside (i.e., 5–8 items) for counting.[18][19]
Such research finds that within the subitizing and counting range activation occurs bilaterally in the occipital extrastriate cortex and superior parietal lobe/intraparietal sulcus. This has been interpreted as evidence that shared processes are involved.[19]However, the existence of further activations during counting in the right inferior frontal regions, and theanterior cingulatehave been interpreted as suggesting the existence of distinct processes during counting related to the activation of regions involved in the shifting of attention.[18]
Historically, many systems have attempted to use subitizing to identify full or partial quantities. In the twentieth century, mathematics educators started to adopt some of these systems, as reviewed in the examples below, but often switched to more abstract color-coding to represent quantities up to ten.
In the 1990s, babies three weeks old were shown to differentiate between 1–3 objects, that is, to subitize.[22]A more recent meta-study summarizing five different studies concluded that infants are born with an innate ability to differentiate quantities within a small range, which increases over time.[25]By the age of seven that ability increases to 4–7 objects. Some practitioners claim that with training, children are capable of subitizing 15+ objects correctly.[citation needed]
The hypothesized use ofyupana, an Inca counting system, placed up to five counters in connected trays for calculations.
In each place value, the Chineseabacususes four or five beads to represent units, which are subitized, and one or two separate beads, which symbolize fives. This allows multi-digit operations such as carrying and borrowing to occur without subitizing beyond five.
European abacuses use ten beads in each register, but usually separate them into fives by color.
The idea of instant recognition of quantities has been adopted by several pedagogical systems, such asMontessori,CuisenaireandDienes. However, these systems only partially use subitizing, attempting to make all quantities from 1 to 10 instantly recognizable. To achieve it, they code quantities by color and length of rods or bead strings representing them. Recognizing such visual or tactile representations and associating quantities with them involves different mental operations from subitizing.
One of the most basic applications is indigit groupingin large numbers, which allow one to tell the size at a glance, rather than having to count. For example, writing one million (1000000) as 1,000,000 (or 1.000.000 or1000000) or one (short) billion (1000000000) as 1,000,000,000 (or other forms, such as 1,00,00,00,000 in theIndian numbering system) makes it much easier to read. This is particularly important in accounting and finance, as an error of a single decimal digit changes the amount by a factor of ten. This is also found in computerprogramming languagesforliteralvalues, some of which usedigit separators.
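Most programming languages expose this kind of grouping directly; in Python, for example, a format specifier inserts the separators, and underscores may be used as digit separators in numeric literals. This is a small illustration added here, not something from the article.

```python
# Grouping digits makes magnitudes recognizable at a glance.
n = 1_000_000_000          # underscore digit separators in the literal itself
print(f"{n:,}")            # '1,000,000,000' (comma-grouped)
print(f"{n:_}")            # '1_000_000_000' (underscore-grouped)
print(f"{1000000:,}")      # '1,000,000'
```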
Dice,playing cardsand other gaming devices traditionally split quantities into subitizable groups with recognizable patterns. The behavioural advantage of this grouping method has been scientifically investigated by Ciccione andDehaene,[26]who showed that counting performances are improved if the groups share the same amount of items and the same repeated pattern.
A comparable application is to split up binary and hexadecimal number representations, telephone numbers, bank account numbers (e.g., IBAN), social security numbers, number plates, etc. into groups ranging from 2 to 5 digits separated by spaces, dots, dashes, or other separators. This makes it easier to check such a number for completeness when comparing or retyping it. Grouping characters in this way also supports easier memorization of large numbers and character structures.
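A minimal sketch of such grouping (Python; the chunk size of four and the sample value are arbitrary choices):

def group(chars, size=4, sep=" "):
    """Split a string of digits or characters into fixed-size groups."""
    return sep.join(chars[i:i + size] for i in range(0, len(chars), size))

print(group("DE89370400440532013000"))   # -> "DE89 3704 0044 0532 0130 00"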
There is at least one game that can be played online to self-assess one's ability to subitize.[27]
|
https://en.wikipedia.org/wiki/Subitizing_and_counting
|
In mathematics, adyadic rationalorbinary rationalis a number that can be expressed as afractionwhosedenominatoris apower of two. For example, 1/2, 3/2, and 3/8 are dyadic rationals, but 1/3 is not. These numbers are important incomputer sciencebecause they are the only ones with finitebinary representations. Dyadic rationals also have applications in weights and measures, musicaltime signatures, and early mathematics education. They can accurately approximate anyreal number.
The sum, difference, or product of any two dyadic rational numbers is another dyadic rational number, given by a simple formula. However, division of one dyadic rational number by another does not always produce a dyadic rational result. Mathematically, this means that the dyadic rational numbers form a ring, lying between the ring of integers and the field of rational numbers. This ring may be denoted Z[1/2]{\displaystyle \mathbb {Z} [{\tfrac {1}{2}}]}.
In advanced mathematics, the dyadic rational numbers are central to the constructions of thedyadic solenoid,Minkowski's question-mark function,Daubechies wavelets,Thompson's group,Prüfer 2-group,surreal numbers, andfusible numbers. These numbers areorder-isomorphicto the rational numbers; they form a subsystem of the2-adic numbersas well as of the reals, and can represent thefractional partsof 2-adic numbers. Functions from natural numbers to dyadic rationals have been used to formalizemathematical analysisinreverse mathematics.
Many traditional systems of weights and measures are based on the idea of repeated halving, which produces dyadic rationals when measuring fractional amounts of units. Theinchis customarily subdivided in dyadic rationals rather than using a decimal subdivision.[1]The customary divisions of thegalloninto half-gallons,quarts,pints, andcupsare also dyadic.[2]The ancient Egyptians used dyadic rationals in measurement, with denominators up to 64.[3]Similarly, systems of weights from theIndus Valley civilisationare for the most part based on repeated halving; anthropologist Heather M.-L. Miller writes that "halving is a relatively simple operation with beam balances, which is likely why so many weight systems of this time period used binary systems".[4]
Dyadic rationals are central tocomputer scienceas a type of fractional number that many computers can manipulate directly.[5]In particular, as a data type used by computers,floating-point numbersare often defined as integers multiplied by positive or negative powers of two. The numbers that can be represented precisely in a floating-point format, such as theIEEE floating-point datatypes, are called its representable numbers. For most floating-point representations, the representable numbers are a subset of the dyadic rationals.[6]The same is true forfixed-point datatypes, which also use powers of two implicitly in the majority of cases.[7]Because of the simplicity of computing with dyadic rationals, they are also used for exact real computing usinginterval arithmetic,[8]and are central to some theoretical models ofcomputable numbers.[9][10][11]
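A small illustration (Python; the chosen values are arbitrary): float.as_integer_ratio exposes the exact dyadic rational stored for a binary floating-point value.

# Every finite IEEE 754 double is exactly an integer divided by a power of two.
print((0.75).as_integer_ratio())   # -> (3, 4): 0.75 is exactly 3/2**2
print((0.1).as_integer_ratio())    # -> (3602879701896397, 36028797018963968)
# 0.1 is not a dyadic rational, so the stored value is the nearest one:
# 3602879701896397 / 2**55.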
Generating arandom variablefrom random bits, in a fixed amount of time, is possible only when the variable has finitely many outcomes whose probabilities are all dyadic rational numbers. For random variables whose probabilities are not dyadic, it is necessary either to approximate their probabilities by dyadic rationals, or to use a random generation process whose time is itself random and unbounded.[12]
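A minimal sketch of the dyadic case (Python; the probability 3/8 is an arbitrary example), in which an event of probability k/2^n is decided from exactly n random bits:

import random

def bernoulli_dyadic(k, n):
    """Return True with probability exactly k / 2**n, using n random bits."""
    return random.getrandbits(n) < k   # uniform on {0, ..., 2**n - 1}

sample = bernoulli_dyadic(3, 3)        # True with probability 3/8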
Time signatures in Western musical notation traditionally are written in a form resembling fractions (for example: 2/2, 4/4, or 6/8),[13] although the horizontal line of the musical staff that separates the top and bottom number is usually omitted when writing the signature separately from its staff. As fractions they are generally dyadic,[14] although non-dyadic time signatures have also been used.[15] The numeric value of the signature, interpreted as a fraction, describes the length of a measure as a fraction of a whole note. Its numerator describes the number of beats per measure, and the denominator describes the length of each beat.[13][14]
In theories of childhood development of the concept of a fraction based on the work ofJean Piaget, fractional numbers arising from halving and repeated halving are among the earliest forms of fractions to develop.[16]This stage of development of the concept of fractions has been called "algorithmic halving".[17]Addition and subtraction of these numbers can be performed in steps that only involve doubling, halving, adding, and subtracting integers. In contrast, addition and subtraction of more general fractions involves integer multiplication and factorization to reach a common denominator. Therefore, dyadic fractions can be easier for students to calculate with than more general fractions.[18]
The dyadic numbers are therational numbersthat result from dividing anintegerby apower of two.[9]A rational numberp/q{\displaystyle p/q}in simplest terms is a dyadic rational whenq{\displaystyle q}is a power of two.[19]Another equivalent way of defining the dyadic rationals is that they are thereal numbersthat have a terminatingbinary representation.[9]
Addition,subtraction, andmultiplicationof any two dyadic rationals produces another dyadic rational, according to the following formulas:[20]
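Writing the two numbers as a/2^b and c/2^d (with integer numerators and non-negative integer exponents), one standard way of stating such formulas is

{\displaystyle {\frac {a}{2^{b}}}+{\frac {c}{2^{d}}}={\frac {a\,2^{d}+c\,2^{b}}{2^{b+d}}},\qquad {\frac {a}{2^{b}}}-{\frac {c}{2^{d}}}={\frac {a\,2^{d}-c\,2^{b}}{2^{b+d}}},\qquad {\frac {a}{2^{b}}}\cdot {\frac {c}{2^{d}}}={\frac {a\,c}{2^{b+d}}}.}

The resulting fractions need not be in lowest terms, but each denominator is again a power of two.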
However, the result ofdividingone dyadic rational by another is not necessarily a dyadic rational.[21]For instance, 1 and 3 are both dyadic rational numbers, but 1/3 is not.
Every integer, and everyhalf-integer, is a dyadic rational.[22]They both meet the definition of being an integer divided by a power of two: every integer is an integer divided by one (the zeroth power of two), and every half-integer is an integer divided by two.
Everyreal numbercan be arbitrarily closely approximated by dyadic rationals. In particular, for a real numberx{\displaystyle x}, consider the dyadic rationals of the form⌊2ix⌋/2i{\textstyle \lfloor 2^{i}x\rfloor /2^{i}},wherei{\displaystyle i}can be any integer and⌊…⌋{\displaystyle \lfloor \dots \rfloor }denotes thefloor functionthat rounds its argument down to an integer. These numbers approximatex{\displaystyle x}from below to within an error of1/2i{\displaystyle 1/2^{i}}, which can be made arbitrarily small by choosingi{\displaystyle i}to be arbitrarily large. For afractalsubset of the real numbers, this error bound is within a constant factor of optimal: for these numbers, there is no approximationn/2i{\displaystyle n/2^{i}}with error smaller than a constant times1/2i{\displaystyle 1/2^{i}}.[23][24]The existence of accurate dyadic approximations can be expressed by saying that the set of all dyadic rationals isdensein thereal line.[22]More strongly, this set is uniformly dense, in the sense that the dyadic rationals with denominator2i{\displaystyle 2^{i}}are uniformly spaced on the real line.[9]
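As a concrete sketch (Python; the target value π and the chosen exponents are arbitrary):

import math
from fractions import Fraction

def dyadic_floor(x, i):
    """The dyadic approximation floor(2**i * x) / 2**i of x from below."""
    return Fraction(math.floor(x * 2 ** i), 2 ** i)

for i in (4, 8, 16):
    approx = dyadic_floor(math.pi, i)
    print(i, approx, float(approx))   # the approximation error is below 1/2**i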
The dyadic rationals are precisely those numbers possessing finite binary expansions.[9] Their binary expansions are not unique; there is one finite and one infinite representation of each dyadic rational other than 0 (ignoring terminal 0s). For example, 0.11₂ = 0.10111...₂, giving two different representations for 3/4.[9][25] The dyadic rationals are the only numbers whose binary expansions are not unique.[9]
Because they are closed under addition, subtraction, and multiplication, but not division, the dyadic rationals are a ring but not a field.[26] The ring of dyadic rationals may be denoted Z[1/2]{\displaystyle \mathbb {Z} [{\tfrac {1}{2}}]}, meaning that it can be generated by evaluating polynomials with integer coefficients, at the argument 1/2.[27] As a ring, the dyadic rationals are a subring of the rational numbers, and an overring of the integers.[28] Algebraically, this ring is the localization of the integers with respect to the set of powers of two.[29]
As well as forming a subring of thereal numbers, the dyadic rational numbers form a subring of the2-adic numbers, a system of numbers that can be defined from binary representations that are finite to the right of the binary point but may extend infinitely far to the left. The 2-adic numbers include all rational numbers, not just the dyadic rationals. Embedding the dyadic rationals into the 2-adic numbers does not change the arithmetic of the dyadic rationals, but it gives them a different topological structure than they have as a subring of the real numbers. As they do in the reals, the dyadic rationals form a dense subset of the 2-adic numbers,[30]and are the set of 2-adic numbers with finite binary expansions. Every 2-adic number can be decomposed into the sum of a 2-adic integer and a dyadic rational; in this sense, the dyadic rationals can represent thefractional partsof 2-adic numbers, but this decomposition is not unique.[31]
Addition of dyadic rationals modulo 1 (the quotient group Z[1/2]/Z{\displaystyle \mathbb {Z} [{\tfrac {1}{2}}]/\mathbb {Z} } of the dyadic rationals by the integers) forms the Prüfer 2-group.[32]
Considering only the addition and subtraction operations of the dyadic rationals gives them the structure of an additiveabelian group.Pontryagin dualityis a method for understanding abelian groups by constructing dual groups, whose elements arecharactersof the original group,group homomorphismsto the multiplicative group of thecomplex numbers, with pointwise multiplication as the dual group operation. The dual group of the additive dyadic rationals, constructed in this way, can also be viewed as atopological group. It is called the dyadic solenoid, and is isomorphic to the topological product of the real numbers and 2-adic numbers,quotientedby thediagonal embeddingof the dyadic rationals into this product.[30]It is an example of aprotorus, asolenoid, and anindecomposable continuum.[33]
Because they are a dense subset of the real numbers, the dyadic rationals, with their numeric ordering, form adense order. As with any two unbounded countable dense linear orders, byCantor's isomorphism theorem,[34]the dyadic rationals areorder-isomorphicto the rational numbers. In this case,Minkowski's question-mark functionprovides an order-preservingbijectionbetween the set of all rational numbers and the set of dyadic rationals.[35]
The dyadic rationals play a key role in the analysis ofDaubechies wavelets, as the set of points where thescaling functionof these wavelets is non-smooth.[26]Similarly, the dyadic rationals parameterize the discontinuities in the boundary between stable and unstable points in the parameter space of theHénon map.[36]
The set ofpiecewise linearhomeomorphismsfrom theunit intervalto itself that have power-of-2 slopes and dyadic-rational breakpoints forms a group under the operation offunction composition. This isThompson's group, the first known example of an infinite butfinitely presentedsimple group.[37]The same group can also be represented by an action on rooted binary trees,[38]or by an action on the dyadic rationals within the unit interval.[32]
Inreverse mathematics, one way of constructing thereal numbersis to represent them as functions fromunary numbersto dyadic rationals, where the value of one of these functions for the argumenti{\displaystyle i}is a dyadic rational with denominator2i{\displaystyle 2^{i}}that approximates the given real number. Defining real numbers in this way allows many of the basic results ofmathematical analysisto be proven within a restricted theory ofsecond-order arithmeticcalled "feasible analysis" (BTFA).[39]
Thesurreal numbersare generated by an iterated construction principle which starts by generating all finite dyadic rationals, and then goes on to create new and strange kinds of infinite, infinitesimal and other numbers.[40]This number system is foundational tocombinatorial game theory, and dyadic rationals arise naturally in this theory as the set of values of certain combinatorial games.[41][42][19]
The fusible numbers are a subset of the dyadic rationals, the closure of the set {0}{\displaystyle \{0\}} under the operation x,y↦(x+y+1)/2{\displaystyle x,y\mapsto (x+y+1)/2}, restricted to pairs x,y{\displaystyle x,y} with |x−y|<1{\displaystyle |x-y|<1}. They are well-ordered, with order type equal to the epsilon number ε₀{\displaystyle \varepsilon _{0}}. For each integer n{\displaystyle n} the smallest fusible number that is greater than n{\displaystyle n} has the form n+1/2^k{\displaystyle n+1/2^{k}}. The existence of k{\displaystyle k} for each n{\displaystyle n} cannot be proven in Peano arithmetic,[43] and k{\displaystyle k} grows so rapidly as a function of n{\displaystyle n} that for n=3{\displaystyle n=3} it is (in Knuth's up-arrow notation for large numbers) already larger than 2↑⁹16{\displaystyle 2\uparrow ^{9}16}.[44]
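A brute-force sketch (Python; the small number of closure rounds is an arbitrary cutoff) that applies the defining operation to generate a few fusible numbers exactly:

from fractions import Fraction
from itertools import product

def some_fusible_numbers(rounds=3):
    """Close {0} under (x + y + 1)/2, restricted to |x - y| < 1, for a few rounds."""
    values = {Fraction(0)}
    for _ in range(rounds):
        values |= {(x + y + 1) / 2
                   for x, y in product(values, repeat=2)
                   if abs(x - y) < 1}
    return values

print([str(q) for q in sorted(some_fusible_numbers())])
# -> ['0', '1/2', '3/4', '7/8', '1', '9/8', '5/4', '11/8', '3/2']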
The usual proof ofUrysohn's lemmautilizes the dyadic fractions for constructing the separating function from the lemma.
|
https://en.wikipedia.org/wiki/Dyadic_rational
|
Top-Down Parsing Language(TDPL) is a type ofanalyticformal grammardeveloped byAlexander Birmanin the early 1970s[1][2][3]in order to study formally the behavior of a common class of practicaltop-down parsersthat support a limited form ofbacktracking. Birman originally named his formalismthe TMG Schema(TS), afterTMG, an earlyparser generator, but it was later given the name TDPL byAhoandUllmanin their classic anthologyThe Theory of Parsing, Translation and Compiling.[4]
Formally, a TDPL grammar G is a quadruple consisting of the following components: a finite set of nonterminal symbols, a finite set of terminal symbols disjoint from the nonterminals, a distinguished starting nonterminal, and a set of rules of the forms described below.
A TDPL grammar can be viewed as an extremely minimalistic formal representation of a recursive descent parser, in which each of the nonterminals schematically represents a parsing function. Each of these nonterminal-functions takes as its input argument a string to be recognized, and yields one of two possible outcomes: success, in which case the function consumes (matches) some initial prefix of the input string, or failure, in which case no input is consumed.
Note that a nonterminal-function may succeed without actually consuming any input, and this is considered an outcome distinct from failure.
A nonterminalAdefined by a rule of the formA→ ε always succeeds without consuming any input, regardless of the input string provided. Conversely, a rule of the formA→falways fails regardless of input. A rule of the formA→asucceeds if the next character in the input string is the terminala, in which case the nonterminal succeeds and consumes that one terminal; if the next input character does not match (or there is no next character), then the nonterminal fails.
A nonterminalAdefined by a rule of the formA→BC/Dfirstrecursivelyinvokes nonterminalB, and ifBsucceeds, invokesCon the remainder of the input string left unconsumed byB. If bothBandCsucceed, thenAin turn succeeds and consumes the same total number of input characters thatBandCtogether did. If eitherBorCfails, however, thenAbacktracksto the original point in the input string where it was first invoked, and then invokesDon that original input string, returning whatever resultDproduces.
Small TDPL grammars built from these rule forms can describe, for example, the regular language consisting of an arbitrary-length sequence of a's and b's, and the context-free Dyck language consisting of arbitrary-length strings of matched braces, such as '{}', '{{}{{}}}', etc.
The above examples can be represented equivalently but much more succinctly inparsing expression grammarnotation asS←(a/b)*andS←({S})*, respectively.
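As an illustrative sketch (Python; the rule encoding and the particular grammar are choices made here, not notation from the original formalism), the four TDPL rule forms can be interpreted by a small recursive recognizer; the grammar below is one possible TDPL description of the first example language:

# Rule forms: ('empty',) for A -> epsilon, ('fail',) for A -> f,
# ('term', 'a') for A -> a, and ('seq', B, C, D) for A -> B C / D.
def parse(grammar, nt, s, pos):
    """Return the position after a successful parse of nt at pos, or None on failure."""
    rule = grammar[nt]
    if rule[0] == 'empty':                       # always succeeds, consumes nothing
        return pos
    if rule[0] == 'fail':                        # always fails
        return None
    if rule[0] == 'term':                        # consume one matching terminal
        return pos + 1 if pos < len(s) and s[pos] == rule[1] else None
    b, c, d = rule[1:]                           # A -> B C / D
    mid = parse(grammar, b, s, pos)
    if mid is not None:
        end = parse(grammar, c, s, mid)
        if end is not None:
            return end
    return parse(grammar, d, s, pos)             # backtrack and try D on the original input

# One possible TDPL grammar for the language of sequences of a's and b's:
# S -> A S / T,  T -> B S / E,  A -> a,  B -> b,  E -> epsilon.
grammar = {
    'S': ('seq', 'A', 'S', 'T'),
    'T': ('seq', 'B', 'S', 'E'),
    'A': ('term', 'a'),
    'B': ('term', 'b'),
    'E': ('empty',),
}
for text in ('', 'ab', 'abba', 'abc'):
    print(text, parse(grammar, 'S', text, 0) == len(text))   # True, True, True, False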
A slight variation of TDPL, known asGeneralized TDPLor GTDPL, greatly increases the apparent expressiveness of TDPL while retaining the same minimalist approach (though they are actually equivalent). In GTDPL, instead of TDPL's recursive rule formA→BC/D, the rule formA→B[C,D]is used. This rule is interpreted as follows: When nonterminalAis invoked on some input string, it first recursively invokesB. IfBsucceeds, thenAsubsequently invokesCon the remainder of the input left unconsumed byB, and returns the result ofCto the original caller. IfBfails, on the other hand, thenAinvokesDon the original input string, and passes the result back to the caller.
The important difference between this rule form and theA→BC/Drule form used in TDPL is thatCandDare neverbothinvoked in the same call toA: that is, the GTDPL rule acts more like a "pure" if/then/else construct usingBas the condition.
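Continuing the sketch above with the same assumed rule encoding, a hypothetical ('cond', B, C, D) form for A → B[C, D] could be interpreted as follows; note that once B succeeds, the outcome of C is final and D is never consulted:

def parse_gtdpl(grammar, nt, s, pos):
    """GTDPL recognizer: like the TDPL sketch, but with ('cond', B, C, D) for A -> B[C, D]."""
    rule = grammar[nt]
    if rule[0] == 'empty':
        return pos
    if rule[0] == 'fail':
        return None
    if rule[0] == 'term':
        return pos + 1 if pos < len(s) and s[pos] == rule[1] else None
    b, c, d = rule[1:]                                # ('cond', B, C, D)
    mid = parse_gtdpl(grammar, b, s, pos)
    if mid is not None:
        return parse_gtdpl(grammar, c, s, mid)        # C's outcome is A's outcome, even if C fails
    return parse_gtdpl(grammar, d, s, pos)            # B failed: run D on the original input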
In GTDPL it is straightforward to express interesting non-context-free languages such as the classic example {aⁿbⁿcⁿ}.
A GTDPL grammar can be reduced to an equivalent TDPL grammar that recognizes the same language, although the process is not straightforward and may greatly increase the number of rules required.[5]Also, both TDPL and GTDPL can be viewed as very restricted forms ofparsing expression grammars, all of which represent the same class of grammars.[5]
|
https://en.wikipedia.org/wiki/Top-down_parsing_language
|
Forkhead box protein P2(FOXP2) is aproteinthat, in humans, is encoded by theFOXP2gene. FOXP2 is a member of theforkhead boxfamily oftranscription factors, proteins thatregulate gene expressionbybinding to DNA. It is expressed in the brain, heart, lungs and digestive system.[5][6]
FOXP2is found in manyvertebrates, where it plays an important role in mimicry in birds (such asbirdsong) andecholocationin bats.FOXP2is also required for the proper development of speech and language in humans.[7]In humans, mutations inFOXP2cause the severe speech and language disorderdevelopmental verbal dyspraxia.[7][8]Studies of the gene in mice and songbirds indicate that it is necessary for vocal imitation and the related motor learning.[9][10][11]Outside the brain,FOXP2has also been implicated in development of other tissues such as the lung and digestive system.[12]
Initially identified in 1998 as the genetic cause of aspeech disorderin a British family designated theKE family,FOXP2was the first gene discovered to be associated with speech and language[13]and was subsequently dubbed "the language gene".[14]However, other genes are necessary for human language development, and a 2018 analysis confirmed that there was no evidence of recent positiveevolutionary selectionofFOXP2in humans.[15][16]
As a FOX protein, FOXP2 contains a forkhead-box domain. In addition, it contains a polyglutamine tract, a zinc finger and a leucine zipper. Through the forkhead-box domain, the protein binds to the DNA of other genes and regulates their activity. Only a few target genes have been identified; however, researchers believe that there could be up to hundreds of other genes targeted by FOXP2. The forkhead box P2 protein is active in the brain and other tissues before and after birth, and many studies show that it is important for the growth of nerve cells and transmission between them. The FOXP2 gene is also involved in synaptic plasticity, making it important for learning and memory.[17]
FOXP2is required for proper brain and lung development.Knockout micewith only one functional copy of theFOXP2gene have significantly reduced vocalizations as pups.[18]Knockout mice with no functional copies ofFOXP2are runted, display abnormalities in brain regions such as thePurkinje layer, and die an average of 21 days after birth from inadequate lung development.[12]
FOXP2 is expressed in many areas of the brain,[19] including the basal ganglia and inferior frontal cortex, where it is essential for brain maturation and speech and language development.[20] In mice, the gene was found to be expressed about twice as highly in male pups as in female pups, which correlated with the male pups producing nearly twice as many vocalisations when separated from their mothers. Conversely, in human children aged 4–5, the gene was found to be 30% more expressed in the Broca's area of female children. The researchers suggested that the gene is more active in "the more communicative sex".[21][22]
The expression of FOXP2 is subject to post-transcriptional regulation, particularly by microRNA (miRNA), which acts on the FOXP2 3' untranslated region to repress its expression.[23]
Three amino acid substitutions distinguish the humanFOXP2protein from that found in mice, while two amino acid substitutions distinguish the humanFOXP2protein from that found in chimpanzees,[19]but only one of these changes is unique to humans.[12]Evidence from genetically manipulated mice[24]and human neuronal cell models[25]suggests that these changes affect the neural functions ofFOXP2.
The FOXP2 gene has been implicated in several cognitive functions, including general brain development, language, and synaptic plasticity. The FOXP2 gene encodes the forkhead box P2 protein, which acts as a transcription factor. Transcription factors regulate other genes, and the forkhead box P2 protein has been suggested to target hundreds of them. This prolific involvement opens the possibility that the role of the FOXP2 gene is much more extensive than originally thought.[17] Other targets of transcription have been researched without finding a correlation to FOXP2. Specifically, FOXP2 has been investigated in connection with autism and dyslexia; however, no mutation was discovered as the cause.[26][8] One well-identified target is language.[27] Although some research disagrees with this correlation,[28] the majority of research shows that a mutated FOXP2 causes the observed production deficiency.[17][27][29][26][30][31]
There is some evidence that the linguistic impairments associated with a mutation of theFOXP2gene are not simply the result of a fundamental deficit in motor control. Brain imaging of affected individuals indicates functional abnormalities in language-related cortical and basal ganglia regions, demonstrating that the problems extend beyond the motor system.[32]
Mutations in FOXP2 are among several (26 genes plus 2 intergenic) loci which correlate toADHDdiagnosis in adults – clinical ADHD is an umbrella label for a heterogeneous group of genetic and neurological phenomena which may result from FOXP2 mutations or other causes.[33]
A 2020genome-wide association study(GWAS) implicatessingle-nucleotide polymorphisms(SNPs) of FOXP2 in susceptibility tocannabis use disorder.[34]
It is theorized that the translocation of the 7q31.2 region of the FOXP2 gene causes a severe language impairment called developmental verbal dyspraxia (DVD)[27] or childhood apraxia of speech (CAS).[35] So far this type of mutation has only been discovered in three families across the world, including the original KE family.[31] A missense mutation causing an arginine-to-histidine substitution (R553H) in the DNA-binding domain is thought to be the abnormality in KE.[36] This would cause a normally basic residue to be fairly acidic and highly reactive at the body's pH. A heterozygous nonsense mutation, the R328X variant, produces a truncated protein involved in speech and language difficulties in one KE individual and two of their close family members. The R553H and R328X mutations also affected nuclear localization, DNA-binding, and the transactivation (increased gene expression) properties of FOXP2.[8]
These individuals present with deletions, translocations, and missense mutations. When tasked with repetition and verb generation, these individuals with DVD/CAS had decreased activation in the putamen and Broca's area in fMRI studies. These areas are commonly known as areas of language function.[37]This is one of the primary reasons that FOXP2 is known as a language gene. They have delayed onset of speech, difficulty with articulation including slurred speech, stuttering, and poor pronunciation, as well as dyspraxia.[31]It is believed that a major part of this speech deficit comes from an inability to coordinate the movements necessary to produce normal speech including mouth and tongue shaping.[27]Additionally, there are more general impairments with the processing of the grammatical and linguistic aspects of speech.[8]These findings suggest that the effects of FOXP2 are not limited to motor control, as they include comprehension among other cognitive language functions. General mild motor and cognitive deficits are noted across the board.[29]Clinically these patients can also have difficulty coughing, sneezing, or clearing their throats.[27]
While FOXP2 has been proposed to play a critical role in the development of speech and language, this view has been challenged by the fact that the gene is also expressed in other mammals as well as birds and fish that do not speak.[38]It has also been proposed that the FOXP2 transcription-factor is not so much a hypothetical 'language gene' but rather part of a regulatory machinery related to externalization of speech.[39]
TheFOXP2gene is highly conserved inmammals.[19]The human gene differs from that innon-human primatesby the substitution of two amino acids, athreoninetoasparaginesubstitution at position 303 (T303N) and an asparagine toserinesubstitution at position 325 (N325S).[36]In mice it differs from that of humans by three substitutions, and inzebra finchby seven amino acids.[19][40][41]One of the two amino acid differences between human and chimps also arose independently in carnivores and bats.[12][42]SimilarFOXP2proteins can be found insongbirds,fish, andreptilessuch asalligators.[43][44]
DNA sampling fromHomo neanderthalensisbones indicates that theirFOXP2gene is a little different though largely similar to those ofHomo sapiens(i.e. humans).[45][46]Previous genetic analysis had suggested that theH. sapiensFOXP2 gene became fixed in the population around 125,000 years ago.[47]Some researchers consider the Neanderthal findings to indicate that the gene instead swept through the population over 260,000 years ago, before our most recent common ancestor with the Neanderthals.[47]Other researchers offer alternative explanations for how theH. sapiensversion would have appeared in Neanderthals living 43,000 years ago.[47]
According to a 2002 study, theFOXP2gene showed indications of recentpositive selection.[19][48]Some researchers have speculated that positive selection is crucial for theevolution of language in humans.[19]Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations inFOXP2.[43][44]A 2018 analysis of a large sample of globally distributed genomes confirmed there was no evidence of positive selection, suggesting that the original signal of positive selection may be driven by sample composition.[15][16]Insertion of both humanmutationsinto mice, whose version ofFOXP2otherwise differs from the human andchimpanzeeversions in only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies, and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed.[24]
FOXP2 is known to regulateCNTNAP2,CTBP1,[49]SRPX2andSCN3A.[50][20][51]
FOXP2 downregulatesCNTNAP2, a member of theneurexinfamily found in neurons.CNTNAP2is associated with common forms of language impairment.[52]
FOXP2 also downregulatesSRPX2, the 'Sushi Repeat-containing Protein X-linked 2'.[53][54]It directly reduces its expression, by binding to its gene'spromoter. SRPX2 is involved inglutamatergicsynapse formationin thecerebral cortexand is more highly expressed in childhood. SRPX2 appears to specifically increase the number of glutamatergic synapses in the brain, while leaving inhibitoryGABAergicsynapses unchanged and not affectingdendritic spinelength or shape. On the other hand, FOXP2's activity does reduce dendritic spine length and shape, in addition to number, indicating it has other regulatory roles in dendritic morphology.[53]
In chimpanzees, FOXP2 differs from the human version by two amino acids.[55]A study in Germany sequenced FOXP2's complementary DNA in chimps and other species to compare it with human complementary DNA in order to find the specific changes in the sequence.[19]FOXP2 was found to be functionally different in humans compared to chimps. Since FOXP2 was also found to have an effect on other genes, its effects on other genes is also being studied.[56]Researchers deduced that there could also be further clinical applications in the direction of these studies in regards to illnesses that show effects on human language ability.[25]
In mouse FOXP2 gene knockouts, loss of both copies of the gene causes severe motor impairment related to cerebellar abnormalities and lack of the ultrasonic vocalisations normally elicited when pups are removed from their mothers.[18] These vocalizations have important communicative roles in mother–offspring interactions. Loss of one copy was associated with impairment of ultrasonic vocalisations and a modest developmental delay. Male mice, on encountering female mice, produce complex ultrasonic vocalisations that have characteristics of song.[57] Mice that have the R552H point mutation carried by the KE family show cerebellar reduction and abnormal synaptic plasticity in striatal and cerebellar circuits.[9]
Humanized FOXP2 mice display alteredcortico-basal gangliacircuits. The human allele of the FOXP2 gene was transferred into the mouse embryos throughhomologous recombinationto create humanized FOXP2 mice. The human variant of FOXP2 also had an effect on the exploratory behavior of the mice. In comparison to knockout mice with one non-functional copy ofFOXP2, the humanized mouse model showed opposite effects when testing its effect on the levels of dopamine, plasticity of synapses, patterns of expression in the striatum and behavior that was exploratory in nature.[24]
When FOXP2 expression was altered in mice, it affected many different processes, including the learning of motor skills and synaptic plasticity. Additionally, FOXP2 is found more in the sixth layer of the cortex than in the fifth, which is consistent with it having greater roles in sensory integration. FOXP2 was also found in the medial geniculate nucleus of the mouse brain, the processing area that auditory inputs must pass through in the thalamus. Its mutations were found to play a role in delaying the development of language learning. It was also found to be highly expressed in the Purkinje cells and cerebellar nuclei of the cortico-cerebellar circuits. High FOXP2 expression has also been shown in the spiny neurons that express type 1 dopamine receptors in the striatum, substantia nigra, subthalamic nucleus and ventral tegmental area. The negative effects of FOXP2 mutations in these brain regions on motor abilities were shown in mice through laboratory tasks. When analyzing the brain circuitry in these cases, scientists found greater levels of dopamine and decreased dendrite lengths, which caused defects in long-term depression, a process implicated in learning and maintaining motor function. Through EEG studies, it was also found that these mice had increased levels of activity in their striatum, which contributed to these results. There is further evidence that mutations of targets of the FOXP2 gene have roles in schizophrenia, epilepsy, autism, bipolar disorder and intellectual disabilities.[58]
FOXP2has implications in the development ofbatecholocation.[36][42][59]Contrary to apes and mice,FOXP2is extremely diverse inecholocating bats.[42]Twenty-two sequences of non-bateutherianmammals revealed a total number of 20 nonsynonymous mutations in contrast to half that number of bat sequences, which showed 44 nonsynonymous mutations.[42]Allcetaceansshare three amino acid substitutions, but no differences were found between echolocatingtoothed whalesand non-echolocatingbaleen cetaceans.[42]Within bats, however, amino acid variation correlated with different echolocating types.[42]
Insongbirds,FOXP2most likely regulates genes involved inneuroplasticity.[10][60]Gene knockdownofFOXP2in area X of thebasal gangliain songbirds results in incomplete and inaccurate song imitation.[10]Overexpression ofFOXP2was accomplished through injection ofadeno-associated virusserotype 1 (AAV1) into area X of the brain. This overexpression produced similar effects to that of knockdown; juvenile zebra finch birds were unable to accurately imitate their tutors.[61]Similarly, in adult canaries, higherFOXP2levels also correlate with song changes.[41]
Levels of FOXP2 in adult zebra finches are significantly higher when males direct their song to females than when they sing in other contexts.[60] "Directed" singing refers to when a male sings to a female, usually as a courtship display. "Undirected" singing occurs when, for example, a male sings while other males are present or when it is alone.[62] Studies have found that FoxP2 levels vary depending on the social context. When the birds were singing undirected song, there was a decrease in FoxP2 expression in Area X. This downregulation was not observed, and FoxP2 levels remained stable, in birds singing directed song.[60]
Differences between song-learning and non-song-learning birds have been shown to be caused by differences inFOXP2gene expression, rather than differences in the amino acid sequence of theFOXP2protein.
In zebrafish, FOXP2 is expressed in the ventral and dorsal thalamus, telencephalon, and diencephalon, where it likely plays a role in nervous system development. The zebrafish FOXP2 gene has an 85% similarity to the human FOXP2 ortholog.[63]
FOXP2 and its gene were discovered as a result of investigations on an English family known as the KE family, half of whom (15 individuals across three generations) had a speech and language disorder called developmental verbal dyspraxia. Their case was studied at the Institute of Child Health of University College London.[64] In 1990, Myrna Gopnik, Professor of Linguistics at McGill University, reported that the disorder-affected KE family had a severe speech impediment with incomprehensible talk, largely characterized by grammatical deficits.[65] She hypothesized that the basis was not a learning or cognitive disability, but genetic factors affecting mainly grammatical ability.[66] (Her hypothesis led to the popularised notion of a "grammar gene" and a controversial notion of a grammar-specific disorder.[67][68]) In 1995, the University of Oxford and the Institute of Child Health researchers found that the disorder was purely genetic.[69] Remarkably, the inheritance of the disorder from one generation to the next was consistent with autosomal dominant inheritance, i.e., mutation of only a single gene on an autosome (non-sex chromosome) acting in a dominant fashion. This is one of the few known examples of Mendelian (monogenic) inheritance for a disorder affecting speech and language skills, which typically have a complex basis involving multiple genetic risk factors.[70]
In 1998, Oxford University geneticistsSimon Fisher,Anthony Monaco, Cecilia S. L. Lai, Jane A. Hurst, andFaraneh Vargha-Khademidentified an autosomal dominant monogenic inheritance that is localized on a small region ofchromosome 7from DNA samples taken from the affected and unaffected members.[5]The chromosomal region (locus) contained 70 genes.[71]The locus was given the official name "SPCH1" (for speech-and-language-disorder-1) by the Human Genome Nomenclature committee. Mapping and sequencing of the chromosomal region was performed with the aid ofbacterial artificial chromosomeclones.[6]Around this time, the researchers identified an individual who was unrelated to the KE family but had a similar type of speech and language disorder. In this case, the child, known as CS, carried a chromosomal rearrangement (atranslocation) in which part of chromosome 7 had become exchanged with part of chromosome 5. The site of breakage of chromosome 7 was located within the SPCH1 region.[6]
In 2001, the team identified in CS that the mutation is in the middle of a protein-coding gene.[7]Using a combination ofbioinformaticsandRNAanalyses, they discovered that the gene codes for a novel protein belonging to theforkhead-box(FOX) group oftranscription factors. As such, it was assigned with the official name of FOXP2. When the researchers sequenced theFOXP2gene in the KE family, they found aheterozygouspoint mutationshared by all the affected individuals, but not in unaffected members of the family and other people.[7]This mutation is due to an amino-acid substitution that inhibits the DNA-binding domain of theFOXP2protein.[72]Further screening of the gene identified multiple additional cases ofFOXP2disruption, including different point mutations[8]and chromosomal rearrangements,[73]providing evidence that damage to one copy of this gene is sufficient to derail speech and language development.
|
https://en.wikipedia.org/wiki/FOXP2
|
Inmathematics, adualitytranslates concepts,theoremsormathematical structuresinto other concepts, theorems or structures in aone-to-onefashion, often (but not always) by means of aninvolutionoperation: if the dual ofAisB, then the dual ofBisA. In other cases the dual of the dual – the double dual or bidual – is not necessarily identical to the original (also calledprimal). Such involutions sometimes havefixed points, so that the dual ofAisAitself. For example,Desargues' theoremisself-dualin this sense under thestandarddualityinprojective geometry.
In mathematical contexts,dualityhas numerous meanings.[1]It has been described as "a very pervasive and important concept in (modern) mathematics"[2]and "an important general theme that has manifestations in almost every area of mathematics".[3]
Many mathematical dualities between objects of two types correspond topairings,bilinear functionsfrom an object of one type and another object of the second type to some family of scalars. For instance,linear algebra dualitycorresponds in this way to bilinear maps from pairs of vector spaces to scalars, theduality betweendistributionsand the associatedtest functionscorresponds to the pairing in which one integrates a distribution against a test function, andPoincaré dualitycorresponds similarly tointersection number, viewed as a pairing between submanifolds of a given manifold.[4]
From acategory theoryviewpoint, duality can also be seen as afunctor, at least in the realm of vector spaces. This functor assigns to each space its dual space, and thepullbackconstruction assigns to each arrowf:V→Wits dualf∗:W∗→V∗.
In the words ofMichael Atiyah,
Duality in mathematics is not a theorem, but a "principle".[5]
The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case.
A simple duality arises from consideringsubsetsof a fixed setS. To any subsetA⊆S, thecomplementA∁[6]consists of all those elements inSthat are not contained inA. It is again a subset ofS. Taking the complement has the following properties:
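In particular, complementation is an involution (taking the complement twice returns the original set), it reverses inclusions, and it exchanges the empty set with the whole set S:

{\displaystyle (A^{\complement })^{\complement }=A,\qquad A\subseteq B{\text{ if and only if }}B^{\complement }\subseteq A^{\complement },\qquad \emptyset ^{\complement }=S\quad {\text{and}}\quad S^{\complement }=\emptyset .}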
This duality appears intopologyas a duality betweenopenandclosed subsetsof some fixed topological spaceX: a subsetUofXis closed if and only if its complement inXis open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed.[7]Theinteriorof a set is the largest open set contained in it, and theclosureof the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any setUis equal to the closure of the complement ofU.
A duality ingeometryis provided by thedual coneconstruction. Given a setC{\displaystyle C}of points in the planeR2{\displaystyle \mathbb {R} ^{2}}(or more generally points inRn{\displaystyle \mathbb {R} ^{n}}),the dual cone is defined as the setC∗⊆R2{\displaystyle C^{*}\subseteq \mathbb {R} ^{2}}consisting of those points(x1,x2){\displaystyle (x_{1},x_{2})}satisfyingx1c1+x2c2≥0{\displaystyle x_{1}c_{1}+x_{2}c_{2}\geq 0}for all points(c1,c2){\displaystyle (c_{1},c_{2})}inC{\displaystyle C}, as illustrated in the diagram.
Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original setC{\displaystyle C}. Instead,C∗∗{\displaystyle C^{**}}is the smallest cone[8]containingC{\displaystyle C}, which may be bigger thanC{\displaystyle C}. Therefore this duality is weaker than the one above, in that applying the construction twice gives back a possibly larger set rather than the original one.
The other two properties carry over without change: taking dual cones reverses inclusions (if C ⊆ D, then D∗ ⊆ C∗), and the extreme cases are exchanged (the dual cone of the single point {0} is the whole space, and the dual cone of the whole space is {0}).
A very important example of a duality arises inlinear algebraby associating to anyvector spaceVitsdual vector spaceV*. Its elements are thelinear functionalsφ:V→K{\displaystyle \varphi :V\to K}, whereKis thefieldover whichVis defined.
The three properties of the dual cone carry over to this type of duality by replacing subsets ofR2{\displaystyle \mathbb {R} ^{2}}by vector spaces and inclusions of such subsets by linear maps. That is, applying the construction twice yields the double dual V∗∗ together with a natural map V → V∗∗, and a linear map V → W gives rise to a dual map W∗ → V∗ in the opposite direction.
A particular feature of this duality is thatVandV*are isomorphic for certain objects, namely finite-dimensional vector spaces. However, this is in a sense a lucky coincidence, for giving such an isomorphism requires a certain choice, for example the choice of abasisofV. This is also true in the case whereVis aHilbert space,viatheRiesz representation theorem.
In all the dualities discussed before, the dual of an object is of the same kind as the object itself. For example, the dual of a vector space is again a vector space. Many duality statements are not of this kind. Instead, such dualities reveal a close relation between objects of seemingly different nature. One example of such a more general duality is fromGalois theory. For a fixedGalois extensionK/F, one may associate theGalois groupGal(K/E)to any intermediate fieldE(i.e.,F⊆E⊆K). This group is a subgroup of the Galois groupG= Gal(K/F). Conversely, to any such subgroupH⊆Gthere is the fixed fieldKHconsisting of elements fixed by the elements inH.
Compared to the above, this duality has the following features: the dual object is of a different kind (a subgroup of G rather than an intermediate field); the correspondence reverses inclusions, so that bigger fields correspond to smaller groups; and, by the fundamental theorem of Galois theory, applying the two constructions one after another returns the original intermediate field or subgroup.
Given a poset P = (X, ≤) (short for partially ordered set; i.e., a set that has a notion of ordering but in which two elements cannot necessarily be placed in order relative to each other), the dual poset Pd = (X, ≥) comprises the same ground set but the converse relation. Familiar examples of dual partial orders include the subset and superset relations on a collection of sets, and the relations "divides" and "is a multiple of" on the integers.
A duality transform is an involutive antiautomorphism f of a partially ordered set S, that is, an order-reversing involution f: S → S.[9][10] In several important cases these simple properties determine the transform uniquely up to some simple symmetries. For example, if f1, f2 are two duality transforms then their composition is an order automorphism of S; thus, any two duality transforms differ only by an order automorphism. For example, all order automorphisms of a power set S = 2^R are induced by permutations of R.
A concept defined for a partial orderPwill correspond to adual concepton the dual posetPd. For instance, aminimal elementofPwill be amaximal elementofPd: minimality and maximality are dual concepts in order theory. Other pairs of dual concepts areupper and lower bounds,lower setsandupper sets, andidealsandfilters.
In topology,open setsandclosed setsare dual concepts: the complement of an open set is closed, and vice versa. Inmatroidtheory, the family of sets complementary to the independent sets of a given matroid themselves form another matroid, called thedual matroid.
There are many distinct but interrelated dualities in which geometric or topological objects correspond to other objects of the same type, but with a reversal of the dimensions of the features of the objects. A classical example of this is the duality of thePlatonic solids, in which the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form a dual pair, and the tetrahedron is self-dual. Thedual polyhedronof any of these polyhedra may be formed as theconvex hullof the center points of each face of the primal polyhedron, so theverticesof the dual correspond one-for-one with the faces of the primal. Similarly, each edge of the dual corresponds to an edge of the primal, and each face of the dual corresponds to a vertex of the primal. These correspondences are incidence-preserving: if two parts of the primal polyhedron touch each other, so do the corresponding two parts of thedual polyhedron. More generally, using the concept ofpolar reciprocation, anyconvex polyhedron, or more generally anyconvex polytope, corresponds to adual polyhedronor dual polytope, with ani-dimensional feature of ann-dimensional polytope corresponding to an(n−i− 1)-dimensional feature of the dual polytope. The incidence-preserving nature of the duality is reflected in the fact that theface latticesof the primal and dual polyhedra or polytopes are themselvesorder-theoretic duals. Duality of polytopes and order-theoretic duality are bothinvolutions: the dual polytope of the dual polytope of any polytope is the original polytope, and reversing all order-relations twice returns to the original order. Choosing a different center of polarity leads to geometrically different dual polytopes, but all have the same combinatorial structure.
From any three-dimensional polyhedron, one can form aplanar graph, the graph of its vertices and edges. The dual polyhedron has adual graph, a graph with one vertex for each face of the polyhedron and with one edge for every two adjacent faces. The same concept of planar graph duality may be generalized to graphs that are drawn in the plane but that do not come from a three-dimensional polyhedron, or more generally tograph embeddingson surfaces of higher genus: one may draw a dual graph by placing one vertex within each region bounded by a cycle of edges in the embedding, and drawing an edge connecting any two regions that share a boundary edge. An important example of this type comes fromcomputational geometry: the duality for any finite setSof points in the plane between theDelaunay triangulationofSand theVoronoi diagramofS. As with dual polyhedra and dual polytopes, the duality of graphs on surfaces is a dimension-reversing involution: each vertex in the primal embedded graph corresponds to a region of the dual embedding, each edge in the primal is crossed by an edge in the dual, and each region of the primal corresponds to a vertex of the dual. The dual graph depends on how the primal graph is embedded: different planar embeddings of a single graph may lead to different dual graphs.Matroid dualityis an algebraic extension of planar graph duality, in the sense that the dual matroid of the graphic matroid of a planar graph is isomorphic to the graphic matroid of the dual graph.
A kind of geometric duality also occurs inoptimization theory, but not one that reverses dimensions. Alinear programmay be specified by a system of real variables (the coordinates for a point in Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}),a system of linear constraints (specifying that the point lie in ahalfspace; the intersection of these halfspaces is a convex polytope, the feasible region of the program), and a linear function (what to optimize). Every linear program has adual problemwith the same optimal solution, but the variables in the dual problem correspond to constraints in the primal problem and vice versa.
In logic, functions or relationsAandBare considered dual ifA(¬x) = ¬B(x), where ¬ islogical negation. The basic duality of this type is the duality of the ∃ and ∀quantifiersin classical logic. These are dual because∃x.¬P(x)and¬∀x.P(x)are equivalent for all predicatesPin classical logic: if there exists anxfor whichPfails to hold, then it is false thatPholds for allx(but the converse does not hold constructively). From this fundamental logical duality follow several others:
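For example, conjunction and disjunction are dual under negation, as expressed by De Morgan's laws:

{\displaystyle \lnot (x\wedge y)=(\lnot x)\vee (\lnot y),\qquad \lnot (x\vee y)=(\lnot x)\wedge (\lnot y).}

Likewise, a formula is valid (true under every interpretation) precisely when its negation is unsatisfiable.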
Other analogous dualities follow from these: for instance, intersection and union of sets are dual under complementation, since the complement of an intersection is the union of the complements (De Morgan's laws for sets).
The dual of the dual, called thebidualordouble dual, depending on context, is often identical to the original (also calledprimal), and duality is an involution. In this case the bidual is not usually distinguished, and instead one only refers to the primal and dual. For example, the dual poset of the dual poset is exactly the original poset, since the converse relation is defined by an involution.
In other cases, the bidual is not identical with the primal, though there is often a close connection. For example, the dual cone of the dual cone of a set contains the primal set (it is the smallest cone containing the primal set), and is equal if and only if the primal set is a cone.
An important case is for vector spaces, where there is a map from the primal space to the double dual,V→V**, known as the "canonical evaluation map". For finite-dimensional vector spaces this is an isomorphism, but these are not identical spaces: they are different sets. In category theory, this is generalized by§ Dual objects, and a "natural transformation" from theidentity functorto the double dual functor. For vector spaces (considered algebraically), this is always an injection; seeDual space § Injection into the double-dual. This can be generalized algebraically to adual module. There is still a canonical evaluation map, but it is not always injective; if it is, this is known as atorsionless module; if it is an isomophism, the module is called reflexive.
Fortopological vector spaces(includingnormed vector spaces), there is a separate notion of atopological dual, denotedV′{\displaystyle V'}to distinguish from the algebraic dualV*, with different possible topologies on the dual, each of which defines a different bidual spaceV″{\displaystyle V''}. In these cases the canonical evaluation mapV→V″{\displaystyle V\to V''}is not in general an isomorphism. If it is, this is known (for certainlocally convexvector spaces with thestrong dual spacetopology) as areflexive space.
In other cases, showing a relation between the primal and bidual is a significant result, as inPontryagin duality(alocally compact abelian groupis naturally isomorphic to its bidual).
A group of dualities can be described by endowing, for any mathematical objectX, the set of morphismsHom (X,D)into some fixed objectD, with a structure similar to that ofX. This is sometimes calledinternal Hom. In general, this yields a true duality only for specific choices ofD, in which caseX*= Hom (X,D)is referred to as thedualofX. There is always a map fromXto thebidual, that is to say, the dual of the dual,X→X∗∗:=(X∗)∗=Hom(Hom(X,D),D).{\displaystyle X\to X^{**}:=(X^{*})^{*}=\operatorname {Hom} (\operatorname {Hom} (X,D),D).}It assigns to somex∈Xthe map that associates to any mapf:X→D(i.e., an element inHom(X,D)) the valuef(x).
Depending on the concrete duality considered and also depending on the objectX, this map may or may not be an isomorphism.
The construction of the dual vector spaceV∗=Hom(V,K){\displaystyle V^{*}=\operatorname {Hom} (V,K)}mentioned in the introduction is an example of such a duality. Indeed, the set of morphisms, i.e.,linear maps, forms a vector space in its own right. The mapV→V**mentioned above is always injective. It is surjective, and therefore an isomorphism, if and only if thedimensionofVis finite. This fact characterizes finite-dimensional vector spaces without referring to a basis.
A vector spaceVis isomorphic toV∗precisely ifVis finite-dimensional. In this case, such an isomorphism is equivalent to a non-degeneratebilinear formφ:V×V→K{\displaystyle \varphi :V\times V\to K}In this caseVis called aninner product space.
For example, ifKis the field ofrealorcomplex numbers, anypositive definitebilinear form gives rise to such an isomorphism. InRiemannian geometry,Vis taken to be thetangent spaceof amanifoldand such positive bilinear forms are calledRiemannian metrics. Their purpose is to measure angles and distances. Thus, duality is a foundational basis of this branch of geometry. Another application of inner product spaces is theHodge starwhich provides a correspondence between the elements of theexterior algebra. For ann-dimensional vector space, the Hodge star operator mapsk-formsto(n−k)-forms. This can be used to formulateMaxwell's equations. In this guise, the duality inherent in the inner product space exchanges the role ofmagneticandelectric fields.
In someprojective planes, it is possible to findgeometric transformationsthat map each point of the projective plane to a line, and each line of the projective plane to a point, in an incidence-preserving way.[11]For such planes there arises a general principle ofduality in projective planes: given any theorem in such a plane projective geometry, exchanging the terms "point" and "line" everywhere results in a new, equally valid theorem.[12]A simple example is that the statement "two points determine a unique line, the line passing through these points" has the dual statement that "two lines determine a unique point, theintersection pointof these two lines". For further examples, seeDual theorems.
A conceptual explanation of this phenomenon in some planes (notably field planes) is offered by the dual vector space. In fact, the points in the projective planeRP2{\displaystyle \mathbb {RP} ^{2}}correspond to one-dimensional subvector spacesV⊂R3{\displaystyle V\subset \mathbb {R} ^{3}}[13]while the lines in the projective plane correspond to subvector spacesW{\displaystyle W}of dimension 2. The duality in such projective geometries stems from assigning to a one-dimensionalV{\displaystyle V}the subspace of(R3)∗{\displaystyle (\mathbb {R} ^{3})^{*}}consisting of those linear mapsf:R3→R{\displaystyle f:\mathbb {R} ^{3}\to \mathbb {R} }which satisfyf(V)=0{\displaystyle f(V)=0}. As a consequence of thedimension formulaoflinear algebra, this space is two-dimensional, i.e., it corresponds to a line in the projective plane associated to(R3)∗{\displaystyle (\mathbb {R} ^{3})^{*}}.
The (positive definite) bilinear form⟨⋅,⋅⟩:R3×R3→R,⟨x,y⟩=∑i=13xiyi{\displaystyle \langle \cdot ,\cdot \rangle :\mathbb {R} ^{3}\times \mathbb {R} ^{3}\to \mathbb {R} ,\langle x,y\rangle =\sum _{i=1}^{3}x_{i}y_{i}}yields an identification of this projective plane with theRP2{\displaystyle \mathbb {RP} ^{2}}. Concretely, the duality assigns toV⊂R3{\displaystyle V\subset \mathbb {R} ^{3}}itsorthogonal{w∈R3,⟨v,w⟩=0for allv∈V}{\displaystyle \left\{w\in \mathbb {R} ^{3},\langle v,w\rangle =0{\text{ for all }}v\in V\right\}}. The explicit formulas induality in projective geometryarise by means of this identification.
In the realm oftopological vector spaces, a similar construction exists, replacing the dual by thetopological dualvector space. There are several notions of topological dual space, and each of them gives rise to a certain concept of duality. A topological vector spaceX{\displaystyle X}that is canonically isomorphic to its bidualX″{\displaystyle X''}is called areflexive space:X≅X″.{\displaystyle X\cong X''.}
Examples of reflexive spaces include all Hilbert spaces, all finite-dimensional normed vector spaces, and the Lp spaces for 1 < p < ∞.
Thedual latticeof alatticeLis given byHom(L,Z),{\displaystyle \operatorname {Hom} (L,\mathbb {Z} ),}the set of linear functions on thereal vector spacecontaining the lattice that map the points of the lattice to the integersZ{\displaystyle \mathbb {Z} }. This is used in the construction oftoric varieties.[16]ThePontryagin dualoflocally compacttopological groupsGis given byHom(G,S1),{\displaystyle \operatorname {Hom} (G,S^{1}),}continuousgroup homomorphismswith values in the circle (with multiplication of complex numbers as group operation).
In another group of dualities, the objects of one theory are translated into objects of another theory and the maps between objects in the first theory are translated into morphisms in the second theory, but with direction reversed. Using the parlance of category theory, this amounts to a contravariant functor F between two categories C and D,

{\displaystyle F\colon C\to D,}

which for any two objects X and Y of C gives a map

{\displaystyle \operatorname {Hom} _{C}(X,Y)\to \operatorname {Hom} _{D}(F(Y),F(X)).}
That functor may or may not be anequivalence of categories. There are various situations, where such a functor is an equivalence between theopposite categoryCopofC, andD. Using a duality of this type, every statement in the first theory can be translated into a "dual" statement in the second theory, where the direction of all arrows has to be reversed.[17]Therefore, any duality between categoriesCandDis formally the same as an equivalence betweenCandDop(CopandD). However, in many circumstances the opposite categories have no inherent meaning, which makes duality an additional, separate concept.[18]
A category that is equivalent to its dual is calledself-dual. An example of self-dual category is the category ofHilbert spaces.[19]
Manycategory-theoreticnotions come in pairs in the sense that they correspond to each other while considering the opposite category. For example,Cartesian productsY1×Y2anddisjoint unionsY1⊔Y2of sets are dual to each other in the sense that
and
for any set X. This is a particular case of a more general duality phenomenon, under which limits in a category C correspond to colimits in the opposite category C^op; further concrete examples of this are epimorphisms vs. monomorphisms, in particular factor modules (or groups etc.) vs. submodules, and direct products vs. direct sums (also called coproducts to emphasize the duality aspect). Therefore, in some cases, proofs of certain statements can be halved using such a duality phenomenon. Further notions related by such a categorical duality are projective and injective modules in homological algebra,[20] and fibrations and cofibrations in topology and more generally in model categories.[21]
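As a concrete illustration of this product/coproduct duality, here is a small Python sketch (the function names are ad hoc) of the correspondences Hom(X, Y1 × Y2) ≅ Hom(X, Y1) × Hom(X, Y2) and, dually, Hom(Y1 ⊔ Y2, X) ≅ Hom(Y1, X) × Hom(Y2, X):

```python
# A map into a product is the same thing as a pair of maps.
def into_product(f1, f2):
    return lambda x: (f1(x), f2(x))

def from_product(f):
    return (lambda x: f(x)[0], lambda x: f(x)[1])

# Dually (arrows reversed): a map out of a disjoint union (here a tagged union)
# is the same thing as a pair of maps, one per summand.
def out_of_coproduct(g1, g2):
    return lambda tagged: g1(tagged[1]) if tagged[0] == 0 else g2(tagged[1])

# usage
h = into_product(lambda x: x + 1, lambda x: x * x)
assert h(3) == (4, 9)

k = out_of_coproduct(str, len)
assert k((0, 42)) == "42" and k((1, [1, 2, 3])) == 3
```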
TwofunctorsF:C→DandG:D→Careadjointif for all objectscinCanddinD
in a natural way. Actually, the correspondence of limits and colimits is an example of adjoints, since there is an adjunction
between the colimit functor that assigns to any diagram in C indexed by some category I its colimit, and the diagonal functor that maps any object c of C to the constant diagram which has c at all places. Dually,
Gelfand duality, a duality between commutative C*-algebras A and compact Hausdorff spaces X, is of the same nature: it assigns to X the space of continuous functions (which vanish at infinity) from X to C, the complex numbers. Conversely, the space X can be reconstructed from A as the spectrum of A. Both Gelfand and Pontryagin duality can be deduced in a largely formal, category-theoretic way.[22]
In a similar vein there is a duality in algebraic geometry between commutative rings and affine schemes: to every commutative ring A there is an affine spectrum, Spec A. Conversely, given an affine scheme S, one gets back a ring by taking global sections of the structure sheaf OS. In addition, ring homomorphisms are in one-to-one correspondence with morphisms of affine schemes, so that there is an equivalence
Affine schemes are the local building blocks of schemes. The previous result therefore tells us that the local theory of schemes is the same as commutative algebra, the study of commutative rings.
Noncommutative geometrydraws inspiration from Gelfand duality and studies noncommutative C*-algebras as if they were functions on some imagined space.Tannaka–Krein dualityis a non-commutative analogue of Pontryagin duality.[24]
In a number of situations, the two categories which are dual to each other actually arise from partially ordered sets, i.e., there is some notion of an object "being smaller" than another one. A duality that respects the orderings in question is known as a Galois connection. An example is the standard duality in Galois theory mentioned in the introduction: a bigger field extension corresponds, under the mapping that assigns to any extension L ⊃ K (inside some fixed bigger field Ω) the Galois group Gal(Ω/L), to a smaller group.[25]
The collection of all open subsets of a topological spaceXforms a completeHeyting algebra. There is a duality, known asStone duality, connectingsober spacesand spatiallocales.
Pontryagin dualitygives a duality on the category oflocally compactabelian groups: given any such groupG, thecharacter group
given by continuous group homomorphisms fromGto thecircle groupS1can be endowed with thecompact-open topology. Pontryagin duality states that the character group is again locally compact abelian and that
Moreover, discrete groups correspond to compact abelian groups; finite groups correspond to finite groups. On the one hand, Pontryagin duality is a special case of Gelfand duality. On the other hand, it is the conceptual reason for Fourier analysis; see below.
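For a concrete instance (a standard fact, stated here only as an illustration), the Pontryagin dual of a finite cyclic group is again cyclic of the same order:

```latex
\[
  \widehat{\mathbb{Z}/n\mathbb{Z}}
  = \operatorname{Hom}\!\left(\mathbb{Z}/n\mathbb{Z},\, S^{1}\right)
  \cong \mathbb{Z}/n\mathbb{Z},
  \qquad
  \chi_{k}(m) = e^{2\pi i k m / n},
\]
```

so applying the duality twice recovers the original group, in line with the general statement above.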
Inanalysis, problems are frequently solved by passing to the dual description of functions and operators.
Fourier transform switches between functions on a vector space and its dual: $\widehat{f}(\xi) := \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$, and conversely $f(x) = \int_{-\infty}^{\infty} \widehat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi$. If $f$ is an $L^2$-function on $\mathbb{R}$ or $\mathbb{R}^N$, say, then so is $\widehat{f}$, and $f(-x) = \widehat{\widehat{f}}(x)$. Moreover, the transform interchanges the operations of multiplication and convolution on the corresponding function spaces. A conceptual explanation of the Fourier transform is obtained by the aforementioned Pontryagin duality, applied to the locally compact groups $\mathbb{R}$ (or $\mathbb{R}^N$ etc.): any character of $\mathbb{R}$ is of the form $x \mapsto e^{-2\pi i x \xi}$ for some $\xi$. The dualizing character of the Fourier transform has many other manifestations, for example, in alternative descriptions of quantum mechanical systems in terms of coordinate and momentum representations.
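A quick numerical sanity check of the transform pair above (a sketch; it uses the fact that $e^{-\pi x^2}$ is its own Fourier transform under this convention and approximates the integral by a Riemann sum on a wide grid):

```python
import numpy as np

# Continuous Fourier transform with the convention e^{-2 pi i x xi},
# approximated on a wide, fine grid.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def fourier(f_vals, xi):
    return np.sum(f_vals * np.exp(-2j * np.pi * x * xi)) * dx

f = np.exp(-np.pi * x**2)        # exp(-pi x^2) is its own Fourier transform
for xi in (0.0, 0.5, 1.3):
    assert abs(fourier(f, xi) - np.exp(-np.pi * xi**2)) < 1e-6
```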
Theorems showing that certain objects of interest are thedual spaces(in the sense of linear algebra) of other objects of interest are often calleddualities. Many of these dualities are given by abilinear pairingof twoK-vector spaces
Forperfect pairings, there is, therefore, an isomorphism ofAto thedualofB.
Poincaré dualityof a smooth compactcomplex manifoldXis given by a pairing of singular cohomology withC-coefficients (equivalently,sheaf cohomologyof theconstant sheafC)
wherenis the (complex) dimension ofX.[27]Poincaré duality can also be expressed as a relation ofsingular homologyandde Rham cohomology, by asserting that the map
(integrating a differentialk-form over a (2n−k)-(real-)dimensional cycle) is a perfect pairing.
Poincaré duality also reverses dimensions; it corresponds to the fact that, if a topologicalmanifoldis represented as acell complex, then the dual of the complex (a higher-dimensional generalization of the planar graph dual) represents the same manifold. In Poincaré duality, this homeomorphism is reflected in an isomorphism of thekthhomologygroup and the (n−k)thcohomologygroup.
The same duality pattern holds for a smooth projective variety over a separably closed field, using l-adic cohomology with Qℓ-coefficients instead.[28] This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality.[29] Serre duality or coherent duality is similar to the statements above, but applies to the cohomology of coherent sheaves instead.[30]
With increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of these dualities can be done usingderived categoriesand certaindirect and inverse image functors of sheaves(with respect to the classical analytical topology on manifolds for Poincaré duality, l-adic sheaves and theétale topologyin the second case, and with respect to coherent sheaves for coherent duality).
Yet another group of similar duality statements is encountered in arithmetic: the étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group G(Fq) of a finite field, for example, is isomorphic to $\widehat{\mathbf{Z}}$, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any G-module M)
is a direct consequence ofPontryagin dualityof finite groups. For local and global fields, similar statements exist (local dualityand global orPoitou–Tate duality).[32]
|
https://en.wikipedia.org/wiki/Duality_(mathematics)
|
In game theory, an n-player game is a game which is well defined for any number of players. This is usually used in contrast to standard 2-player games that are only specified for two players. In defining n-player games, game theorists usually provide a definition that allows for any (finite) number of players.[1] The limiting case of $n \to \infty$ is the subject of mean field game theory.[2]
Changing games from 2-player games to n-player games entails some concerns. For instance, the Prisoner's dilemma is a 2-player game. One might define an n-player Prisoner's Dilemma where a single defection results in everyone else getting the sucker's payoff. Alternatively, it might take a certain amount of defection before the cooperators receive the sucker's payoff. (One example of an n-player Prisoner's Dilemma is the Diner's dilemma.)
n-player games cannot be solved using minimax, the theorem that is the basis of tree searching for 2-player games. Other algorithms, like maxn, are required for traversing the game tree to optimize the score for a specific player.[3]
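A minimal sketch of the idea behind maxn (an illustration of the general approach, not the published algorithm verbatim): each node propagates a payoff vector, and the player to move picks the child that maximizes their own component.

```python
def maxn(node, player, num_players, children, payoff):
    """node: game state; children(node) -> successor states (empty at leaves);
    payoff(node) -> tuple of length num_players at a leaf."""
    succ = children(node)
    if not succ:
        return payoff(node)
    best = None
    for child in succ:
        value = maxn(child, (player + 1) % num_players, num_players, children, payoff)
        if best is None or value[player] > best[player]:
            best = value
    return best

# Tiny made-up 3-player example: states are strings, leaves carry payoff vectors.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
leaf_payoffs = {"b": (1, 0, 2), "a1": (0, 3, 1), "a2": (2, 1, 0)}

value = maxn("root", 0, 3,
             children=lambda s: tree[s],
             payoff=lambda s: leaf_payoffs[s])
assert value == (1, 0, 2)   # player 0 prefers "b" once player 1's reply at "a" is foreseen
```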
|
https://en.wikipedia.org/wiki/Maxn_algorithm
|
In the jargon ofcomputer programming, asource upgradeis a modification of acomputer program'ssource code, which adds new features and options to it, improves performance and stability, or fixesbugsand errors from the previousversion. There are two popular types of source upgrades, which are listed here:
|
https://en.wikipedia.org/wiki/Source_upgrade
|
Inmathematics,complex geometryis the study ofgeometricstructures and constructions arising out of, or described by, thecomplex numbers. In particular, complex geometry is concerned with the study ofspacessuch ascomplex manifoldsandcomplex algebraic varieties, functions ofseveral complex variables, and holomorphic constructions such asholomorphic vector bundlesandcoherent sheaves. Application of transcendental methods toalgebraic geometryfalls in this category, together with more geometric aspects ofcomplex analysis.
Complex geometry sits at the intersection of algebraic geometry,differential geometry, and complex analysis, and uses tools from all three areas. Because of the blend of techniques and ideas from various areas, problems in complex geometry are often more tractable or concrete than in general. For example, the classification of complex manifolds and complex algebraic varieties through theminimal model programand the construction ofmoduli spacessets the field apart from differential geometry, where the classification of possiblesmooth manifoldsis a significantly harder problem. Additionally, the extra structure of complex geometry allows, especially in thecompactsetting, forglobal analyticresults to be proven with great success, includingShing-Tung Yau's proof of theCalabi conjecture, theHitchin–Kobayashi correspondence, thenonabelian Hodge correspondence, and existence results forKähler–Einstein metricsandconstant scalar curvature Kähler metrics. These results often feed back into complex algebraic geometry, and for example recently the classification of Fano manifolds usingK-stabilityhas benefited tremendously both from techniques in analysis and in purebirational geometry.
Complex geometry has significant applications to theoretical physics, where it is essential in understandingconformal field theory,string theory, andmirror symmetry. It is often a source of examples in other areas of mathematics, including inrepresentation theorywheregeneralized flag varietiesmay be studied using complex geometry leading to theBorel–Weil–Bott theorem, or insymplectic geometry, whereKähler manifoldsare symplectic, inRiemannian geometrywhere complex manifolds provide examples of exotic metric structures such asCalabi–Yau manifoldsandhyperkähler manifolds, and ingauge theory, whereholomorphic vector bundlesoften admit solutions to importantdifferential equationsarising out of physics such as theYang–Mills equations. Complex geometry additionally is impactful in pure algebraic geometry, where analytic results in the complex setting such asHodge theoryof Kähler manifolds inspire understanding ofHodge structuresforvarietiesandschemesas well asp-adic Hodge theory,deformation theoryfor complex manifolds inspires understanding of the deformation theory of schemes, and results about thecohomologyof complex manifolds inspired the formulation of theWeil conjecturesandGrothendieck'sstandard conjectures. On the other hand, results and techniques from many of these fields often feed back into complex geometry, and for example developments in the mathematics of string theory and mirror symmetry have revealed much about the nature ofCalabi–Yau manifolds, which string theorists predict should have the structure of Lagrangian fibrations through theSYZ conjecture, and the development ofGromov–Witten theoryofsymplectic manifoldshas led to advances inenumerative geometryof complex varieties.
TheHodge conjecture, one of themillennium prize problems, is a problem in complex geometry.[1]
Broadly, complex geometry is concerned withspacesandgeometric objectswhich are modelled, in some sense, on thecomplex plane. Features of the complex plane andcomplex analysisof a single variable, such as an intrinsic notion oforientability(that is, being able to consistently rotate 90 degrees counterclockwise at every point in the complex plane), and the rigidity ofholomorphic functions(that is, the existence of a single complex derivative implies complex differentiability to all orders) are seen to manifest in all forms of the study of complex geometry. As an example, every complex manifold is canonically orientable, and a form ofLiouville's theoremholds oncompactcomplex manifolds orprojectivecomplex algebraic varieties.
Complex geometry is different in flavour to what might be calledrealgeometry, the study of spaces based around the geometric and analytical properties of thereal number line. For example, whereassmooth manifoldsadmitpartitions of unity, collections of smooth functions which can be identically equal to one on someopen set, and identically zero elsewhere, complex manifolds admit no such collections of holomorphic functions. Indeed, this is the manifestation of theidentity theorem, a typical result in complex analysis of a single variable. In some sense, the novelty of complex geometry may be traced back to this fundamental observation.
It is true that every complex manifold is in particular a real smooth manifold. This is because the complex plane $\mathbb{C}$ is, after forgetting its complex structure, isomorphic to the real plane $\mathbb{R}^2$. However, complex geometry is not typically seen as a particular sub-field of differential geometry, the study of smooth manifolds. In particular, Serre's GAGA theorem says that every projective analytic variety is actually an algebraic variety, and the study of holomorphic data on an analytic variety is equivalent to the study of algebraic data.
This equivalence indicates that complex geometry is in some sense closer toalgebraic geometrythan todifferential geometry. Another example of this which links back to the nature of the complex plane is that, in complex analysis of a single variable, singularities ofmeromorphic functionsare readily describable. In contrast, the possible singular behaviour of a continuous real-valued function is much more difficult to characterise. As a result of this, one can readily studysingularspaces in complex geometry, such as singular complexanalytic varietiesor singular complex algebraic varieties, whereas in differential geometry the study of singular spaces is often avoided.
In practice, complex geometry sits in the intersection of differential geometry, algebraic geometry, andanalysisinseveral complex variables, and a complex geometer uses tools from all three fields to study complex spaces. Typical directions of interest in complex geometry involveclassificationof complex spaces, the study of holomorphic objects attached to them (such asholomorphic vector bundlesandcoherent sheaves), and the intimate relationships between complex geometric objects and other areas of mathematics and physics.
Complex geometry is concerned with the study ofcomplex manifolds, andcomplex algebraicandcomplex analytic varieties. In this section, these types of spaces are defined and the relationships between them presented.
A complex manifold is a topological space $X$ such that:
Notice that since every biholomorphism is a diffeomorphism, and $\mathbb{C}^n$ is isomorphic as a real vector space to $\mathbb{R}^{2n}$, every complex manifold of dimension $n$ is in particular a smooth manifold of dimension $2n$, which is always an even number.
In contrast to complex manifolds, which are always smooth, complex geometry is also concerned with possibly singular spaces. An affine complex analytic variety is a subset $X \subseteq \mathbb{C}^n$ such that about each point $p \in X$, there is an open neighbourhood $U$ of $p$ and a collection of finitely many holomorphic functions $f_1, \dots, f_k : U \to \mathbb{C}$ such that $X \cap U = \{ z \in U \mid f_1(z) = \cdots = f_k(z) = 0 \} = Z(f_1, \dots, f_k)$. By convention we also require the set $X$ to be irreducible. A point $p \in X$ is singular if the Jacobian matrix of the vector of holomorphic functions $(f_1, \dots, f_k)$ does not have full rank at $p$, and non-singular otherwise. A projective complex analytic variety is a subset $X \subseteq \mathbb{CP}^n$ of complex projective space that is, in the same way, locally given by the zeroes of a finite collection of holomorphic functions on open subsets of $\mathbb{CP}^n$.
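A small symbolic sketch of the Jacobian-rank test for singular points, on a hypothetical example (the cuspidal cubic $y^2 = x^3$ in $\mathbb{C}^2$, cut out by a single polynomial; the rank test below assumes the number of defining equations equals the expected codimension):

```python
import sympy as sp

x, y = sp.symbols("x y")
equations = [y**2 - x**3]          # defining equation of the cuspidal cubic

# Jacobian matrix of the defining equations with respect to the ambient coordinates.
J = sp.Matrix([[sp.diff(f, v) for v in (x, y)] for f in equations])

def is_singular(point):
    # Singular when the Jacobian fails to have full rank at the point.
    return J.subs({x: point[0], y: point[1]}).rank() < len(equations)

print(is_singular((0, 0)))   # True: the cusp at the origin is a singular point
print(is_singular((1, 1)))   # False: a smooth point of the curve
```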
One may similarly define an affine complex algebraic variety to be a subset $X \subseteq \mathbb{C}^n$ which is locally given as the zero set of finitely many polynomials in $n$ complex variables. To define a projective complex algebraic variety, one requires the subset $X \subseteq \mathbb{CP}^n$ to be locally given by the zero set of finitely many homogeneous polynomials.
In order to define a general complex algebraic or complex analytic variety, one requires the notion of a locally ringed space. A complex algebraic/analytic variety is a locally ringed space $(X, \mathcal{O}_X)$ which is locally isomorphic as a locally ringed space to an affine complex algebraic/analytic variety. In the analytic case, one typically allows $X$ to have a topology that is locally equivalent to the subspace topology, due to the identification with open subsets of $\mathbb{C}^n$, whereas in the algebraic case $X$ is often equipped with a Zariski topology. Again we also by convention require this locally ringed space to be irreducible.
Since the definition of a singular point is local, the definition given for an affine analytic/algebraic variety applies to the points of any complex analytic or algebraic variety. The set of points of a variety $X$ which are singular is called the singular locus, denoted $X^{\mathrm{sing}}$, and the complement is the non-singular or smooth locus, denoted $X^{\mathrm{nonsing}}$. We say a complex variety is smooth or non-singular if its singular locus is empty, that is, if it is equal to its non-singular locus.
By theimplicit function theoremfor holomorphic functions, every complex manifold is in particular a non-singular complex analytic variety, but is not in general affine or projective. By Serre's GAGA theorem, every projective complex analytic variety is actually a projective complex algebraic variety. When a complex variety is non-singular, it is a complex manifold. More generally, the non-singular locus ofanycomplex variety is a complex manifold.
Complex manifolds may be studied from the perspective of differential geometry, whereby they are equipped with extra geometric structures such as a Riemannian metric or symplectic form. In order for this extra structure to be relevant to complex geometry, one should ask for it to be compatible with the complex structure in a suitable sense. A Kähler manifold is a complex manifold with a Riemannian metric and symplectic structure compatible with the complex structure. Every complex submanifold of a Kähler manifold is Kähler, and so in particular every non-singular affine or projective complex variety is Kähler, after restricting the standard Hermitian metric on $\mathbb{C}^n$ or the Fubini–Study metric on $\mathbb{CP}^n$ respectively.
Other important examples of Kähler manifolds includeRiemann surfaces,K3 surfaces, andCalabi–Yau manifolds.
Serre's GAGA theorem asserts that projective complex analytic varieties are actually algebraic. Whilst this is not strictly true for affine varieties, there is a class of complex manifolds that act very much like affine complex algebraic varieties, called Stein manifolds. A manifold $X$ is Stein if it is holomorphically convex and holomorphically separable (see the article on Stein manifolds for the technical definitions). It can be shown, however, that this is equivalent to $X$ being a complex submanifold of $\mathbb{C}^n$ for some $n$. Another way in which Stein manifolds are similar to affine complex algebraic varieties is that Cartan's theorems A and B hold for Stein manifolds.
Examples of Stein manifolds include non-compact Riemann surfaces and non-singular affine complex algebraic varieties.
A special class of complex manifolds is hyper-Kähler manifolds, which are Riemannian manifolds admitting three distinct compatible integrable almost complex structures $I, J, K$ which satisfy the quaternionic relations $I^2 = J^2 = K^2 = IJK = -\operatorname{Id}$. Thus, hyper-Kähler manifolds are Kähler manifolds in three different ways, and subsequently have a rich geometric structure.
Examples of hyper-Kähler manifolds includeALE spaces,K3 surfaces,Higgs bundlemoduli spaces,quiver varieties, and many othermoduli spacesarising out ofgauge theoryandrepresentation theory.
As mentioned, a particular class of Kähler manifolds is given by Calabi–Yau manifolds. These are given by Kähler manifolds with trivial canonical bundle $K_X = \Lambda^n T^{*}_{1,0} X$. Typically the definition of a Calabi–Yau manifold also requires $X$ to be compact. In this case Yau's proof of the Calabi conjecture implies that $X$ admits a Kähler metric with vanishing Ricci curvature, and this may be taken as an equivalent definition of Calabi–Yau.
Calabi–Yau manifolds have found use instring theoryandmirror symmetry, where they are used to model the extra 6 dimensions of spacetime in 10-dimensional models of string theory. Examples of Calabi–Yau manifolds are given byelliptic curves, K3 surfaces, and complexAbelian varieties.
A complex Fano variety is a complex algebraic variety with ample anti-canonical line bundle (that is, $K_X^{*}$ is ample). Fano varieties are of considerable interest in complex algebraic geometry, and in particular birational geometry, where they often arise in the minimal model program. Fundamental examples of Fano varieties are given by projective space $\mathbb{CP}^n$, where $K = \mathcal{O}(-n-1)$, and smooth hypersurfaces of $\mathbb{CP}^n$ of degree less than $n+1$.
Toric varieties are complex algebraic varieties of dimension $n$ containing an open dense subset biholomorphic to $(\mathbb{C}^*)^n$, equipped with an action of $(\mathbb{C}^*)^n$ which extends the action on the open dense subset. A toric variety may be described combinatorially by its toric fan, and at least when it is non-singular, by a moment polytope. This is a polytope in $\mathbb{R}^n$ with the property that any vertex may be put into the standard form of the vertex of the positive orthant by the action of $\operatorname{GL}(n, \mathbb{Z})$. The toric variety can be obtained as a suitable space which fibres over the polytope.
Many constructions that are performed on toric varieties admit alternate descriptions in terms of the combinatorics and geometry of the moment polytope or its associated toric fan. This makes toric varieties a particularly attractive test case for many constructions in complex geometry. Examples of toric varieties include complex projective spaces, and bundles over them.
Due to the rigidity of holomorphic functions and complex manifolds, the techniques typically used to study complex manifolds and complex varieties differ from those used in regular differential geometry, and are closer to techniques used in algebraic geometry. For example, in differential geometry, many problems are approached by taking local constructions and patching them together globally using partitions of unity. Partitions of unity do not exist in complex geometry, and so the problem of when local data may be glued into global data is more subtle. Precisely when local data may be patched together is measured bysheaf cohomology, andsheavesand theircohomology groupsare major tools.
For example, famous problems in the analysis of several complex variables preceding the introduction of modern definitions are theCousin problems, asking precisely when local meromorphic data may be glued to obtain a global meromorphic function. These old problems can be simply solved after the introduction of sheaves and cohomology groups.
Special examples of sheaves used in complex geometry include holomorphicline bundles(and thedivisorsassociated to them),holomorphic vector bundles, andcoherent sheaves. Since sheaf cohomology measures obstructions in complex geometry, one technique that is used is to prove vanishing theorems. Examples of vanishing theorems in complex geometry include theKodaira vanishing theoremfor the cohomology of line bundles on compact Kähler manifolds, andCartan's theorems A and Bfor the cohomology of coherent sheaves on affine complex varieties.
Complex geometry also makes use of techniques arising out of differential geometry and analysis. For example, theHirzebruch-Riemann-Roch theorem, a special case of theAtiyah-Singer index theorem, computes theholomorphic Euler characteristicof a holomorphic vector bundle in terms of characteristic classes of the underlying smooth complex vector bundle.
One major theme in complex geometry isclassification. Due to the rigid nature of complex manifolds and varieties, the problem of classifying these spaces is often tractable. Classification in complex and algebraic geometry often occurs through the study ofmoduli spaces, which themselves are complex manifolds or varieties whose points classify other geometric objects arising in complex geometry.
The term moduli was coined by Bernhard Riemann during his original work on Riemann surfaces. The classification theory is most well-known for compact Riemann surfaces. By the classification of closed oriented surfaces, compact Riemann surfaces come in a countable number of discrete types, measured by their genus $g$, which is a non-negative integer counting the number of holes in the given compact Riemann surface.
The classification essentially follows from theuniformization theorem, and is as follows:[2][3][4]
Complex geometry is concerned not only with complex spaces, but with other holomorphic objects attached to them. The classification of holomorphic line bundles on a complex variety $X$ is given by the Picard variety $\operatorname{Pic}(X)$ of $X$.
The Picard variety can be easily described in the case where $X$ is a compact Riemann surface of genus $g$. Namely, in this case the Picard variety is a disjoint union of complex Abelian varieties, each of which is isomorphic to the Jacobian variety of the curve, classifying divisors of degree zero up to linear equivalence. In differential-geometric terms, these Abelian varieties are complex tori, complex manifolds diffeomorphic to $(S^1)^{2g}$, possibly with one of many different complex structures.
By the Torelli theorem, a compact Riemann surface is determined by its Jacobian variety, and this demonstrates one reason why the study of structures on complex spaces can be useful, in that it can allow one to classify the spaces themselves.
|
https://en.wikipedia.org/wiki/Complex_geometry
|
In thephilosophy of mathematics, thepre-intuitionistsis the name given byL. E. J. Brouwerto several influential mathematicians who shared similar opinions on the nature of mathematics. The term was introduced by Brouwer in his 1951 lectures atCambridgewhere he described the differences between his philosophy ofintuitionismand its predecessors:[1]
Of a totally different orientation [from the "Old Formalist School" ofDedekind,Cantor,Peano,Zermelo, andCouturat, etc.] was the Pre-Intuitionist School, mainly led byPoincaré,BorelandLebesgue. These thinkers seem to have maintained a modified observational standpoint for theintroduction of natural numbers, forthe principle of complete induction[...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic.
The pre-intuitionists, as defined byL. E. J. Brouwer, differed from theformaliststandpoint in several ways,[1]particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. ForPoincaré, the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence.
This is to say that no mathematical object exists without human construction of it, both in mind and language.
This sense of definition allowedPoincaréto argue withBertrand RusselloverGiuseppe Peano'saxiomatic theory of natural numbers.
Peano's fifth axiom states: if a property holds of zero, and holds of the successor of every natural number of which it holds, then it holds of all natural numbers.
This is the principle of complete induction, which establishes the property of induction as necessary to the system. Since Peano's axiom is as infinite as the natural numbers, it is difficult to prove that the property P belongs to any x and also to x + 1. What one can say is that, if after some number n of trials the property P is conserved in x and x + 1, then we may infer that it will still hold after n + 1 trials. But this is itself induction, and hence the argument begs the question.
From this Poincaré argues that if we fail to establish the consistency of Peano's axioms for natural numbers without falling into circularity, then the principle ofcomplete inductionis not provable bygeneral logic.
Thus arithmetic, and mathematics in general, is not analytic but synthetic. Logicism is thus rebuked and intuition is held up. What Poincaré and the Pre-Intuitionists shared was the perception of a difference between logic and mathematics that is not a matter of language alone, but of knowledge itself.
It was for this assertion, among others, thatPoincaréwas considered to be similar to the intuitionists. ForBrouwerthough, the Pre-Intuitionists failed to go as far as necessary in divesting mathematics from metaphysics, for they still usedprincipium tertii exclusi(the "law of excluded middle").
The principle of the excluded middle does lead to some strange situations. For instance, statements about the future such as "There will be a naval battle tomorrow" do not seem to be either true or false, yet. So there is some question whether statements must be either true or false in some situations. To an intuitionist this seems to rank the law of excluded middle as just as unrigorous as Peano's vicious circle.
Yet to the Pre-Intuitionists this is mixing apples and oranges. For them mathematics was one thing (a muddled invention of the human mind,i.e., synthetic), and logic was another (analytic).
The above examples only include the works of Poincaré, and yet Brouwer named other mathematicians as Pre-Intuitionists too: Borel and Lebesgue. Other mathematicians such as Hermann Weyl (who eventually became disenchanted with intuitionism, feeling that it places excessive strictures on mathematical progress) and Leopold Kronecker also played a role, though they are not cited by Brouwer in his definitive speech.
In fact Kronecker might be the most famous of the Pre-Intuitionists for his singular and oft quoted phrase, "God made the natural numbers; all else is the work of man."
Kronecker goes in almost the opposite direction from Poincaré, believing in the natural numbers but not the law of the excluded middle. He was the first mathematician to express doubt onnon-constructiveexistence proofsthat state that something must exist because it can be shown that it is "impossible" for it not to.
|
https://en.wikipedia.org/wiki/Preintuitionism
|
Asoftware design description(a.k.a.software design documentorSDD; justdesign document; alsoSoftware Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design’s stakeholders.[1]An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outline all parts of the software and how they will work.
The SDD usually contains the following information:
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
IEEE 1016-2009, titledIEEE Standard for Information Technology—Systems Design—Software Design Descriptions,[2]is anIEEEstandard that specifies "the required information content and organization" for an SDD.[3]IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions."[4]
The 2009 edition was a major revision to IEEE 1016-1998, elevating it from recommended practice to full standard. This revision was modeled afterIEEE Std 1471-2000,Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts ofview, viewpoint, stakeholder, and concernfrom architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016,Introduction]
Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use:[5]
In addition, users of the standard are not limited to these viewpoints but may define their own.[6]
IEEE 1016-2009 is currently listed as 'Inactive - Reserved'.[7]
|
https://en.wikipedia.org/wiki/Design_document
|
Theblocking of YouTube videos in Germanywas part of a former dispute between the video sharing platformYouTubeand theGesellschaft für musikalische Aufführungs- und mechanische Vervielfältigungsrechte(GEMA, or "Society for Musical Performance and Mechanical Reproduction Rights" in English), a performance rights organization in Germany.
According to a German court inHamburg,Google's subsidiary YouTube could be held liable for damages when it hosts copyrighted videos without the copyright holder's permission.[1]As a result, music videos formajor labelartists on YouTube, as well as many videos containing background music, have beengeoblockedin Germany since the end of March 2009 after the previous agreement had expired and negotiations for a new license agreement were stopped. On 30 June 2015, Google won a partial victory against GEMA in a state court in Munich, which ruled that they could not be held liable for such damages.[2]
In July 2015, the higher regional court of Hamburg also rejected GEMA's claim for €1.6 million in damages.[3]
In 2016, YouTube and GEMA, which represents 70,000 composers and publishers, reached a settlement agreement. The settlement sum is unknown.
According to Google, GEMA sought to raise its fee charged to YouTube to a "prohibitive" 12 Eurocent per streamed video—a claim that is disputed by GEMA spokesperson Bettina Müller stating their proposal was 1 Eurocent only plus a breakdown by composer.[4][5][6]The issue is set to be taken up by aCaliforniacourt.[7]Google, the world's biggest Internet search engine company, partly lost a German copyright infringement suit over how much it must do to remove illegal music videos from its YouTube website.[8]
A study sponsored by the video hosting websiteMyVideoestimated that 61.5% of the 1000 most viewed YouTube clips are blocked in Germany. This is significantly higher than, for example, in the United States (0.9%) or in Switzerland (1.2%).[9]
Another study found that around 3% of all YouTube videos, and 10% of those videos with over a million views, are blocked in Germany.[10]
Sony Music's CEO of international business,Edgar Berger, said in an interview in February 2012 that the Internet is a blessing for the music industry. Nevertheless, there are still problems that have to be overcome, such as restrictive copyright enforcement by music rights collecting agencies. Berger claims that YouTube revenue running into the millions is being lost because GEMA's policies prevent artist's videos from being shown online in the country.[11]
Conversely, it can be questioned how much of this lost revenue would have actually benefitted GEMA members, given that licensing agreements in other territories are subject to a confidentiality agreement that prevents even the membership of the collecting societies from knowing the royalty rates.[12]
An academic study by Tobias Kretschmer and Christian Peukert published in 2020 shows that the blocking of music videos decreased recorded music sales in Germany by about 5%–10%. The effect is much stronger (more negative) for newcomer artists, and less strong (less negative) for mainstream artists. Also, German artists suffered relatively less from the YouTube blackout and gained market share as a result.[13]
GEMA's stance has elicited considerable criticism from Google and foreign record companies.
Edgar Berger, CEO ofSony Music Entertainmentin Munich, toldBillboard: "I suspect that some members of GEMA's supervisory board have not yet arrived in the digital era. We want to see streaming services likeVEVOand Spotify in the German market. Spotify must not be blocked by GEMA any longer. Artists and music companies are losing sales in the millions".[14]
Google spokesman Kay Oberbeck told Billboard in Hamburg that YouTube had entered into 20 agreements with collection societies from 33 countries. "We therefore regret all the more that GEMA has decided to commence legal proceedings against us despite the promising talks which we have held, thus removing the basis for conducting any further negotiations in a spirit of mutual trust. A solution can only be found at the negotiating table without any legal proceedings. We are prepared to resume negotiations at any time."[14]
Frank Briegmann, President ofUniversal MusicGermany, has described Germany as "a developing country in the digital music market. GEMA apparently has not yet understood the new developments in the international music market".[15]
A common way of viewing blocked videos in Germany is to use browser add-ons that fake a foreign IP address, which are available for all common browsers and Spotify; in some cases these add-ons even come prebundled with the browser setup. Another way is to go through a foreign proxy or VPN server. Although intellectual property rights in music are licensed by territory, employing such methods to circumvent local restrictions is legal.[citation needed]
On 31 October 2016, GEMA released a press statement stating that YouTube will pay GEMA for video views of GEMA-protected artists. No further details regarding payment were disclosed.[16]
|
https://en.wikipedia.org/wiki/Blocking_of_YouTube_videos_in_Germany
|
Inmathematical modeling,overfittingis "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".[1]Anoverfitted modelis amathematical modelthat contains moreparametersthan can be justified by the data.[2]In the special case where the model consists of a polynomial function, these parameters represent thedegree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., thenoise) as if that variation represented underlying model structure.[3]: 45
Underfittingoccurs when a mathematical model cannot adequately capture the underlying structure of the data. Anunder-fitted modelis a model where some parameters or terms that would appear in a correctly specified model are missing.[2]Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance.
The possibility of over-fitting exists because the criterion used forselecting the modelis not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set oftraining data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend.
As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions.
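A hedged illustration of this memorization effect (a toy sketch; the data, noise level, and polynomial degrees are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)      # noise-free "future" observations

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))

# The degree-9 fit has as many parameters as observations: its training error is
# essentially zero, but it typically does worse than the degree-3 fit on the test
# points, because it has memorized the noise in the training data.
```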
Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit.[4]Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known asshrinkage).[2]In particular, the value of thecoefficient of determinationwillshrinkrelative to the original data.
To lessen the chance or amount of overfitting, several techniques are available (e.g.,model comparison,cross-validation,regularization,early stopping,pruning,Bayesian priors, ordropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
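A minimal sketch of two of these defenses on similar toy data, combining a held-out validation split with ridge-style regularization (the penalty values and helper names are illustrative assumptions, not a prescribed recipe):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Random train/validation split: the validation set stands in for unseen data.
idx = rng.permutation(x.size)
train, val = idx[:20], idx[20:]

def design(xs, degree=9):
    return np.vander(xs, degree + 1)     # polynomial feature matrix

best = None
for lam in (0.0, 1e-6, 1e-3, 1e-1):
    A = design(x[train])
    # Ridge solution: w = (A^T A + lam I)^{-1} A^T y, penalizing large coefficients.
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y[train])
    val_err = np.mean((design(x[val]) @ w - y[val]) ** 2)
    if best is None or val_err < best[0]:
        best = (val_err, lam)

print("chosen penalty:", best[1])
```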
In statistics, aninferenceis drawn from astatistical model, which has beenselectedvia some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony".[3]The authors also state the following.[3]: 32–33
Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting.
Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The bookModel Selection and Model Averaging(2008) puts it this way.[5]
Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is themonkey who typed Hamletactually a good writer?
Inregression analysis, overfitting occurs frequently.[6]As an extreme example, if there arepvariables in alinear regressionwithpdata points, the fitted line can go exactly through every point.[7]Forlogistic regressionor Coxproportional hazards models, there are a variety of rules of thumb (e.g. 5–9,[8]10[9]and 10–15[10]— the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. Thebias–variance tradeoffis often used to overcome overfit models.
With a large set ofexplanatory variablesthat actually have no relation to thedependent variablebeing predicted, some variables will in general be falsely found to bestatistically significantand the researcher may thus retain them in the model, thereby overfitting the model. This is known asFreedman's paradox.
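A simplified simulation in the spirit of Freedman's paradox (here each irrelevant predictor is screened with a separate univariate regression; the sample sizes are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 100, 50
X = rng.normal(size=(n, p))
y = rng.normal(size=n)          # no relation to X whatsoever

significant = 0
for j in range(p):
    slope, intercept, r, pval, stderr = stats.linregress(X[:, j], y)
    if pval < 0.05:
        significant += 1

print(significant, "of", p, "irrelevant predictors look significant")
# On average about 5% of them pass the threshold purely by chance; retaining
# those variables would overfit the model.
```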
Usually, a learningalgorithmis trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training.
Overfitting is the use of models or procedures that violateOccam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data forycan be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function isa prioriless probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.[11]
When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) withmparameters to a regression model withnparameters.[11]
Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have nocausal relationto thetarget function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse.
As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again.
Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust."
The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include:
The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods like the minimum spanning tree or the life-time of correlation that exploit the dependence between correlation coefficients and the time-series window width. Whenever the window width is big enough, the correlation coefficients are stable and no longer depend on the window width. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between the investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized.
Dropout regularisation (randomly zeroing a fraction of a layer's inputs during training) can also improve robustness and therefore reduce over-fitting.
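A minimal sketch of (inverted) dropout applied to a layer's activations during training; the drop probability and shapes are arbitrary for the example:

```python
import numpy as np

def dropout(activations, p, rng):
    # Zero each unit with probability p and rescale the survivors so the
    # expected activation is unchanged; used only at training time.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))            # a batch of hidden activations
h_train = dropout(h, p=0.5, rng=rng)   # at test time, h is used unchanged
```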
Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is that a high bias and low variance are detected in the current model or algorithm (the inverse of overfitting: low bias and high variance). This can be seen from the bias–variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the model will inaccurately represent the data points and thus be insufficiently able to predict future data results (see Generalization error). As shown in Figure 5, the linear line could not represent all the given data points because it does not resemble the curvature of the points. We would expect to see a parabola-shaped line as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would get false predictive results, contrary to the results obtained from analyzing Figure 6.
Burnham & Anderson state the following.[3]: 32
... an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings.
There are multiple ways to deal with underfitting:
Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest indeep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such aslinear regression. In particular, it has been shown thatoverparameterizationis essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.[16]
|
https://en.wikipedia.org/wiki/Overfitting
|
Instatisticsand in particularstatistical theory,unbiased estimation of a standard deviationis the calculation from astatistical sampleof an estimated value of thestandard deviation(a measure ofstatistical dispersion) of apopulationof values, in such a way that theexpected valueof the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use ofsignificance testsandconfidence intervals, or by usingBayesian analysis.
However, for statistical theory, it provides an exemplar problem in the context ofestimation theorywhich is both simple to state and for which results cannot be obtained in closed form. It also provides an example where imposing the requirement forunbiased estimationmight be seen as just adding inconvenience, with no real benefit.
In statistics, the standard deviation of a population of numbers is often estimated from a random sample drawn from the population. This is the sample standard deviation, which is defined by
$s = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \overline{x}\right)^{2}},$
where $\{x_1, x_2, \ldots, x_n\}$ is the sample (formally, realizations from a random variable X) and $\overline{x}$ is the sample mean.
One way of seeing that this is abiased estimatorof the standard deviation of the population is to start from the result thats2is anunbiased estimatorfor thevarianceσ2of the underlying population if that variance exists and the sample values are drawn independently with replacement. The square root is a nonlinear function, and only linear functions commute with taking the expectation. Since the square root is a strictly concave function, it follows fromJensen's inequalitythat the square root of the sample variance is an underestimate.
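A quick Monte Carlo sketch of this bias for normal data (the sample size and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 1.0, 5, 200_000
samples = rng.normal(scale=sigma, size=(reps, n))
s = samples.std(axis=1, ddof=1)        # sample standard deviation with Bessel's n-1

print(np.mean(s**2))   # close to 1.0: s^2 is unbiased for the variance
print(np.mean(s))      # about 0.94:  s is biased low for the standard deviation,
                       # as Jensen's inequality predicts for the concave square root
```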
The use ofn− 1 instead ofnin the formula for the sample variance is known asBessel's correction, which corrects the bias in the estimation of the populationvariance,and some, but not all of the bias in the estimation of the populationstandard deviation.
It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions, as the bias depends on the particular distribution. Much of the following relates to estimation assuming anormal distribution.
When the random variable is normally distributed, a minor correction exists to eliminate the bias. To derive the correction, note that for normally distributed X, Cochran's theorem implies that $(n-1)s^2/\sigma^2$ has a chi square distribution with $n-1$ degrees of freedom, and thus its square root, $\sqrt{n-1}\,s/\sigma$, has a chi distribution with $n-1$ degrees of freedom. Consequently, calculating the expectation of this last expression and rearranging constants,
where the correction factor $c_4(n)$ is the scale mean of the chi distribution with $n-1$ degrees of freedom, $\mu_1/\sqrt{n-1}$. This depends on the sample size n, and is given as follows:[1]
where Γ(·) is the gamma function. An unbiased estimator of σ can be obtained by dividing $s$ by $c_4(n)$. As $n$ grows large, $c_4(n)$ approaches 1, and even for smaller values the correction is minor. The figure shows a plot of $c_4(n)$ versus sample size. The table below gives numerical values of $c_4(n)$ and algebraic expressions for some values of $n$; more complete tables may be found in most textbooks[2][3] on statistical quality control.
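A small sketch computing $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$ via log-gamma for numerical stability (the printed values are approximate):

```python
import numpy as np
from scipy.special import gammaln

def c4(n):
    """Correction factor c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

for n in (2, 5, 10, 100):
    print(n, round(c4(n), 4))   # roughly 0.798, 0.940, 0.973, 0.997

# Dividing the sample standard deviation s by c4(n) gives an unbiased estimate
# of sigma for normally and independently distributed data.
```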
It is important to keep in mind that this correction only produces an unbiased estimator for normally and independently distributed X. When this condition is satisfied, another result about s involving $c_4(n)$ is that the standard error of s is[4][5] $\sigma\sqrt{1 - c_4^2}$, while the standard error of the unbiased estimator is $\sigma\sqrt{c_4^{-2} - 1}$.
If calculation of the functionc4(n) appears too difficult, there is a simple rule of thumb[6]to take the estimator
The formula differs from the familiar expression for $s^2$ only by having $n - 1.5$ instead of $n - 1$ in the denominator. This expression is only approximate; in fact,
The bias is relatively small: say, for $n = 3$ it is equal to 2.3%, and for $n = 9$ the bias is already 0.1%.
In cases where statistically independent data are modelled by a parametric family of distributions other than the normal distribution, the population standard deviation will, if it exists, be a function of the parameters of the model. One general approach to estimation would be maximum likelihood. Alternatively, it may be possible to use the Rao–Blackwell theorem as a route to finding a good estimate of the standard deviation. In neither case would the estimates obtained usually be unbiased. Notionally, theoretical adjustments might be obtainable to lead to unbiased estimates but, unlike those for the normal distribution, these would typically depend on the estimated parameters.
If the requirement is simply to reduce the bias of an estimated standard deviation, rather than to eliminate it entirely, then two practical approaches are available, both within the context of resampling. These are jackknifing and bootstrapping. Both can be applied either to parametrically based estimates of the standard deviation or to the sample standard deviation.
For non-normal distributions an approximate (up to O(n⁻¹) terms) formula for the unbiased estimator of the standard deviation is

\hat{\sigma} = \sqrt{\frac{1}{n-1.5-\tfrac{1}{4}\gamma_{2}}\sum_{i=1}^{n}(x_{i}-{\overline {x}})^{2}},

where γ2 denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
The material above, to stress the point again, applies only to independent data. However, real-world data often does not meet this requirement; it isautocorrelated(also known as serial correlation). As one example, the successive readings of a measurement instrument that incorporates some form of “smoothing” (more correctly, low-pass filtering) process will be autocorrelated, since any particular value is calculated from some combination of the earlier and later readings.
Estimates of the variance, and standard deviation, of autocorrelated data will be biased. The expected value of the sample variance is[7]

\operatorname{E}[s^{2}] = \sigma^{2}\left[1-\frac{2}{n-1}\sum_{k=1}^{n-1}\left(1-\frac{k}{n}\right)\rho_{k}\right],
where n is the sample size (number of measurements) and ρk is the autocorrelation function (ACF) of the data. (Note that the expression in the brackets is simply one minus the average expected autocorrelation for the readings.) If the ACF consists of positive values then the estimate of the variance (and its square root, the standard deviation) will be biased low. That is, the actual variability of the data will be greater than that indicated by an uncorrected variance or standard deviation calculation. It is essential to recognize that, if this expression is to be used to correct for the bias, by dividing the estimate s² by the quantity in brackets above, then the ACF must be known analytically, not via estimation from the data. This is because the estimated ACF will itself be biased.[8]
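A minimal Python sketch of this correction, assuming the expression for E[s²] given above and an analytically known ACF (the AR(1)-style ACF used below is purely illustrative):

```python
def variance_bias_factor(n, acf):
    """Bracketed factor in E[s^2]: 1 - (2/(n-1)) * sum_{k=1..n-1} (1 - k/n) * rho_k.

    `acf` must be the analytically known autocorrelation function rho_k,
    not an estimate computed from the same data."""
    return 1.0 - (2.0 / (n - 1)) * sum((1.0 - k / n) * acf(k) for k in range(1, n))

phi = 0.8
factor = variance_bias_factor(50, lambda k: phi ** k)  # positive ACF -> factor < 1
print(factor)
# corrected_variance = s2 / factor, where s2 is the observed sample variance
```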
To illustrate the magnitude of the bias in the standard deviation, consider a dataset that consists of sequential readings from an instrument that uses a specific digital filter whose ACF is known to be given by
whereαis the parameter of the filter, and it takes values from zero to unity. Thus the ACF is positive and geometrically decreasing.
The figure shows the ratio of the estimated standard deviation to its known value (which can be calculated analytically for this digital filter), for several settings of α as a function of sample size n. Changing α alters the variance reduction ratio of the filter, which is known to be
so that smaller values of α result in more variance reduction, or "smoothing." The bias is indicated by values on the vertical axis different from unity; that is, if there were no bias, the ratio of the estimated to known standard deviation would be unity. Clearly, for modest sample sizes there can be significant bias (a factor of two, or more).
It is often of interest to estimate the variance or standard deviation of an estimated mean rather than the variance of a population. When the data are autocorrelated, this has a direct effect on the theoretical variance of the sample mean, which is[9]

\operatorname{Var}[{\overline {x}}] = \frac{\sigma^{2}}{n}\left[1+2\sum_{k=1}^{n-1}\left(1-\frac{k}{n}\right)\rho_{k}\right].
The variance of the sample mean can then be estimated by substituting an estimate of σ². One such estimate can be obtained from the equation for E[s²] given above. First define the following constants, assuming, again, a known ACF:

\gamma_{1} = 1-\frac{2}{n-1}\sum_{k=1}^{n-1}\left(1-\frac{k}{n}\right)\rho_{k}

\gamma_{2} = 1+2\sum_{k=1}^{n-1}\left(1-\frac{k}{n}\right)\rho_{k}

so that

\operatorname{E}[s^{2}] = \sigma^{2}\gamma_{1} \quad\Rightarrow\quad \operatorname{E}\left[\frac{s^{2}}{\gamma_{1}}\right] = \sigma^{2}.
This says that the expected value of the quantity obtained by dividing the observed sample variance by the correction factor γ1 gives an unbiased estimate of the variance. Similarly, rewriting the expression above for the variance of the mean,

\operatorname{Var}[{\overline {x}}] = \frac{\sigma^{2}}{n}\gamma_{2},

and substituting the estimate for σ² gives[10]

\widehat{\operatorname{Var}}[{\overline {x}}] = \frac{s^{2}}{\gamma_{1}}\,\frac{\gamma_{2}}{n},
which is an unbiased estimator of the variance of the mean in terms of the observed sample variance and known quantities. If the autocorrelations ρk are identically zero, this expression reduces to the well-known result for the variance of the mean for independent data. The effect of the expectation operator in these expressions is that the equality holds in the mean (i.e., on average).
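A Python sketch of this estimate, assuming the γ1 and γ2 expressions defined above and, again, an analytically known ACF:

```python
def gamma1(n, acf):
    """gamma_1 = 1 - (2/(n-1)) * sum_{k=1..n-1} (1 - k/n) * rho_k."""
    return 1.0 - (2.0 / (n - 1)) * sum((1.0 - k / n) * acf(k) for k in range(1, n))

def gamma2(n, acf):
    """gamma_2 = 1 + 2 * sum_{k=1..n-1} (1 - k/n) * rho_k."""
    return 1.0 + 2.0 * sum((1.0 - k / n) * acf(k) for k in range(1, n))

def estimated_variance_of_mean(s2, n, acf):
    """(s^2 / gamma_1) * (gamma_2 / n): estimate of Var(sample mean)."""
    return s2 / gamma1(n, acf) * gamma2(n, acf) / n

# With no autocorrelation this collapses to the familiar s^2 / n:
print(estimated_variance_of_mean(s2=4.0, n=25, acf=lambda k: 0.0))  # 0.16
```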
Having the expressions above involving the variance of the population, and of an estimate of the mean of that population, it would seem logical to simply take the square root of these expressions to obtain unbiased estimates of the respective standard deviations. However it is the case that, since expectations are integrals,

\operatorname{E}[s] \neq \sqrt{\operatorname{E}[s^{2}]},

so the square root of an unbiased estimator of the variance is not an unbiased estimator of the standard deviation.
Instead, assume a function θ exists such that an unbiased estimator of the standard deviation can be written

\hat{\sigma} = \frac{s}{\theta\sqrt{\gamma_{1}}},

and θ depends on the sample size n and the ACF. In the case of NID (normally and independently distributed) data, the radicand is unity and θ is just the c4 function given in the first section above. As with c4, θ approaches unity as the sample size increases (as does γ1).
It can be demonstrated via simulation modeling that ignoring θ (that is, taking it to be unity) and using

\hat{\sigma} \approx \frac{s}{\sqrt{\gamma_{1}}}

removes all but a few percent of the bias caused by autocorrelation, making this a reduced-bias estimator, rather than an unbiased estimator. In practical measurement situations, this reduction in bias can be significant, and useful, even if some relatively small bias remains. The figure above, showing an example of the bias in the standard deviation vs. sample size, is based on this approximation; the actual bias would be somewhat larger than indicated in those graphs since the transformation bias θ is not included there.
The unbiased variance of the mean in terms of the population variance and the ACF is given by

\operatorname{Var}[{\overline {x}}] = \frac{\sigma^{2}}{n}\gamma_{2},

and since there are no expected values here, in this case the square root can be taken, so that

\sigma_{\overline {x}} = \frac{\sigma}{\sqrt{n}}\sqrt{\gamma_{2}}.
Using the unbiased estimate expression above forσ, anestimateof the standard deviation of the mean will then be
If the data are NID, so that the ACF vanishes, this reduces to
In the presence of a nonzero ACF, ignoring the functionθas before leads to thereduced-bias estimator
which again can be demonstrated to remove a useful majority of the bias.
This article incorporates public domain material from the National Institute of Standards and Technology.
|
https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
|
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player combinatorial games (Tic-tac-toe, Chess, Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.[1]
During the Dartmouth Workshop, John McCarthy met Alex Bernstein of IBM, who was writing a chess program. McCarthy invented alpha–beta search and recommended it to him, but Bernstein was "unconvinced".[2]
Allen Newell and Herbert A. Simon, who used what John McCarthy calls an "approximation"[3] in 1958, wrote that alpha–beta "appears to have been reinvented a number of times".[4] Arthur Samuel had an early version for a checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel Edwards also invented alpha–beta independently in the United States.[5] McCarthy proposed similar ideas during the Dartmouth workshop in 1956 and suggested it to a group of his students including Alan Kotok at MIT in 1961.[6] Alexander Brudno independently conceived the alpha–beta algorithm, publishing his results in 1963.[7] Donald Knuth and Ronald W. Moore refined the algorithm in 1975.[8][9] Judea Pearl proved its optimality in terms of the expected running time for trees with randomly assigned leaf values in two papers.[10][11] The optimality of the randomized version of alpha–beta was shown by Michael Saks and Avi Wigderson in 1986.[12]
A game tree can represent many two-player zero-sum games, such as chess, checkers, and reversi. Each node in the tree represents a possible situation in the game. Each terminal node (outcome) of a branch is assigned a numeric score that determines the value of the outcome to the player with the next move.[13]
The algorithm maintains two values, alpha and beta, which respectively represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of. Initially, alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. Whenever the maximum score that the minimizing player (i.e. the "beta" player) is assured of becomes less than the minimum score that the maximizing player (i.e., the "alpha" player) is assured of (i.e. beta < alpha), the maximizing player need not consider further descendants of this node, as they will never be reached in the actual play.
To illustrate this with a real-life example, suppose somebody is playing chess, and it is their turn. Move "A" will improve the player's position. The player continues to look for moves to make sure a better one hasn't been missed. Move "B" is also a good move, but the player then realizes that it will allow the opponent to force checkmate in two moves. Thus, other outcomes from playing move B no longer need to be considered since the opponent can force a win. The maximum score that the opponent could force after move "B" is negative infinity: a loss for the player. This is less than the minimum position that was previously found; move "A" does not result in a forced loss in two moves.
The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be eliminated.[13]This way, the search time can be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to thebranch and boundclass of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near optimal order (best choice for side on move ordered first at each node).
With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b×1×b×1×...×b) for odd depth and O(b×1×b×1×...×1) for even depth, or O(b^(d/2)) = O(√(b^d)). In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.[14] The explanation of b×1×b×1×... is that all the first player's moves must be studied to find the best one, but for each, only the second player's best move is needed to refute all but the first (and best) first player move – alpha–beta ensures no other second player moves need be considered. When nodes are considered in a random order (i.e., the algorithm randomizes), asymptotically,
the expected number of nodes evaluated in uniform trees with binary leaf-values is Θ(((b − 1 + √(b² + 14b + 1))/4)^d).[12] For the same trees, when the values are assigned to the leaf values independently of each other and say zero and one are both equally probable, the expected number of nodes evaluated is Θ((b/2)^d), which is much smaller than the work done by the randomized algorithm, mentioned above, and is again optimal for such random trees.[10] When the leaf values are chosen independently of each other but from the [0, 1] interval uniformly at random, the expected number of nodes evaluated increases to Θ(b^(d/log d)) in the d → ∞ limit,[11] which is again optimal for this kind of random tree. Note that the actual work for "small" values of d is better approximated using 0.925d^0.747.[11][10]
A chess program that searches four plies with an average of 36 branches per node evaluates more than one million terminal nodes. An optimal alpha-beta prune would eliminate all but about 2,000 terminal nodes, a reduction of 99.8%.[13]
Normally during alpha–beta, thesubtreesare temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as throughiterative deepening.
Additionally, this algorithm can be trivially modified to return an entireprincipal variationin addition to the score. Some more aggressive algorithms such asMTD(f)do not easily permit such a modification.
The pseudo-code for depth limited minimax with alpha–beta pruning is as follows:[15]
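(The cited pseudo-code itself is not reproduced in this copy; what follows is a minimal Python sketch of the fail-soft variant, with a hypothetical node interface (is_terminal(), heuristic_value(), children()) standing in for whatever game representation is used.)

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Fail-soft alpha-beta: the returned value may fall outside [alpha, beta]."""
    if depth == 0 or node.is_terminal():
        return node.heuristic_value()
    if maximizing:
        value = float("-inf")
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing player would avoid this node
        return value
    else:
        value = float("inf")
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizing player has a better option elsewhere
        return value

# Initial call, e.g.: alphabeta(root, depth, float("-inf"), float("inf"), True)
```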
Implementations of alpha–beta pruning can often be delineated by whether they are "fail-soft" or "fail-hard". The sketch above illustrates the fail-soft variation. With fail-soft alpha–beta, the alphabeta function may return values (v) that exceed the α and β bounds set by its function call arguments (v < α or v > β). In comparison, fail-hard alpha–beta limits its function return value to the inclusive range of α and β.
Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search earlier parts of the tree that are likely to force alpha–beta cutoffs. For example, in chess, moves that capture pieces may be examined before moves that do not, and moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same tree level in the tree search is always examined first. This idea can also be generalized into a set of refutation tables.
Alpha–beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as an aspiration window. In the extreme case, the search is performed with alpha and beta equal; a technique known as zero-window search, null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed high (high edge of window was too low) or low (lower edge of window was too high). This gives information about what window values might be useful in a re-search of the position.
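As an illustration only, an aspiration re-search could be wrapped around the alphabeta sketch given earlier as follows (the window size is an arbitrary placeholder, and a real engine would typically widen the window gradually):

```python
def aspiration_search(root, depth, guess, window=50):
    """Search a narrow (alpha, beta) window around a guessed score; if the
    result falls outside the window, the search failed high or low and is
    repeated with a wider bound."""
    alpha, beta = guess - window, guess + window
    value = alphabeta(root, depth, alpha, beta, True)
    if value >= beta:        # failed high: the true score is at least beta
        value = alphabeta(root, depth, value, float("inf"), True)
    elif value <= alpha:     # failed low: the true score is at most alpha
        value = alphabeta(root, depth, float("-inf"), value, True)
    return value
```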
Over time, other improvements have been suggested, and indeed the Falphabeta (fail-soft alpha–beta) idea of John Fishburn is nearly universal and is already incorporated above in a slightly modified form. Fishburn also suggested a combination of the killer heuristic and zero-window search under the name Lalphabeta ("last move with minimal window alpha–beta search").
Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha–beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints, as well as shallow alpha and beta estimates, that both can help produce cutoffs for higher depth searches much earlier than would otherwise be possible.
Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency.[16]
|
https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
|
Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble.[1][2] Based on the theory that
sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.[3]
Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses calledaction potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns, with the presence of external sensory stimuli, such aslight,sound,taste,smellandtouch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information throughgraded potentials. These differ from action potentials because information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons.[4]
Although action potentials can vary somewhat in duration,amplitudeand shape, they are typically treated as identical stereotyped events in neural coding studies. If thebrief durationof an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series ofall-or-nonepoint events in time.[5]The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly.[6]The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing,statistical methodsand methods ofprobability theoryand stochasticpoint processeshave been widely applied.
With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation.[7][8][9]Neuroscientists have initiated several large-scale brain decoding projects.[10][11]
The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli.Neural decodingrefers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.[citation needed]
A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. They may be locked to an external stimulus such as in the visual[12]andauditory systemor be generated intrinsically by the neural circuitry.[13]
Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.[14]
The rate coding model ofneuronalfiring communication states that as the intensity of a stimulus increases, thefrequencyor rate ofaction potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding.
Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity.[15]Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.[6]
During rate coding, precisely calculating firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as anaverage over time(rate as a single-neuron spike count) or anaverage over several repetitions(rate of PSTH) of experiment.
In rate coding, learning is based on activity-dependent synaptic weight modifications.
Rate coding was originally shown byEdgar AdrianandYngve Zottermanin 1926.[16]In this simple experiment different weights were hung from amuscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication.
In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory orcorticalneurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.[6]
The spike-count rate, also referred to as temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial.[14] The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models'[14]).
The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of theorganism— and this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changing on a fast time scale. For example, even when viewing a static image, humans performsaccades, rapid changes of the direction of gaze. The image projected onto the retinalphotoreceptorschanges therefore every few hundred milliseconds (Chapter 1.5in[14])
Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models ofneural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate).
There is a growing body of evidence that inPurkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods.[17][18]There is also evidence from retinal cells, that information is encoded not only in the firing rate but also in spike timing.[19]More generally, whenever a rapid response of an organism is required a firing rate defined as a spike-count over a few hundred milliseconds is simply too slow.[14]
The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval.[14] It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes nK(t;t+Δt) summed over all repetitions of the experiment divided by the number K of repetitions is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in[14]).
For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.
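A short Python/NumPy sketch of this PSTH-style estimate (the spike times, trial count, and bin width below are made-up values used only for illustration):

```python
import numpy as np

def time_dependent_rate(spike_trains, t_start, t_stop, dt):
    """PSTH-style firing rate r(t): spike counts per bin, summed over K trials,
    divided by K * dt (times in seconds, so the result is spikes per second)."""
    edges = np.arange(t_start, t_stop + dt, dt)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_trains:                    # one array of spike times per trial
        counts += np.histogram(spikes, bins=edges)[0]
    return counts / (len(spike_trains) * dt)

# Two illustrative trials, spike times in seconds:
trials = [np.array([0.012, 0.034, 0.051]), np.array([0.011, 0.036])]
print(time_dependent_rate(trials, 0.0, 0.06, 0.01))
```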
As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it can not be the coding scheme used by neurons in the brain. Neurons can not wait for the stimuli to repeatedly present in an exactly same manner before generating a response.[14]
Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.
When precise spike timing or high-frequency firing-ratefluctuationsare found to carry information, the neural code is often identified as a temporal code.[14][20]A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding.[3][21][19]Such codes, that communicate via the time between spikes are also referred to as interpulse interval codes, and have been supported by recent studies.[22]
Neurons exhibit high-frequency fluctuations of firing-rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options.[23]Temporal coding supplies an alternate explanation for the “noise," suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences, at 6 spikes/10 ms.[24]
Until recently, scientists had put the most emphasis on rate encoding as an explanation forpost-synaptic potentialpatterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow.[19]In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.[25]
Temporal codes (also called spike codes[14]) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes.[26] As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing).[3][6] One way in which temporal codes are decoded, in the presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.[27]
The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes[28](and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.
In temporal coding, learning can be explained by activity-dependent synaptic delay modifications.[29]The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., can be a special case ofspike-timing-dependent plasticity.[30]
The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal.
For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important forsound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rate coding for different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with numerous organisms which are able to discriminate between stimuli in the time frame of milliseconds, suggesting that a rate code is not the only model at work.[24]
To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike.[31]This type of temporal coding has been shown also in the auditory and somato-sensory system. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations.[32]In theprimary visual cortexof macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.[33]
The mammaliangustatory systemis useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism.[34]Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation.[35]
Research on mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales only a few milliseconds in length were found across small populations of neurons which correlated with certain information processing behaviors. However, little information could be determined from the patterns; one possible theory is they represented the higher-order processing taking place in the brain.[25]
As with the visual system, inmitral/tufted cellsin theolfactory bulbof mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier.[36]Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.[24]
The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made inoptogeneticsallow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channelchannelrhodopsinto open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left).[37]Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits.[38]
Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders.[38]If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates.[24]Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such asdepression,schizophrenia, andParkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously.[37]
Phase-of-firing code is a neural coding scheme that combines thespikecount code with a time reference based onoscillations. This type of code takes into account a time label for each spike according to a time reference based on phase of local ongoing oscillations at low[39]or high frequencies.[40]
It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count.[39][41]Thelocal field potentialsignals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. Phase-of-firing code is loosely based on thephase precessionphenomena observed in place cells of thehippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking between a group of sensory neurons, resulting in firing sequence.[42]
Phase code has been shown in visual cortex to involve alsohigh-frequency oscillations.[42]Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.[42]
Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretic analysis.[43]Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.
For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion.[44] In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, to be immune from the fluctuation existing in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However it fires the fastest for one direction and more slowly depending on how close the target was to the neuron's "preferred" direction.[45][46] If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion.[citation needed] This particular population code is referred to as population vector coding.
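A small Python/NumPy sketch of that vector-sum decoding (the preferred directions and firing rates are invented for illustration):

```python
import numpy as np

def population_vector(preferred_dirs_deg, firing_rates):
    """Decode a direction as the rate-weighted vector sum of each neuron's
    preferred direction (population vector coding)."""
    angles = np.deg2rad(np.asarray(preferred_dirs_deg, dtype=float))
    rates = np.asarray(firing_rates, dtype=float)
    x = np.sum(rates * np.cos(angles))
    y = np.sum(rates * np.sin(angles))
    return np.rad2deg(np.arctan2(y, x)) % 360

# Four illustrative neurons tuned to 0, 90, 180 and 270 degrees:
print(population_vector([0, 90, 180, 270], [20, 35, 5, 10]))  # roughly 59 degrees
```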
Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for neural representation of auditory acoustic stimuli. This exploits both the place or tuning within the auditory nerve, as well as the phase-locking within each nerve fiber auditory nerve. The first ALSR representation was for steady-state vowels;[47]ALSR representations of pitch and formant frequencies in complex, non-steady state stimuli were later demonstrated for voiced-pitch,[48]and formant representations in consonant-vowel syllables.[49]The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding.
Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronalvariabilityand the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously.[50]Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.
Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value.[citation needed]It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method ofmaximum likelihoodbased on a multivariate distribution of the neuronal responses. These models can assume independence, second order correlations,[51]or even more detailed dependencies such as higher ordermaximum entropy models,[52]orcopulas.[53]
The correlation coding model ofneuronalfiring claims that correlations betweenaction potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the totalmutual informationpresent in the two spike trains about a stimulus feature.[54]However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign.[55]Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.[56]
The independent-spike coding model ofneuronalfiring claims that each individualaction potential, or "spike", is independent of each other spike within thespike train.[20][57]
A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.
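A hedged sketch of such a decoder, assuming Gaussian tuning curves and, for simplicity, additive Gaussian response noise, so that maximizing the likelihood reduces to minimizing squared error over a grid of candidate stimulus values (all names and numbers below are illustrative):

```python
import numpy as np

def ml_decode(rates, means, sigma=1.0, grid=None):
    """Maximum-likelihood stimulus estimate for Gaussian tuning curves
    f_i(s) = exp(-(s - mean_i)^2 / (2 sigma^2)) with independent Gaussian noise."""
    means = np.asarray(means, dtype=float)
    rates = np.asarray(rates, dtype=float)
    if grid is None:
        grid = np.linspace(means.min(), means.max(), 1001)
    tuning = np.exp(-(grid[:, None] - means[None, :]) ** 2 / (2 * sigma ** 2))
    log_lik = -np.sum((rates[None, :] - tuning) ** 2, axis=1)  # up to constants
    return grid[np.argmax(log_lik)]

means = np.arange(0.0, 10.0, 0.5)           # preferred stimuli of 20 model neurons
true_s = 3.2
rng = np.random.default_rng(0)
rates = np.exp(-(true_s - means) ** 2 / 2) + 0.05 * rng.standard_normal(means.size)
print(ml_decode(rates, means))               # close to 3.2
```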
This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as ingrid cellsthat represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.[58]
Dimensionality reduction and topological data analysis have revealed that the population code is constrained to low-dimensional manifolds,[59] sometimes also referred to as attractors. The position along the neural manifold correlates to certain behavioral conditions, like head direction neurons in the anterodorsal thalamic nucleus forming a ring structure,[60] grid cells encoding spatial position in entorhinal cortex along the surface of a torus,[61] or motor cortex neurons encoding hand movements[62] and preparatory activity.[63] The low-dimensional manifolds are known to change in a state-dependent manner, such as eye closure in the visual cortex,[64] or breathing behavior in the ventral respiratory column.[65]
The sparse code is when each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.
As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produceswavelet-like oriented filters that resemble thereceptive fieldsof simple cells in the visual cortex.[66]The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.[67]
Given a potentially large set of input patterns, sparse coding algorithms (e.g.sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
Most models of sparse coding are based on the linear generative model.[68]In this model, the symbols are combined in alinear fashionto approximate the input.
More formally, given a k-dimensional set of real-numbered input vectors ξ ∈ R^k, the goal of sparse coding is to determine n k-dimensional basis vectors b1, …, bn ∈ R^k, corresponding to neuronal receptive fields, along with a sparse n-dimensional vector of weights or coefficients s ∈ R^n for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: ξ ≈ Σ_{j=1}^{n} s_j b_j.[69]
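A tiny NumPy illustration of this linear generative model (the basis vectors and coefficients are random placeholders rather than learned quantities):

```python
import numpy as np

def reconstruct(basis, coeffs):
    """Linear generative model: approximate the input as sum_j s_j * b_j."""
    return basis @ coeffs          # basis: (k, n), coeffs: (n,)

k, n = 4, 8                        # input dimension, number of basis vectors
rng = np.random.default_rng(1)
basis = rng.standard_normal((k, n))  # columns b_j: stand-ins for receptive fields
s = np.zeros(n)
s[[2, 5]] = [1.5, -0.7]            # sparse coefficients: only two are active
xi = reconstruct(basis, s)
print(xi)                          # the input vector this sparse code represents
```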
The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness.[68] These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.[68]
Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.[70] The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 x 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.[68]
Other models are based onmatching pursuit, asparse approximationalgorithm which finds the "best matching" projections of multidimensional data, anddictionary learning, a representation learning method which aims to find asparse matrixrepresentation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.[71][72][73]
Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specificassociative memoriesin which only a few neurons out of apopulationrespond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.
Theoretical work onsparse distributed memoryhas suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations.[74]Experimentally, sparse representations of sensory information have been observed in many systems, including vision,[75]audition,[76]touch,[77]and olfaction.[78]However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.
In theDrosophilaolfactory system, sparse odor coding by theKenyon cellsof themushroom bodyis thought to generate a large number of precisely addressable locations for the storage of odor-specific memories.[79]Sparseness is controlled by a negative feedback circuit between Kenyon cells andGABAergicanterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.[80]
|
https://en.wikipedia.org/wiki/Neural_coding
|
The Signal Protocol (formerly known as the TextSecure Protocol) is a non-federated cryptographic protocol that provides end-to-end encryption for voice and instant messaging conversations.[2] The protocol was developed by Open Whisper Systems in 2013[2] and was introduced in the open-source TextSecure app, which later became Signal. Several closed-source applications have implemented the protocol, such as WhatsApp, which is said to encrypt the conversations of "more than a billion people worldwide",[3] and Google, which provides end-to-end encryption by default to all RCS-based conversations between users of its Google Messages app for one-to-one conversations.[4] Facebook Messenger also says it offers the protocol for optional "Secret Conversations", as does Skype for its "Private Conversations".
The protocol combines the Double Ratchet Algorithm, prekeys (i.e., one-time ephemeral public keys that have been uploaded in advance to a central server), and a triple elliptic-curve Diffie–Hellman (3-DH) handshake,[5] and uses Curve25519, AES-256, and HMAC-SHA256 as primitives.[6]
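Purely as an illustration of what a triple Diffie–Hellman over Curve25519 looks like in code, here is a heavily simplified Python sketch using the third-party "cryptography" package. It omits signed-prekey signatures, one-time prekeys, the Double Ratchet, and everything else a real Signal session involves, and it is not the protocol's actual key schedule:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Long-term identity keys and Bob's prekey (uploaded to a server in advance)
alice_identity = X25519PrivateKey.generate()
bob_identity = X25519PrivateKey.generate()
bob_prekey = X25519PrivateKey.generate()

# Alice's ephemeral key for this session
alice_ephemeral = X25519PrivateKey.generate()

# Three DH outputs, concatenated and fed through a KDF to derive a session secret
dh1 = alice_identity.exchange(bob_prekey.public_key())
dh2 = alice_ephemeral.exchange(bob_identity.public_key())
dh3 = alice_ephemeral.exchange(bob_prekey.public_key())

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative-3dh").derive(dh1 + dh2 + dh3)
print(session_key.hex())  # Bob can derive the same secret from his side
```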
The development of the Signal Protocol was started by Trevor Perrin andMoxie Marlinspike(Open Whisper Systems) in 2013. The first version of the protocol, TextSecure v1, was based onOff-the-record messaging(OTR).[7][8]
On 24 February 2014, Open Whisper Systems introduced TextSecure v2,[9] which migrated to the Axolotl Ratchet.[7][10] The design of the Axolotl Ratchet is based on the ephemeral key exchange that was introduced by OTR and combines it with a symmetric-key ratchet modeled after the Silent Circle Instant Message Protocol (SCIMP).[1] It brought about support for asynchronous communication ("offline messages") as its major new feature, as well as better resilience with distorted order of messages and simpler support for conversations with multiple participants.[11] The Axolotl Ratchet was named after the critically endangered aquatic salamander axolotl, which has extraordinary self-healing capabilities. The developers refer to the algorithm as self-healing because it automatically denies an attacker access to the cleartext of later messages after a session key has been compromised.[1]
The third version of the protocol, TextSecure v3, made some changes to the cryptographic primitives and the wire protocol.[7]In October 2014, researchers fromRuhr University Bochumpublished an analysis of TextSecure v3.[6][7]Among other findings, they presented anunknown key-share attackon the protocol, but in general, they found that it was secure.[12]
In March 2016, the developers renamed the protocol as the Signal Protocol. They also renamed the Axolotl Ratchet as the Double Ratchet algorithm to better differentiate between the ratchet and the full protocol[13]because some had used the name Axolotl when referring to the full protocol.[14][13]
As of October 2016[update], the Signal Protocol is based on TextSecure v3, but with additional cryptographic changes.[7]In October 2016, researchers from the UK'sUniversity of Oxford, Australia'sQueensland University of Technology, and Canada'sMcMaster Universitypublished a formal analysis of the protocol, concluding that the protocol was cryptographically sound.[15][16]
Another audit of the protocol was published in 2017.[17]
The protocol provides confidentiality, integrity,authentication, participant consistency, destination validation,forward secrecy, post-compromise security (aka future secrecy), causality preservation, message unlinkability,message repudiation, participation repudiation, and asynchronicity.[18]It does not provide anonymity preservation and requires servers for the relaying of messages and storing of public key material.[18]
The Signal Protocol also supports end-to-end encrypted group chats. The group chat protocol is a combination of a pairwisedouble ratchetandmulticast encryption.[18]In addition to the properties provided by the one-to-one protocol, the group chat protocol provides speaker consistency, out-of-order resilience, dropped message resilience, computational equality, trust equality, subgroup messaging, as well as contractible and expandable membership.[18]
For authentication, users can manually comparepublic key fingerprintsthrough an outside channel.[19]This makes it possible for users to verify each other's identities and avoid aman-in-the-middle attack.[19]An implementation can also choose to employ atrust on first usemechanism in order to notify users if a correspondent's key changes.[19]
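A minimal sketch of that kind of out-of-band verification is shown below. It assumes the fingerprint is a plain SHA-256 digest of the identity public key; this is only an illustration, not Signal's actual safety-number format.

```python
import hashlib
import hmac

def fingerprint(identity_public_key: bytes) -> str:
    """Return a short, human-comparable fingerprint for a public key.

    A plain SHA-256 digest is used for illustration; Signal's real
    "safety number" format is more elaborate.
    """
    digest = hashlib.sha256(identity_public_key).hexdigest()
    # Group into 4-character blocks so two users can read it aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

def keys_match(local_view: bytes, remote_view: bytes) -> bool:
    """Compare fingerprints in constant time to avoid timing side channels."""
    return hmac.compare_digest(fingerprint(local_view), fingerprint(remote_view))

# Hypothetical key bytes, for illustration only.
alice_key_as_seen_by_alice = b"alice-public-key-bytes"
alice_key_as_seen_by_bob = b"alice-public-key-bytes"
print(fingerprint(alice_key_as_seen_by_alice))
print(keys_match(alice_key_as_seen_by_alice, alice_key_as_seen_by_bob))  # True
```

If the two users read out matching fingerprints over an outside channel, an active man-in-the-middle substituting keys would be detected.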
The Signal Protocol does not prevent a company from retaining information about when and with whom users communicate.[20][21]There can therefore be differences in how messaging service providers choose to handle this information. Signal'sprivacy policystates that recipients' identifiers are only kept on the Signal servers as long as necessary in order to transmit each message.[22]In June 2016, Moxie Marlinspike toldThe Intercept: "the closest piece of information to metadata that the Signal server stores is the last time each user connected to the server, and the precision of this information is reduced to the day, rather than the hour, minute, and second."[21]
In October 2018, Signal Messenger announced that they had implemented a "sealed sender" feature into Signal, which reduces the amount of metadata that the Signal servers have access to by concealing the sender's identifier.[23][24]The sender's identity is conveyed to the recipient in each message, but is encrypted with a key that the server does not have.[24]This is done automatically if the sender is in the recipient's contacts or has access to their Signal Profile.[24]Users can also enable an option to receive "sealed sender" messages from non-contacts and people who do not have access to their Signal Profile.[24]A contemporaneous wiretap of the user's device and/or the Signal servers may still reveal that the device's IP address accessed a Signal server to send or receive messages at certain times.[23]
Open Whisper Systems first introduced the protocol in its TextSecure app. It later merged an encrypted voice-calling application named RedPhone into TextSecure and renamed the combined app Signal.
In November 2014, Open Whisper Systems announced a partnership withWhatsAppto provide end-to-end encryption by incorporating the Signal Protocol into each WhatsApp client platform.[25]Open Whisper Systems said that they had already incorporated the protocol into the latest WhatsApp client forAndroidand that support for other clients, group/media messages, and key verification would be coming soon after.[26]On April 5, 2016, WhatsApp and Open Whisper Systems announced that they had finished adding end-to-end encryption to "every form of communication" on WhatsApp, and that users could now verify each other's keys.[27][28]In February 2017, WhatsApp announced a new feature, WhatsApp Status, which uses the Signal Protocol to secure its contents.[29]In October 2016, WhatsApp's parent companyFacebookalso deployed an optional mode called Secret Conversations inFacebook Messengerwhich provides end-to-end encryption using an implementation of the Signal Protocol.[30][31][32][33]
In September 2015,G Data Softwarelaunched a new messaging app called Secure Chat which used the Signal Protocol.[34][35]G Data discontinued the service in May 2018.[36]
In September 2016,Googlelaunched a new messaging app calledAllo, which featured an optional "incognito mode" that used the Signal Protocol for end-to-end encryption.[37][38]In March 2019, Google discontinued Allo in favor of theirGoogle Messagesapp on Android.[39][40]In November 2020, Google announced that they would be using the Signal Protocol to provide end-to-end encryption by default to allRCS-based conversations between users of theirGoogle Messagesapp, starting with one-to-one conversations.[4][41]
In January 2018, Open Whisper Systems andMicrosoftannounced the addition of Signal Protocol support to an optionalSkypemode called Private Conversations.[42][43]
The Signal Protocol has had an influence on other cryptographic protocols. In May 2016,Vibersaid that their encryption protocol is a custom implementation that "uses the same concepts" as the Signal Protocol.[44][45]Forsta's developers have said that their app uses a custom implementation of the Signal Protocol.[46][47][independent source needed]
TheDouble Ratchet Algorithmthat was introduced as part of the Signal Protocol has also been adopted by other protocols.OMEMOis an XMPP Extension Protocol (XEP) that was introduced in theConversationsmessaging app and approved by theXMPP Standards Foundation(XSF) in December 2016 as XEP-0384.[48][2]Matrixis an open communications protocol that includes Olm, a library that provides optional end-to-end encryption on a room-by-room basis via a Double Ratchet Algorithm implementation.[2]The developers ofWirehave said that their app uses a custom implementation of the Double Ratchet Algorithm.[49][50][51]
Messaging Layer Security, anIETFproposal, usesAsynchronous ratcheting treesto efficiently improve upon security guarantees over Signal'sDouble Ratchet.[52]
Signal Messenger maintains areference implementationof the Signal Protocollibrarywritten inRustunder theAGPLv3license onGitHub. There are bindings to Swift, Java, TypeScript, C, and other languages that use the reference Rust implementation.
Signal maintained the following deprecated libraries:
There also exist alternative libraries written by third-parties in other languages, such asTypeScript.[53]
|
https://en.wikipedia.org/wiki/Signal_Protocol
|
Demographic gravitationis a concept of "social physics",[1]introduced byPrinceton UniversityastrophysicistJohn Quincy Stewart[2]in 1947.[3]It is an attempt to use equations and notions ofclassical physics, such asgravity, to seek simplified insights and even laws ofdemographicbehaviour for large numbers of human beings. A basic conception within it is that large numbers of people, in a city for example, actually behave as an attractive force for other people to migrate there. It has been related[4][5]to W. J.Reilly's law of retail gravitation,[6][7]George Kingsley Zipf's Demographic Energy,[8]and to the theory oftrip distribution through gravity models.
Writing in the journalSociometry, Stewart set out an "agenda for social physics." Comparing themicroscopicversusmacroscopicviewpoints in the methodology of formulatingphysical laws, he made an analogy with thesocial sciences:
Fortunately for physics, the macroscopic approach was the commonsense one, and the early investigators –Boyle, Charles,Gay-Lussac– were able to establish the laws of gases. The situation with respect to "social physics" is reversed...
If Robert Boyle had taken the attitude of many social scientists, he would not have been willing to measure the pressure and volume of a sample of air until an encyclopedic history of its molecules had been compiled. Boyle did not even know that air contained argon and helium but he found a very important law.[3]
Stewart proceeded to applyNewtonianformulae of gravitation to that of "the average interrelations of people" on a wide geographic scale, elucidating such notions as "the demographic force of attraction," demographic energy, force, potential and gradient.[3]
The following are some of the key equations (with plain-English paraphrases) from his article in Sociometry; a short numerical sketch of these quantities follows Reilly's balance-point formula below.

Demographic force: F = P_1 P_2 / d^2 (population 1 multiplied by population 2, divided by the distance squared)

Demographic energy: E = P_1 P_2 / d (population 1 multiplied by population 2, divided by the distance; this is also Zipf's determinant)

Demographic potential of population at point 1: V_1 = P_2 / d (population at point 2, divided by the distance)

Demographic potential in general: V = P / d, in persons per mile (population divided by distance)

Demographic gradient: the rate of change of potential with distance, measured in persons per square mile
The potential of population at any point is equivalent to the measure of proximity of people at that point (this also has relevance to Georgist economic rent theory; see Economic rent § Land rent).
For comparison, Reilly's retail gravity equilibrium (or balance/break point) is:

P_1 / (d_1)^2 = P_2 / (d_2)^2 (population 1 divided by the square of its distance to the balance point equals population 2 divided by the square of its distance to the balance point)
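As an illustration of how these quantities behave, here is a minimal sketch in Python. The population figures and the 60-mile separation are invented for the example; the functions simply transcribe the formulas above.

```python
def demographic_force(p1: float, p2: float, distance: float) -> float:
    """Stewart's demographic force: F = P1 * P2 / d^2."""
    return p1 * p2 / distance ** 2

def demographic_energy(p1: float, p2: float, distance: float) -> float:
    """Demographic energy (Zipf's determinant): E = P1 * P2 / d."""
    return p1 * p2 / distance

def demographic_potential(p: float, distance: float) -> float:
    """Potential created at a point by a population P at distance d (persons per mile)."""
    return p / distance

def reilly_breakpoint(p1: float, p2: float, separation: float) -> float:
    """Distance from city 1 to the balance point where P1/d1^2 == P2/d2^2."""
    return separation / (1 + (p2 / p1) ** 0.5)

# Illustrative figures: cities of 1,000,000 and 250,000 people, 60 miles apart.
print(demographic_force(1_000_000, 250_000, 60))
print(demographic_energy(1_000_000, 250_000, 60))
print(reilly_breakpoint(1_000_000, 250_000, 60))  # 40 miles from the larger city
```

At 40 miles from the larger city (20 from the smaller), both sides of Reilly's equation equal 625 persons per square mile, so the "pull" of the two cities balances there.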
Recently, a stochastic version has been proposed,[9] according to which the probability p_j of a site j to become urban is given by an expression in which w_k = 1 for urban sites and w_k = 0 otherwise, d_{j,k} is the distance between sites j and k, and C controls the overall growth rate. The parameter γ determines the degree of compactness.
|
https://en.wikipedia.org/wiki/Demographic_gravitation
|
Incomputing, awordis anyprocessordesign's natural unit of data. A word is a fixed-sizeddatumhandled as a unit by theinstruction setor the hardware of the processor. The number ofbitsor digits[a]in a word (theword size,word width, orword length) is an important characteristic of any specific processor design orcomputer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of theregistersin a processor are usually word-sized and the largest datum that can be transferred to and from theworking memoryin a single operation is a word in many (not all) architectures. The largest possibleaddresssize, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (kW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (kW) meaning 1,024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes.
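The ambiguity can be checked with a line of arithmetic; the 65,536-word figure is the one from the example above, and the labelling of the two conventions is only illustrative.

```python
WORDS = 65_536

print(WORDS / 1000, "kilowords with the metric prefix (k = 1000)")        # 65.536
print(WORDS / 1024, "'kilowords' in the power-of-two sense (1 K = 1024)")  # 64.0
```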
Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits.[1] Special-purpose designs like digital signal processors may have any word length from 4 to 80 bits.[1]
The size of a word can sometimes differ from the expected one because of backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).
Depending on how a computer is organized, word-size units may be used for:
When a computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture.
Charactersize was in the past (pre-variable-sizedcharacter encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the36-bit word, which is also a good size for the numeric properties of a floating point format.
After the introduction of theIBMSystem/360design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, abyte) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.
Early machine designs included some that used what is often termed avariable word length. In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, e.g., flag, orword mark. Such machines often usebinary-coded decimalin 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes theIBM 702,IBM 705,IBM 7080,IBM 7010,UNIVAC 1050,IBM 1401,IBM 1620, andRCA301.
Most of these machines work on one unit of memory at a time and since each instruction or datum is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on anIBM 1620 Model Itake 8 cycles (160 μs) just to read the 12 digits of the instruction (theModel IIreduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes a variable number of cycles, depending on the size of the operands.
The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, theword-addressablemachine approach, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions.
When byte processing is to be a significant part of the workload, it is usually more advantageous to use thebyte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then.
When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on theIBM 7030[4]("Stretch"), a floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1-64 bits, a byte size of 1-8 bits and an accumulator offset of 0-127 bits.
In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another is typically a two-instruction sequence: load the source byte into a register, then store it to the destination address.
Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of loading the word containing the source byte, shifting and masking to isolate that byte, loading the word containing the destination, masking out the destination byte and ORing in the shifted source byte, and storing the combined word back, as in the sketch below.
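A sketch of that shift-and-mask sequence, written in Python over a small word-addressed "memory", is shown below. The 32-bit word size, little-endian byte numbering within a word, and the byte addresses used are assumptions made purely for illustration.

```python
WORD_BITS = 32                      # assumed word size for illustration
BYTES_PER_WORD = WORD_BITS // 8

def get_byte(word: int, index: int) -> int:
    """Isolate byte `index` (0 = least significant) with a shift and a mask."""
    return (word >> (8 * index)) & 0xFF

def set_byte(word: int, index: int, value: int) -> int:
    """Clear the target byte with a mask, then OR in the shifted new value."""
    cleared = word & ~(0xFF << (8 * index)) & (2 ** WORD_BITS - 1)
    return cleared | ((value & 0xFF) << (8 * index))

# Move the byte at byte address 6 to byte address 13 in a word-addressed memory.
memory = [0x11223344, 0x55667788, 0xAABBCCDD, 0x00000000]
src, dst = 6, 13
byte = get_byte(memory[src // BYTES_PER_WORD], src % BYTES_PER_WORD)
memory[dst // BYTES_PER_WORD] = set_byte(memory[dst // BYTES_PER_WORD],
                                         dst % BYTES_PER_WORD, byte)
print(hex(memory[dst // BYTES_PER_WORD]))  # 0x6600: byte 0x66 now sits in byte 1 of word 3
```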
Alternatively many word-oriented machines implement byte operations with instructions using specialbyte pointersin registers or memory. For example, thePDP-10byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually apower of twomultiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only ashiftoperation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
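The index-to-offset point can be illustrated with a short sketch; the 8-byte element size is an assumed example of a power-of-two operand size.

```python
ELEMENT_SIZE = 8                          # bytes per element, a power of two (e.g. a 64-bit word)
SHIFT = ELEMENT_SIZE.bit_length() - 1     # log2(8) == 3

def offset_by_multiply(index: int) -> int:
    return index * ELEMENT_SIZE

def offset_by_shift(index: int) -> int:
    # Same result, but a shift is what hardware (or a compiler) can use
    # instead of a multiply when the element size is a power of two.
    return index << SHIFT

assert all(offset_by_multiply(i) == offset_by_shift(i) for i in range(1000))
print(offset_by_shift(37))                # byte offset 296 of element 37
```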
As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintainbackward compatibilitywhile extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family.
In the mid-1970s,DECdesigned theVAXto be a 32-bit successor of the 16-bitPDP-11. They usedwordfor a 16-bit quantity, whilelongwordreferred to a 32-bit quantity; this terminology is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called aword, while a quantity that is one half a word would be called ahalfword. In fitting with this scheme, a VAXquadwordis 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bitAlpha.
Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word length to the next, some APIs and documentation define or refer to an older (and thus shorter) word length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming-language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different-sized words refer to them as WORD (16 bits), DWORD (32 bits), and QWORD (64 bits), as illustrated below.
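A small sketch of those fixed widths, using ctypes fixed-width integers as stand-ins for the Windows type names (the mapping here is only for illustration, not the Windows headers themselves), is:

```python
import ctypes

# Fixed-width stand-ins for the Windows API names, regardless of the CPU's
# native word size.
WORD = ctypes.c_uint16    # 16 bits, the original 8086 word
DWORD = ctypes.c_uint32   # "double word", 32 bits
QWORD = ctypes.c_uint64   # "quad word", 64 bits

for name, t in (("WORD", WORD), ("DWORD", DWORD), ("QWORD", QWORD)):
    print(name, ctypes.sizeof(t) * 8, "bits")

# The platform's pointer size shows the word size the program actually runs with.
print("native pointer:", ctypes.sizeof(ctypes.c_void_p) * 8, "bits")
```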
A similar phenomenon has developed inIntel'sx86assembly language– because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d" or "q" identifiers denoting "double-", "quad-" or "double-quad-", which are in terms of the architecture's original 16-bit word size.
An example with a different word size is theIBMSystem/360family. In theSystem/360 architecture,System/370 architectureandSystem/390architecture, there are 8-bitbytes, 16-bithalfwords, 32-bitwords and 64-bitdoublewords. Thez/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bithalfwords, 32-bitwords, and 64-bitdoublewords, and additionally features 128-bitquadwords.
In general, new processors must use the same data word lengths and virtual address widths as an older processor to havebinary compatibilitywith that older processor.
Often carefully written source code – written withsource-code compatibilityandsoftware portabilityin mind – can be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.
|
https://en.wikipedia.org/wiki/Word_(computer_architecture)
|
Incomputer science,resource starvationis a problem encountered inconcurrent computingwhere aprocessis perpetually denied necessaryresourcesto process its work.[1]Starvation may be caused by errors in a scheduling ormutual exclusionalgorithm, but can also be caused byresource leaks, and can be intentionally caused via adenial-of-service attacksuch as afork bomb.
When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free, lockout-free,[2] or said to have finite bypass.[3] This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm; the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.[3]
Starvation is usually caused by an overly simplisticscheduling algorithm. For example, if a (poorly designed)multi-tasking systemalways switches between the first two tasks while a third never gets to run, then the third task is being starved ofCPU time. The scheduling algorithm, which is part of thekernel, is supposed to allocate resources equitably; that is, the algorithm should allocate resources so that no process perpetually lacks necessary resources.
Many operating system schedulers employ the concept of process priority. A high priority process A will run before a low priority process B. If the high priority process (process A) blocks and never yields, the low priority process (B) will (in some systems) never be scheduled—it will experience starvation. If there is an even higher priority process X, which is dependent on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called apriority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation.
In computer networks, especially wireless networks,scheduling algorithmsmay suffer from scheduling starvation. An example ismaximum throughput scheduling.
Starvation is similar to deadlock in that both cause a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.[3]
A possible solution to starvation is to use a scheduling algorithm with a priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.[4]
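A minimal sketch of aging in a priority scheduler might look like the following; the priority values, the aging step of 1, and the task names are invented for the example.

```python
import itertools

class AgingScheduler:
    """Toy priority scheduler with aging (lower number = higher priority).

    Every time a task is dispatched, every task still waiting has its
    effective priority improved, so even a low-priority task is bypassed
    at most a finite number of times.
    """
    AGE_STEP = 1

    def __init__(self):
        self._tasks = []               # entries: [effective_priority, arrival_seq, name]
        self._seq = itertools.count()  # tie-breaker to keep ordering stable

    def submit(self, name, priority):
        self._tasks.append([priority, next(self._seq), name])

    def pick_next(self):
        self._tasks.sort()             # best effective priority (and oldest) first
        chosen = self._tasks.pop(0)
        for task in self._tasks:       # aging: waiting tasks creep up in priority
            task[0] -= self.AGE_STEP
        return chosen[2]

sched = AgingScheduler()
sched.submit("batch job", priority=10)
dispatched = []
for i in range(30):                    # high-priority work keeps arriving...
    sched.submit(f"interactive-{i}", priority=1)
    dispatched.append(sched.pick_next())
# ...yet the batch job is still dispatched after a bounded wait.
print("batch job ran at step", dispatched.index("batch job"))   # step 9 here
```

Without the aging loop in pick_next, the continual stream of priority-1 tasks would keep the batch job waiting forever.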
|
https://en.wikipedia.org/wiki/Starvation_(computer_science)
|
Adictionary coder, also sometimes known as asubstitution coder, is a class oflossless data compressionalgorithms which operate by searching for matches between the text to be compressed and a set ofstringscontained in adata structure(called the 'dictionary') maintained by the encoder. When the encoder finds such a match, it substitutes a reference to the string's position in the data structure.
Some dictionary coders use a 'static dictionary', one whose full set of strings is determined before coding begins and does not change during the coding process. This approach is most often used when the message or set of messages to be encoded is fixed and large; for instance, an application that stores the contents of a book in the limited storage space of a PDA generally builds a static dictionary from a concordance of the text and then uses that dictionary to compress the text. This scheme of using Huffman coding to represent indices into a concordance has been called "Huffword".[1]
In a related and more general method, a dictionary is built from redundancy extracted from a data environment (various input streams); that dictionary is then used statically to compress a further input stream. For example, a dictionary built from old English texts can then be used to compress a book.[2]
More common are methods where the dictionary starts in some predetermined state but the contents change during the encoding process, based on the data that has already been encoded. Both theLZ77andLZ78algorithms work on this principle. In LZ77, acircular buffercalled the "sliding window" holds the lastNbytes of data processed. This window serves as the dictionary, effectively storingeverysubstring that has appeared in the pastNbytes as dictionary entries. Instead of a single index identifying a dictionary entry, two values are needed: thelength, indicating the length of the matched text, and theoffset(also called thedistance), indicating that the match is found in the sliding window startingoffsetbytes before the current text.
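A toy LZ77-style encoder along these lines might look like the following sketch. The window size, minimum match length, and token format are illustrative choices; real implementations allow overlapping matches, use hash chains or similar structures for fast search, and entropy-code the output.

```python
def lz77_compress(data: bytes, window: int = 4096, min_match: int = 3):
    """Tiny LZ77-style encoder for illustration only.

    Emits ('lit', byte) or ('match', offset, length) tokens, where offset
    counts backwards from the current position into the sliding window.
    For simplicity this sketch disallows matches that overlap the current
    position and uses a brute-force search of the window.
    """
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data)
                   and j + length < i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(("match", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

print(lz77_compress(b"abcabcabcabcx"))
```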
LZ78 uses a more explicit dictionary structure; at the beginning of the encoding process, the dictionary is empty. An index value of zero is used to represent the end of a string, so the first index of the dictionary is one. At each step of the encoding process, if there is no match, then the last matching index (or zero) and character are both added to the dictionary and output to the compressed stream. If there is a match, then the working index is updated to the matching index, and nothing is output.
LZW is similar to LZ78, but the dictionary is initialized to all possible symbols. The typical implementation works with 8-bit symbols, so the dictionary "codes" for hex 00 to hex FF (decimal 255) are pre-defined. Dictionary entries are added starting with code value hex 100. Unlike LZ78, if a match is not found (or if the end of the data is reached), then only the dictionary code is output, without an accompanying character. This creates a potential issue since the decoder output is one step behind the dictionary. Refer to LZW for how this is handled. Enhancements to LZW include handling symbol sizes other than 8 bits and having reserved codes to reset the dictionary and to indicate end of data.
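A minimal LZW encoder following this description might look like the sketch below. The output is left as a list of integer codes; variable-width code packing and the dictionary-reset and end-of-data codes mentioned above are omitted.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW encoder: dictionary pre-loaded with all 256 single bytes,
    new entries assigned codes starting at 0x100."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 0x100
    out = []
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                   # keep extending the match
        else:
            out.append(dictionary[current])       # emit code for longest match
            dictionary[candidate] = next_code     # learn the new string
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(dictionary[current])           # flush the final match
    return out

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```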
Brotliis an example of a commonly-used coder that is initialised with a pre-defined dictionary, but later goes on to use more sophisticated content modelling. The Brotli dictionary consists largely of natural-language words and HTML and JavaScript fragments, based on an analysis of web traffic.[3]
|
https://en.wikipedia.org/wiki/Dictionary_coder
|
Karma(/ˈkɑːrmə/, fromSanskrit:कर्म,IPA:[ˈkɐɾmɐ]ⓘ;Pali:kamma) is an ancient Indian concept that refers to an action, work, or deed, and its effect or consequences.[1]InIndian religions, the term more specifically refers to a principle ofcause and effect, often descriptively called theprinciple of karma, wherein individuals' intent and actions (cause) influence their future (effect):[2]Good intent and good deeds contribute to good karma and happierrebirths, while bad intent and bad deeds contribute to bad karma and worse rebirths. In some scriptures, however, there is no link between rebirth and karma.[3][4]
InHinduism, karma is traditionally classified into four types: Sanchita karma (accumulated karma from past actions across lifetimes), Prārabdha karma (a portion of Sanchita karma that is currently bearing fruit and determines the circumstances of the present life), Āgāmi karma (future karma generated by present actions), and Kriyamāṇa karma (immediate karma created by current actions, which may yield results in the present or future).[5]
Karma is often misunderstood as fate, destiny, or predetermination.[6]Fate, destiny or predetermination has specific terminology in Sanskrit and is calledPrarabdha.
The concept of karma is closely associated with the idea of rebirth in many schools of Indian religions (particularlyin Hinduism,Buddhism,Jainism, andSikhism),[7]as well asTaoism.[8]In these schools, karma in the present affects one's future in the current life as well as the nature and quality of future lives—one'ssaṃsāra.[9][10]
ManyNew Agersbelieve in karma, treating it as a law of cause and effect that assures cosmic balance, although in some cases they stress that it is not a system that enforces punishment for past actions.[11]
The termkarma(Sanskrit:कर्म;Pali:kamma) refers to both the executed 'deed, work, action, act' and the 'object, intent'.[3]
Wilhelm Halbfass(2000) explains karma (karman) by contrasting it with theSanskritwordkriya:[3]whereaskriyais the activity along with the steps and effort in action,karmais (1) the executed action as a consequence of that activity, as well as (2) the intention of the actor behind an executed action or a planned action (described by some scholars[12]as metaphysical residue left in the actor). A good action creates good karma, as does good intent. A bad action creates bad karma, as does bad intent.[3]
Difficulty in arriving at a definition of karma arises because of the diversity of views among theschools of Hinduism; some, for example, considerkarmaandrebirthlinked and simultaneously essential, some consider karma but not rebirth to be essential, and a few discuss and conclude karma and rebirth to be flawed fiction.[13]BuddhismandJainismhave their own karma precepts. Thus, karma has not one, but multiple definitions and different meanings.[14]It is a concept whose meaning, importance, and scope varies between the various traditions that originated in India, and various schools in each of these traditions. According to Manu Doshi, all Aryan philosophies accept karma but Jainism has gone deeper into this subject.[15]Wendy O'Flahertyclaims that, furthermore, there is an ongoing debate regarding whether karma is a theory, a model, a paradigm, a metaphor, or ametaphysicalstance.[16]
Karmaalso refers to a conceptual principle that originated in India, often descriptively called theprinciple of karma, and sometimes thekarma-theoryor thelaw of karma.[17]
In the context of theory,karmais complex and difficult to define.[16]Different schools ofIndologyderive different definitions for the concept from ancient Indian texts; their definition is some combination of (1) causality that may beethicalor non-ethical; (2) ethicization, i.e., good or bad actions have consequences; and (3) rebirth.[16][18]Other Indologists include in the definition that which explains the present circumstances of an individual with reference to his or her actions in the past. These actions may be those in a person's current life, or, in some schools of Indian traditions, possibly actions from their past lives; furthermore, the consequences may result in the current life, or a person's future lives.[16][19]The law of karma operates independent of any deity or any process of divine judgment.[20]
A common theme to theories of karma is itsprinciple of causality.[17]This relationship between karma and causality is a central motif in all schools ofHindu,Buddhist, andJainthought.[21]One of the earliest associations of karma to causality occurs in theBrihadaranyaka Upanishadverses 4.4.5–6:
Now as a man is like this or like that,according as he acts and according as he behaves, so will he be;a man of good acts will become good, a man of bad acts, bad;he becomes pure by pure deeds, bad by bad deeds;And here they say that a person consists of desires,and as is his desire, so is his will;and as is his will, so is his deed;and whatever deed he does, that he will reap.
The theory of karma as causation holds that: (1) executed actions of an individual affect the individual and the life he or she lives, and (2) the intentions of an individual affect the individual and the life he or she lives. Disinterested or unintentional actions do not have the same positive or negative karmic effect as interested and intentional actions. In Buddhism, for example, actions that are performed, or arise, or originate without any bad intent, such as covetousness, are considered non-existent in karmic impact or neutral in influence to the individual.[24]
Another causality characteristic, shared by karmic theories, is thatlike deedslead tolike effects. Thus, good karma produces good effect on the actor, while bad karma produces bad effect. This effect may be material, moral, or emotional – that is, one's karma affects both one's happiness and unhappiness.[21]The effect of karma need not be immediate; the effect of karma can be later in one's current life, and in some schools it extends to future lives.[25]
The consequence or effects of one's karma can be described in two forms:phalaandsamskara. Aphala(lit.'fruit' or 'result') is the visible or invisible effect that is typically immediate or within the current life. In contrast, asamskara(Sanskrit:संस्कार) is an invisible effect, produced inside the actor because of the karma, transforming the agent and affecting his or her ability to be happy or unhappy in their current and future lives. The theory of karma is often presented in the context ofsamskaras.[21][26]
Karl Potter andHarold Cowardsuggest that karmic principle can also be understood as a principle of psychology and habit.[17][27][note 2]Karma seeds habits (vāsanā), and habits create the nature of man. Karma also seedsself perception, and perception influences how one experiences life-events. Both habits and self perception affect the course of one's life. Breaking bad habits is not easy: it requires conscious karmic effort.[17][29]Thus, psyche and habit, according to Potter and Coward, link karma to causality in ancient Indian literature.[17][27]The idea of karma may be compared to the notion of a person's 'character', as both are an assessment of the person and determined by that person's habitual thinking and acting.[10]
The second theme common to karma theories is ethicization. This begins with the premise that every action has a consequence,[9]which will come to fruition in either this life or a future life; thus, morally good acts will have positive consequences, whereas bad acts will produce negative results. An individual's present situation is thereby explained by reference to actions in his present or in previous lifetimes. Karma is not itself 'reward and punishment', but the law that produces consequence.[30]Wilhelm Halbfassnotes that good karma is considered asdharmaand leads topunya('merit'), while bad karma is consideredadharmaand leads topāp('demerit, sin').[31]
Reichenbach (1988) suggests that the theories of karma are anethical theory.[21]This is so because the ancient scholars of India linked intent and actual action to the merit, reward, demerit, and punishment. A theory without ethical premise would be a purecausal relation; the merit or reward or demerit or punishment would be same regardless of the actor's intention. In ethics, one's intentions, attitudes, and desires matter in the evaluation of one's action. Where the outcome is unintended, the moral responsibility for it is less on the actor, even though causal responsibility may be the same regardless.[21]A karma theory considers not only the action, but also the actor's intentions, attitude, and desires before and during the action. The karma concept thus encourages each person to seek and live a moral life, as well as avoid an immoral life. The meaning and significance of karma is thus as a building-block of an ethical theory.[32]
The third common theme of karma theories is the concept of reincarnation or the cycle of rebirths (saṃsāra).[9][33][34] Rebirth is a fundamental concept of Hinduism, Buddhism, Jainism, and Sikhism.[10] Rebirth, or saṃsāra, is the concept that all life forms go through a cycle of reincarnation, that is, a series of births and rebirths. The rebirths and consequent life may be in a different realm, condition, or form. The karma theories suggest that the realm, condition, and form depend on the quality and quantity of karma.[35] In schools that believe in rebirth, every living being's soul transmigrates (recycles) after death, carrying the seeds of karmic impulses from the life just completed into another life and lifetime of karmas.[9][14] This cycle continues indefinitely, except for those who consciously break it by reaching moksha. Those who break the cycle reach the realm of the gods; those who do not remain within the cycle.
The concept has been intensely debated in ancient literature of India; with different schools of Indian religions considering the relevance of rebirth as either essential, or secondary, or unnecessary fiction.[13]Hiriyanna (1949) suggests rebirth to be a necessary corollary of karma;[36]Yamunacharya (1966) asserts that karma is a fact, while reincarnation is a hypothesis;[37]and Creel (1986) suggests that karma is a basic concept, rebirth is a derivative concept.[38]
The theory of 'karma and rebirth' raises numerous questions – such as how, when, and why did the cycle start in the first place, what is the relative Karmic merit of one karma versus another and why, and what evidence is there that rebirth actually happens, among others. Various schools of Hinduism realized these difficulties, debated their own formulations – some reaching what they considered as internally consistent theories – while other schools modified and de-emphasized it; a few schools in Hinduism such asCharvakas(or Lokayata) abandoned the theory of 'karma and rebirth' altogether.[3][31][39][40]Schools of Buddhism consider karma-rebirth cycle as integral to their theories ofsoteriology.[41][42]
TheVedic Sanskritwordkárman-(nominativekárma) means 'work' or 'deed',[44]often used in the context ofSrautarituals.[45]In theRigveda, the word occurs some 40 times.[44]InSatapatha Brahmana1.7.1.5,sacrificeis declared as the "greatest" of works;Satapatha Brahmana10.1.4.1 associates the potential of becomingimmortal(amara) with the karma of theagnicayanasacrifice.[44]
In the early Vedic literature, the concept of karma is also present beyond the realm of rituals or sacrifices. The Vedic language includes terms for sins and vices such as āgas, agha, enas, pāpa/pāpman, duṣkṛta, as well as for virtues and merit like sukṛta and puṇya, along with the neutral term karman.
Whatever good deed man does that is inside the Vedi; and whatever evil he does that is outside the Vedi.
The verse refers to the evaluation of virtuous and sinful actions in the afterlife. Regardless of their application in rituals (whether within or outside the Vedi), the concepts of good and evil here broadly represent merits and sins.
What evil is done here by man, that it (i.e. speech =Brahman) makes manifest. Although he thinks that he does it secretly, as it were, still it makes it manifest. Verily, therefore one should not commit evil.
This is the eternal greatness of the Brahmin. He does not increase by kárman, nor does he become less. Hisātmanknows the path. Knowing him (the ātman) one is not polluted by evil karman.
The Vedic words for "action" and "merit" in pre-Upaniṣadic texts carry moral significance and are not solely linked to ritual practices. The word karman simply means "action," which can be either positive or negative, and is not always associated with religious ceremonies; its predominant association with ritual in the Brāhmaṇa texts is likely a reflection of their ritualistic nature. In the same vein, sukṛta (and subsequently, puṇya) denotes any form of "merit," whether it be ethical or ritualistic. In contrast, terms such as pāpa and duṣkṛta consistently represent morally wrong actions.[46]
The earliest clear discussion of the karma doctrine is in theUpanishads.[9][44]The doctrine occurs here in the context of a discussion of the fate of the individual after death.[47]For example, causality and ethicization is stated inBṛhadāraṇyaka Upaniṣad3.2.13:[48][49]
Truly, one becomes good through gooddeeds, and evil through evildeeds.
Some authors state that thesamsara(transmigration) and karma doctrine may be non-Vedic, and the ideas may have developed in the "shramana" traditions that precededBuddhismandJainism.[50]Others state that some of the complex ideas of the ancient emerging theory of karma flowed from Vedic thinkers to Buddhist and Jain thinkers.[16][51]The mutual influences between the traditions is unclear, and likely co-developed.[52]
Many philosophical debates surrounding the concept are shared by the Hindu, Jain, and Buddhist traditions, and the early developments in each tradition incorporated different novel ideas.[53]For example, Buddhists allowed karma transfer from one person to another and sraddha rites, but had difficulty defending the rationale.[53][54]In contrast, Hindu schools and Jainism would not allow the possibility of karma transfer.[55][56]
The concept of karma in Hinduism developed and evolved over centuries. The earliestUpanishadsbegan with the questions about how and why man is born, and what happens after death. As answers to the latter, the early theories in these ancient Sanskrit documents includepancagni vidya(the five fire doctrine),pitryana(the cyclic path of fathers), anddevayana(the cycle-transcending, path of the gods).[57]Those who perform superficial rituals and seek material gain, claimed these ancient scholars, travel the way of their fathers and recycle back into another life; those who renounce these, go into the forest and pursue spiritual knowledge, were claimed to climb into the higher path of the gods. It is these who break the cycle and are not reborn.[58]With the composition of the Epics – the common man's introduction todharmain Hinduism – the ideas of causality and essential elements of the theory of karma were being recited in folk stories. For example:
As a man himself sows, so he himself reaps; no man inherits the good or evil act of another man. The fruit is of the same quality as the action.
The 6th chapter of theAnushasana Parva(the Teaching Book), the 13th book of theMahabharata, opens withYudhishthiraaskingBhishma: "Is the course of a person's life already destined, or can human effort shape one's life?"[60]The future, replies Bhishma, is both a function of current human effort derived from free will and past human actions that set the circumstances.[61]Over and over again, the chapters of Mahabharata recite the key postulates of karma theory. That is: intent and action (karma) has consequences; karma lingers and doesn't disappear; and, all positive or negative experiences in life require effort and intent.[62]For example:
Happiness comes due to good actions, suffering results from evil actions,by actions, all things are obtained, by inaction, nothing whatsoever is enjoyed.If one's action bore no fruit, then everything would be of no avail,if the world worked from fate alone, it would be neutralized.
Over time, various schools of Hinduism developed many different definitions of karma, some making karma appear quite deterministic, while others make room for free will and moral agency.[14]Among the six most studied schools of Hinduism, the theory of karma evolved in different ways, as their respective scholars reasoned and attempted to address the internal inconsistencies, implications and issues of the karma doctrine. According to ProfessorWilhelm Halbfass,[3]
The above schools illustrate the diversity of views, but are not exhaustive. Each school has sub-schools in Hinduism, such as that of non-dualism and dualism under Vedanta. Furthermore, there are other schools of Indian philosophy, such asCharvaka(or Lokayata; thematerialists), that denied the theory of karma-rebirth, as well as the existence of God; to this non-Vedic school, the properties of things come from the nature of things.Causalityemerges from the interaction, actions, and nature of things and people, making determinative principles such as karma or God unnecessary.[70][71]
Karma andkarmaphalaare fundamental concepts in Buddhism,[72][73]which explain how our intentional actions keep us tied to rebirth insamsara, whereas the Buddhist path, as exemplified in theNoble Eightfold Path, shows us the way out ofsamsara.[74][75]
The cycle of rebirth is determined by karma, literally 'action'.[76][note 4]Karmaphala(whereinphalameans 'fruit, result')[82][83][84]refers to the 'effect' or 'result' of karma.[85][72]The similar termkarmavipaka(whereinvipākameans 'ripening') refers to the 'maturation, ripening' of karma.[83][86][87]
In the Buddhist tradition,karmarefers to actions driven by intention (cetanā),[88][89][84][note 5]a deed done deliberately through body, speech or mind, which leads to future consequences.[92]TheNibbedhika Sutta,Anguttara Nikaya6.63:
Intention (cetana) I tell you, is kamma. Intending, one does kamma by way of body, speech, & intellect.[93][note 6]
How these intentional actions lead to rebirth, and how the idea of rebirth is to be reconciled with the doctrines ofimpermanenceandno-self,[95][note 7]is a matter of philosophical inquiry in the Buddhist traditions, for which several solutions have been proposed.[76]In early Buddhism, no explicit theory of rebirth and karma is worked out,[79]and "the karma doctrine may have been incidental to early Buddhist soteriology."[80][81]In early Buddhism, rebirth is ascribed to craving or ignorance.[77][78]Unlike that of Jains, Buddha's teaching of karma is not strictly deterministic, but incorporated circumstantial factors such as otherNiyamas.[96][97][note 8]It is not a rigid and mechanical process, but a flexible, fluid and dynamic process.[98]There is no set linear relationship between a particular action and its results.[97]The karmic effect of a deed is not determined solely by the deed itself, but also by the nature of the person who commits the deed, and by the circumstances in which it is committed.[97][99]Karmaphalais not a "judgement" enforced by a God, Deity or other supernatural being that controls the affairs of the Cosmos. Rather,karmaphalais the outcome of a natural process of cause and effect.[note 9]Within Buddhism, the real importance of the doctrine of karma and its fruits lies in the recognition of the urgency to put a stop to the whole process.[101][102]TheAcintita Suttawarns that "the results of karma" is one of the four incomprehensible subjects (oracinteyya),[103][104]subjects that are beyond all conceptualization,[103]and cannot be understood with logical thought or reason.[note 10]
Nichiren Buddhismteaches that transformation and change through faith and practice changes adverse karma—negative causes made in the past that result in negative results in the present and future—to positive causes for benefits in the future.[108]
In Jainism, karma conveys a totally different meaning from that commonly understood in Hindu philosophy and western civilization.[109] Jain philosophy is one of the oldest Indian philosophies that completely separates body (matter) from the soul (pure consciousness).[110] In Jainism, karma is referred to as karmic dirt, as it consists of very subtle particles of matter that pervade the entire universe.[111] Karmas are attracted to the karmic field of a soul due to vibrations created by activities of mind, speech, and body as well as various mental dispositions. Hence the karmas are the subtle matter surrounding the consciousness of a soul. When these two components (consciousness and karma) interact, we experience the life we know at present. Jain texts expound that seven tattvas (truths or fundamentals) constitute reality. These are:[112]
According toPadmanabh Jaini,
This emphasis on reaping the fruits only of one's own karma was not restricted to the Jainas; both Hindus and Buddhist writers have produced doctrinal materials stressing the same point. Each of the latter traditions, however, developed practices in basic contradiction to such belief. In addition toshrardha(the ritual Hindu offerings by the son of deceased), we find among Hindus widespread adherence to the notion of divine intervention in ones fate, while Buddhists eventually came to propound such theories like boon-granting bodhisattvas, transfer of merit and like. Only the Jainas have been absolutely unwilling to allow such ideas to penetrate their community, despite the fact that there must have been tremendous amount of social pressure on them to do so.[113]
The relationship between the soul and karma, states Padmanabh Jaini, can be explained with the analogy of gold. Just as gold is always found mixed with impurities in its natural state, Jainism holds that the soul is not pure at its origin but is always impure and defiled, like natural gold. One can exert effort and purify gold; similarly, Jainism states that the defiled soul can be purified by the proper refining methodology.[114] Karma either defiles the soul further, or refines it to a cleaner state, and this affects future rebirths.[115] Karma is thus an efficient cause (nimitta) in Jain philosophy, but not the material cause (upadana). The soul is believed to be the material cause.[116]
The key points of the theory of karma in Jainism can be stated as follows:
There are eight types of Karma which attach a soul to Samsara (the cycle of birth and death):[119][120]
InSikhism, all living beings are described as being under the influence of the three qualities ofmaya. Always present together in varying mix and degrees, these three qualities ofmayabind the soul to the body and to the earth plane. Above these three qualities is the eternal time. Due to the influence of three modes ofmaya'snature,jivas(individual beings) perform activities under the control and purview of the eternal time. These activities are calledkarma, wherein the underlying principle is that karma is the law that brings back the results of actions to the person performing them.
This life is likened to a field in which our karma is the seed. We harvest exactly what we sow; no less, no more. This infallible law of karma holds everyone responsible for what the person is or is going to be. Based on the total sum of past karma, some feel close to the Pure Being in this life and others feel separated. This is the law of karma inGurbani(Sri Guru Granth Sahib). Like other Indian and oriental schools of thought, the Gurbani also accepts the doctrines of karma and reincarnation as the facts of nature.[121]
David Ownby, a scholar of Chinese history at the University of Montreal,[122]asserts thatFalun Gongdiffers from Buddhism in its definition of the term "karma" in that it is taken not as a process of award and punishment, but as an exclusively negative term. The Chinese termde, or 'virtue', is reserved for what might otherwise be termed 'good karma' in Buddhism. Karma is understood as the source of all suffering – what Buddhism might refer to as 'bad karma'. According toLi Hongzhi, the founder of Falun Gong: "A person has done bad things over his many lifetimes, and for people this results in misfortune, or for cultivators, its karmic obstacles, so there's birth, aging, sickness, and death. This is ordinary karma."[123]
Falun Gong teaches that the spirit is locked in the cycle of rebirth, also known assamsara,[124]due to the accumulation of karma.[125]This is a negative, black substance that accumulates in other dimensions lifetime after lifetime, by doing bad deeds and thinking bad thoughts. Falun Gong states that karma is the reason for suffering, and what ultimately blocks people from the truth of the universe and attainingenlightenment. At the same time, karma is also the cause of one's continued rebirth and suffering.[125]Li says that due to accumulation of karma, the human spirit upon death will reincarnate over and over again, until the karma is paid off or eliminated through cultivation, or the person is destroyed due to the bad deeds he has done.[125]
Ownby regards the concept of karma as a cornerstone of individual moral behaviour in Falun Gong, and also readily traceable to the Christian doctrine of "one reaps what one sows". Others say that Matthew 5:44 means no unbeliever will fully reap what they sow until they are judged by God after death in Hell. Ownby says Falun Gong is differentiated by a "system of transmigration", though, "in which each organism is the reincarnation of a previous life form, its current form having been determined by karmic calculation of the moral qualities of the previous lives lived." Ownby says the seeming unfairness of manifest inequities can then be explained, at the same time allowing a space for moral behaviour in spite of them.[126] In the same vein of Li's monism, matter and spirit are one, and karma is identified as a black substance which must be purged in the process of cultivation.[123]
According to Li,
Human beings all fell here from the many dimensions of the universe. They no longer met the requirements of the Fa at their given levels in the universe, and thus had to drop down. Just as we have said before, the heavier one's mortal attachments, the further down one drops, with the descent continuing until one arrives at the state of ordinary human beings.[127]
He says that, in the eyes of higher beings, the purpose of human life is not merely to be human, but to awaken quickly on Earth, a "setting of delusion," and return. "That is what they really have in mind; they are opening a door for you. Those who fail to return will have no choice but toreincarnate, with this continuing until they amass a huge amount of karma and are destroyed."[127]
Ownby regards this as the basis for Falun Gong's apparent "opposition to practitioners' takingmedicinewhen ill; they are missing an opportunity to work off karma by allowing an illness to run its course (suffering depletes karma) or to fight theillnessthrough cultivation."Benjamin Pennyshares this interpretation. Since Li believes that "karma is the primary factor that causes sickness in people," Penny asks: "if disease comes from karma and karma can be eradicated through cultivation ofxinxing, then what good will medicine do?"[128]Li himself states that he is not forbidding practitioners from taking medicine, maintaining that "What I'm doing is telling people the relationship between practicing cultivation and medicine-taking." Li also states that "An everyday person needs to take medicine when he gets sick."[129]Danny Schechter (2001) quotes a Falun Gong student who says "It is always an individual choice whether one should take medicine or not."[130]
Karma is an important concept inTaoism. Every deed is tracked by deities and spirits. Appropriate rewards or retribution follow karma, just like a shadow follows a person.[8]
The karma doctrine of Taoism developed in three stages.[131]In the first stage, causality between actions and consequences was adopted, with supernatural beings keeping track of everyone's karma and assigning fate (ming). In the second phase, transferability of karma ideas from Chinese Buddhism were expanded, and a transfer or inheritance of Karmic fate from ancestors to one's current life was introduced. In the third stage of karma doctrine development, ideas of rebirth based on karma were added. One could be reborn either as another human being or another animal, according to this belief. In the third stage, additional ideas were introduced; for example, rituals, repentance and offerings at Taoist temples were encouraged as it could alleviate Karmic burden.[131][132]
Interpreted asmusubi(産霊), a view of karma is recognized inShintoas a means of enriching, empowering, and affirming life.[133]Musubihas fundamental significance in Shinto, because creative development forms the basis of the Shinto worldview.[134]
Many deities are connected to musubi and have it in their names.
One of the significant controversies with the karma doctrine is whether it always impliesdestiny, and its implications on free will. This controversy is also referred to as themoral agencyproblem;[135]the controversy is not unique to karma doctrine, but also found in some form inmonotheistic religions.[136]
The free will controversy can be outlined in three parts:[135]
The explanations of and replies to the above free will problem vary by the specific school of Hinduism, Buddhism and Jainism. The schools of Hinduism, such asYogaandAdvaita Vedanta, that have emphasized current life over the dynamics of karma residue moving across past lives, allow free will.[14]Their arguments, as well as those of other schools, are threefold:
Other schools of Hinduism, as well as Buddhism and Jainism that do consider cycle of rebirths central to their beliefs and that karma from past lives affects one's present, believe that both free will (cetanā) and karma can co-exist; however, their answers have not persuaded all scholars.[135][139]
Another issue with the theory of karma is that it is psychologically indeterminate, suggests Obeyesekere (1968).[140]That is, if no one can know what their karma was in previous lives, and if the karma from past lives can determine one's future, then the individual is psychologically unclear what if anything he or she can do now to shape the future, be more happy, or reduce suffering. If something goes wrong, such as sickness or failure at work, the individual is unclear if karma from past lives was the cause, or the sickness was caused by curable infection and the failure was caused by something correctable.[140]
This psychological indeterminacy problem is also not unique to the theory of karma; it is found in every religion adopting the premise that God has a plan, or in some way influences human events. As with the karma-and-free-will problem above, schools that insist on primacy of rebirths face the most controversy. Their answers to the psychological indeterminacy issue are the same as those for addressing the free will problem.[139]
Some schools of Indian religions, particularly withinBuddhism, allow transfer of karma merit and demerit from one person to another. This transfer is an exchange of a non-physical quality, analogous to an exchange of physical goods between two human beings. The practice of karma transfer, or even its possibility, is controversial.[39][141]Karma transfer raises questions similar to those withsubstitutionary atonementand vicarious punishment. Critics argue that it undermines the ethical foundations of karma and dissociates the causality and ethicization in the theory from the moral agent. Proponents of some Buddhist schools suggest that the concept of karma merit transfer encourages religious giving and that such transfers are not a mechanism to transfer bad karma (i.e., demerit) from one person to another.
In Hinduism, Sraddha rites during funerals have been labelled as karma merit transfer ceremonies by a few scholars, a claim disputed by others.[142]Other schools in Hinduism, such as theYogaandAdvaita Vedanticphilosophies, and Jainism hold that karma can not be transferred.[16][18]
There has been an ongoing debate about karma theory and how it answers theproblem of eviland related problem oftheodicy. The problem of evil is a significant question debated in monotheistic religions with two beliefs:[143]
The problem of evil is then stated in formulations such as, "why does the omnibenevolent, omniscient and omnipotent God allow any evil and suffering to exist in the world?" SociologistMax Weberextended the problem of evilto Eastern traditions.[144]
The problem of evil, in the context of karma, has been long discussed in Eastern traditions, both in theistic and non-theistic schools; for example, inUttara MīmāṃsāSutras Book 2 Chapter 1;[145][146]the 8th century arguments byAdi SankarainBrahma Sutrabhasyawhere he posits that God cannot reasonably be the cause of the world because there exists moral evil, inequality, cruelty and suffering in the world;[147][148]and the 11th century theodicy discussion byRamanujainSri Bhasya.[149]Epics such as theMahabharata, for example, suggest three prevailing theories in ancient India as to why good and evil exist – one being that everything is ordained by God, another being karma, and a third citing chance events (yadrccha, यदृच्छा).[150][151]TheMahabharata, which includes Hindu deityVishnuin theavatarofKrishnaas one of the central characters, debates the nature and existence of suffering from these three perspectives, and includes a theory of suffering as arising from an interplay of chance events (such as floods and other events of nature), circumstances created by past human actions, and the current desires, volitions, dharma, adharma and current actions (purusakara) of people.[150][152][153]However, while karma theory in theMahabharatapresents alternative perspectives on the problem of evil and suffering, it offers no conclusive answer.[150][154]
Other scholars[155]suggest thatnontheisticIndian religious traditions do not assume an omnibenevolent creator, and some[156]theistic schools do not define or characterize their God(s) as monotheistic Western religions do and the deities have colorful, complex personalities; the Indian deities are personal and cosmic facilitators, and in some schools conceptualized like Plato'sDemiurge.[149]Therefore, the problem of theodicy in many schools of major Indian religions is not significant, or at least is of a different nature than in Western religions.[157]Many Indian religions place greater emphasis on developing the karma principle for first cause and innate justice with Man as focus, rather than developing religious principles with the nature and powers of God and divine judgment as focus.[158]Some scholars, particularly of theNyaya schoolof Hinduism and Sankara inBrahma Sutra bhasya, have posited that karma doctrine implies existence of god, who administers and affects the person's environment given that person's karma, but then acknowledge that it makes karma as violable, contingent and unable to address the problem of evil.[159]Arthur Herman states that karma-transmigration theory solves all three historical formulations to the problem of evil while acknowledging the theodicy insights of Sankara and Ramanuja.[160]
Some theistic Indian religions, such as Sikhism, suggest that evil and suffering are a human phenomenon and arise from the karma of individuals.[161]In other theistic schools, such as those in Hinduism and particularly its Nyaya school, karma is combined withdharma, and evil is explained as arising from human actions and intent that are in conflict with dharma.[149]In nontheistic religions such as Buddhism, Jainism and the Mimamsa school of Hinduism, karma theory is used to explain the cause of evil as well as to offer distinct ways to avoid or be unaffected by evil in the world.[147]
Those schools of Hinduism, Buddhism, and Jainism that rely on karma-rebirth theory have been critiqued for their theological explanation of a child's suffering from birth as the result of his or her sins in a past life.[162]Others disagree, and consider the critique flawed and a misunderstanding of the karma theory.[163]
Western culture, influenced by Christianity,[7]holds a notion similar to karma, as demonstrated in the phrase "what goes around comes around".
Mary Jo Meadow suggests karma is akin to "Christian notions ofsinand its effects."[164]She states that the Christian teaching on aLast Judgmentaccording to one's charity is a teaching on karma.[164]Christianity also teaches morals such asone reaps what one sows(Galatians6:7) andlive by the sword, die by the sword(Matthew26:52).[165]Most scholars, however, consider the concept of Last Judgment as different from karma, with karma as an ongoing process that occurs every day in one's life, while Last Judgment, by contrast, is a one-time review at the end of life.[166]
There is a concept in Judaism called in Hebrewmidah k'neged midah, which is often translated as "measure for measure".[167]The concept is used not so much in matters of law, but rather in matters ofdivine retributionfor a person's actions.David Wolpecomparedmidah k'neged midahto karma.[168]
Carl Jungonce opined on unresolved emotions and thesynchronicityof karma:
When an inner situation is not made conscious, it appears outside as fate.[169]
Popular methods for negatingcognitive dissonanceincludemeditation,metacognition,counselling,psychoanalysis, etc., whose aim is to enhance emotional self-awareness and thus avoid negative karma. This results in better emotional hygiene and reduced karmic impacts.[170]Lasting neuronal changes within theamygdalaand leftprefrontal cortexof the human brain, attributed to long-term meditation and metacognition techniques, have been reported in scientific studies.[171]This process of emotional maturation aspires to a goal ofIndividuationorself-actualisation. Suchpeak experiencesare hypothetically devoid of any karma (nirvanaormoksha).
The idea of karma was popularized in theWestern worldthrough the work of theTheosophical Society. In this conception, karma was a precursor to theNeopaganlaw of returnorThreefold Law,the idea that the beneficial or harmful effects one has on the world will return to oneself. Colloquially this may be summed up as 'what goes around comes around.'
TheosophistI. K. Taimniwrote, "Karma is nothing but the Law of Cause and Effect operating in the realm of human life and bringing about adjustments between an individual and other individuals whom he has affected by his thoughts, emotions and actions."[172]Theosophyalso teaches that when humans reincarnate they come back as humans only, not as animals or other organisms.[173]
|
https://en.wikipedia.org/wiki/Karma
|
Phishingis a form ofsocial engineeringand ascamwhere attackers deceive people into revealingsensitive information[1]or installingmalwaresuch asviruses,worms,adware, orransomware. Phishing attacks have become increasingly sophisticated and often transparently mirror the site being targeted, allowing the attacker to observe everything the victim does while navigating the site and to traverse any additional security boundaries along with the victim.[2]As of 2020, it is the most common type ofcybercrime, with theFederal Bureau of Investigation'sInternet Crime Complaint Centerreporting more incidents of phishing than any other type of cybercrime.[3]
The term "phishing" was first recorded in 1995 in thecrackingtoolkitAOHell, but may have been used earlier in the hacker magazine2600.[4][5][6]It is a variation offishingand refers to the use of lures to "fish" for sensitive information.[5][7][8]
Measures to prevent or reduce the impact of phishing attacks includelegislation, user education, public awareness, and technical security measures.[9]The importance of phishing awareness has increased in both personal and professional settings, with the share of businesses experiencing phishing attacks rising from 72% in 2017 to 86% in 2020[10]and 94% in 2023.[11]
Phishing attacks, often delivered viaemail spam, attempt to trick individuals into giving away sensitive information or login credentials. Most attacks are "bulk attacks" that are not targeted and are instead sent in bulk to a wide audience.[12]The goal of the attacker can vary, with common targets including financial institutions, email and cloud productivity providers, and streaming services.[13]The stolen information or access may be used to steal money, installmalware, or spear phish others within the target organization.[14]Compromised streaming service accounts may also be sold ondarknet markets.[15]
This type ofsocial engineeringattack can involve sending fraudulent emails or messages that appear to be from a trusted source, such as a bank or government agency. These messages typically redirect to a fake login page where users are prompted to enter their credentials.
Spear phishing is a targeted phishing attack that uses personalized messaging, especially e‑mails,[16]to trick a specific individual or organization into believing they are legitimate. It often utilizes personal information about the target to increase the chances of success.[17][18][19][20]These attacks often target executives or those in financial departments with access to sensitive financial data and services. Accountancy and audit firms are particularly vulnerable to spear phishing due to the value of the information their employees have access to.[21]
The Russian government-runThreat Group-4127 (Fancy Bear)(GRU Unit 26165) targetedHillary Clinton's2016 presidential campaignwith spear phishing attacks on over 1,800Googleaccounts, using theaccounts-google.comdomain to threaten targeted users.[22][23]
A study on spear phishing susceptibility among different age groups found that 43% of youth aged 18–25 years and 58% of older users clicked on simulated phishing links in daily e‑mails over 21 days. Older women had the highest susceptibility, while susceptibility in young users declined during the study, but remained stable among older users.[24]
Voice over IP(VoIP) is used in vishing or voice phishing attacks,[25]where attackers make automated phone calls to large numbers of people, often usingtext-to-speechsynthesizers, claiming fraudulent activity on their accounts. The attackers spoof the calling phone number to appear as if it is coming from a legitimate bank or institution. The victim is then prompted to enter sensitive information or connected to a live person who usessocial engineeringtactics to obtain information.[25]Vishing takes advantage of the public's lower awareness of phishing conducted by telephone and its greater trust in voice telephony compared to email.[26]
SMS phishing[27]or smishing[28][29]is a type of phishing attack that usestext messagesfrom a cell phone orsmartphoneto deliver a bait message.[30]The victim is usually asked to click a link, call a phone number, or contact anemailaddress provided by the attacker. They may then be asked to provideprivate information, such as login credentials for other websites.
The difficulty in identifying illegitimate links can be compounded on mobile devices due to the limited display of URLs in mobile browsers.[31]
Smishing can be just as effective as email phishing, as many smartphones have fast internet connectivity. Smishing messages may also come from unusual phone numbers.[32]
Page hijacking involves redirecting users to malicious websites orexploit kitsthrough the compromise of legitimate web pages, often usingcross site scripting.Hackersmay insert exploit kits such asMPackinto compromised websites to exploit legitimate users visiting the server. Page hijacking can also involve the insertion of maliciousinline frames, allowing exploit kits to load. This tactic is often used in conjunction withwatering holeattacks on corporate targets.[33]
A relatively new trend in online scam activity is "quishing", which meansQR Codephishing. The term is derived from "QR" (Quick Response) codes and "phishing", as scammers exploit the convenience of QR Codes to trick users into giving up sensitive data, by scanning a code containing an embedded malicious web site link. Unlike traditional phishing, which relies on deceptive emails or websites, quishing uses QR Codes to bypass email filters[34][35]and increase the likelihood that victims will fall for the scam, as people tend to trust QR Codes and may not scrutinize them as carefully as a URL or email link. The bogus codes may be sent by email, social media, or in some cases hard copy stickers are placed over legitimate QR Codes on such things as advertising posters and car park notices.[36][37]When victims scan the QR Code with their phone or device, they are redirected to a fake website designed to steal personal information, login credentials, or financial details.[34]
As QR Codes become more widely used for things like payments, event check-ins, and product information, quishing is emerging as a significant concern for digital security. Users are advised to exercise caution when scanning unfamiliar QR Codes and ensure they are from trusted sources, although the UK'sNational Cyber Security Centrerates the risk as lower than other types of lure.[38]
Traditional phishing attacks are typically limited to capturing user credentials directly inputted into fraudulent websites. However, the advent ofMan-in-the-Middle(MitM) phishing techniques has significantly advanced the sophistication of these attacks, enabling cybercriminals to bypasstwo-factor authentication(2FA) mechanisms during a user's active session on a web service. MitM phishing attacks employ intermediary tools that intercept communication between the user and the legitimate service.
Evilginx, originally created as an open-source tool for penetration testing and ethical hacking, has been repurposed by cybercriminals for MitM attacks. Evilginx works like a middleman, passing information between the victim and the real website without saving passwords or login codes. This makes it harder for security systems to detect, since they usually look for phishing sites that store stolen data. By grabbing login tokens and session cookies instantly, attackers can break into accounts and use them just like the real user, for as long as the session stays active.
Attackers employ various methods, including phishing emails, social engineering tactics, or distributing malicious links via social media platforms. Once the victim interacts with the counterfeit site, the MitM tool intercepts the authentication process, effectively bypassing 2FA protections.[39]
Phishing attacks often involve creating fakelinksthat appear to be from a legitimate organization.[40]These links may usemisspelled URLsorsubdomainsto deceive the user. In the following example URL,http://www.yourbank.example.com/, it can appear to the untrained eye as though the URL will take the user to theexamplesection of theyourbankwebsite; this URL points to the "yourbank" (i.e. phishing subdomain) section of theexamplewebsite (fraudster's domain name). Another tactic is to make the displayed text for a link appear trustworthy, while the actual link goes to the phisher's site. To check the destination of a link, many email clients and web browsers will show the URL in the status bar when themouseis hovering over it. However, some phishers may be able to bypass this security measure.[41]
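As a concrete illustration of the subdomain trick described above, the short Python sketch below (standard library only) extracts a URL's hostname and naively takes its last two labels. The helper name and the two-label heuristic are simplifications for illustration; a robust check should consult the Public Suffix List, because suffixes such as .co.uk span more than one label.

```python
from urllib.parse import urlsplit

def naive_registrable_domain(url: str) -> str:
    """Return the last two labels of a URL's hostname.

    Simplification for illustration only: a production check should use
    the Public Suffix List (e.g. via the tldextract package).
    """
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# The deceptive URL from the example above really belongs to example.com ...
print(naive_registrable_domain("http://www.yourbank.example.com/"))  # example.com
# ... whereas the bank's own site would be under yourbank.com.
print(naive_registrable_domain("http://www.yourbank.com/"))          # yourbank.com
```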
Internationalized domain names(IDNs) can be exploited viaIDN spoofing[42]orhomograph attacks[43]to allow attackers to create fake websites with visually identical addresses to legitimate ones. These attacks have been used by phishers to disguise malicious URLs using openURL redirectorson trusted websites.[44][45][46]An example of this is inhttp://www.exаmple.com/, where the third character is not theLatinletter 'a', but instead theCyrilliccharacter 'а'. When the victim clicks on the link, unaware that the third character is actually the Cyrillic letter 'а', they get redirected to the malicious sitehttp://www.xn--exmple-4nf.com/. Even digital certificates, such asSSL, may not protect against these attacks as phishers can purchase valid certificates and alter content to mimic genuine websites or host phishing sites without SSL.[47]
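A minimal sketch of how such look-alike hostnames can be flagged, using only the Python standard library. The built-in "idna" codec implements the older IDNA 2003 rules, so treat the output as illustrative rather than a complete homograph defence.

```python
import unicodedata

def inspect_hostname(host: str) -> None:
    """Report non-ASCII characters in a hostname and show its ASCII (punycode) form."""
    for ch in host:
        if ord(ch) > 127:
            print(f"suspicious character {ch!r}: {unicodedata.name(ch)}")
    try:
        # The built-in codec implements IDNA 2003; newer rules live in the
        # third-party 'idna' package.
        print("ASCII form:", host.encode("idna").decode("ascii"))
    except UnicodeError as exc:
        print("could not encode hostname:", exc)

# Hostname from the example above, with a Cyrillic 'а' in place of the Latin 'a'.
inspect_hostname("www.ex\u0430mple.com")
# Expected ASCII form, per the example above: www.xn--exmple-4nf.com
```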
Phishing often usessocial engineeringtechniques to trick users into performing actions such as clicking a link or opening an attachment, or revealing sensitive information. It often involves pretending to be a trusted entity and creating a sense of urgency,[48]like threatening to close or seize a victim's bank or insurance account.[49]
An alternative technique to impersonation-based phishing is the use offake newsarticles to trick victims into clicking on a malicious link. These links often lead to fake websites that appear legitimate,[50]but are actually run by attackers who may try to install malware or presentfake "virus" notificationsto the victim.[51]
Early phishing techniques can be traced back to the 1990s, whenblack hathackers and thewarezcommunity usedAOLto steal credit card information and commit other online crimes. The term "phishing" is said to have been coined by Khan C. Smith, a well-known spammer and hacker,[52]and its first recorded mention was found in the hacking toolAOHell, which was released in 1994. AOHell allowed hackers to impersonate AOL staff and sendinstant messagesto victims asking them to reveal their passwords.[53][54]In response, AOL implemented measures to prevent phishing and eventually shut down thewarez sceneon their platform.[55][56]
In the 2000s, phishing attacks became more organized and targeted. The first known direct attempt against a payment system,E-gold, occurred in June 2001, and shortly after theSeptember 11 attacks, a "post-9/11 id check" phishing attack followed.[57]The first known phishing attack against a retail bank was reported in September 2003.[58]Between May 2004 and May 2005, approximately 1.2 million computer users in the United States suffered losses caused by phishing, totaling approximatelyUS$929 million.[59]Phishing was recognized as a fully organized part of the black market, and specializations emerged on a global scale that provided phishing software for payment, which were assembled and implemented into phishing campaigns by organized gangs.[60][61]TheUnited Kingdombanking sector suffered from phishing attacks, with losses from web banking fraud almost doubling in 2005 compared to 2004.[62][63]In 2006, almost half of phishing thefts were committed by groups operating through the Russian Business Network based in St. Petersburg.[64]Email scams posing as theInternal Revenue Servicewere also used to steal sensitive data from U.S. taxpayers.[65]Social networking sitesare a prime target of phishing, since the personal details in such sites can be used inidentity theft;[66]In 2007, 3.6 million adults lostUS$3.2 billiondue to phishing attacks.[67]The Anti-Phishing Working Group reported receiving 115,370 phishing email reports from consumers with US and China hosting more than 25% of the phishing pages each in the third quarter of 2009.[68]
Phishing in the 2010s saw a significant increase in the number of attacks. In 2011, the master keys forRSASecurID security tokens were stolen through a phishing attack.[69][70]Chinese phishing campaigns also targeted high-ranking officials in the US and South Korean governments and military, as well as Chinese political activists.[71][72]According to Ghosh, phishing attacks increased from 187,203 in 2010 to 445,004 in 2012. In August 2013, Outbrain suffered a spear-phishing attack,[73]and in November 2013, 110 million customer and credit card records were stolen fromTargetcustomers through a phished subcontractor account.[74]The CEO and IT security staff were subsequently fired.[75]In August 2014, the iCloud leaks of celebrity photos were based on phishing e-mails sent to victims that looked like they came from Apple or Google.[76]In November 2014, phishing attacks onICANNgained administrative access to the Centralized Zone Data System; the attackers also obtained data about users in the system and access to ICANN's public Governmental Advisory Committee wiki, blog, and whois information portal.[77]
Fancy Bear was linked to spear-phishing attacks against thePentagonemail system in August 2015,[78][79]and the group used a zero-day exploit of Java in a spear-phishing attack on the White House and NATO.[80][81]Fancy Bear carried out spear phishing attacks on email addresses associated with the Democratic National Committee in the first quarter of 2016.[82][83]In August 2016, members of theBundestagand political parties such asLinken-faction leaderSahra Wagenknecht,Junge Union, and theCDUofSaarlandwere targeted by spear-phishing attacks suspected to be carried out by Fancy Bear. Also in August 2016, theWorld Anti-Doping Agencyreported the receipt of phishing emails sent to users of its database claiming to be official WADA communications, but consistent with the Russian hacking group Fancy Bear.[84][85][86]
In 2017, 76% of organizations experienced phishing attacks, with nearly half of theinformation securityprofessionals surveyed reporting an increase from 2016. In the first half of 2017, businesses and residents of Qatar were hit with over 93,570 phishing events in a three-month span.[87]In August 2017, customers ofAmazonfaced the Amazon Prime Day phishing attack, when hackers sent out seemingly legitimate deals to customers of Amazon. When Amazon's customers attempted to make purchases using the "deals", the transaction would not be completed, prompting the retailer's customers to input data that could be compromised and stolen.[88]In 2018, the company block.one, which developed the EOS.IO blockchain, was attacked by a phishing group who sent phishing emails to all customers aimed at intercepting the user's cryptocurrency wallet key; a later attack targeted airdrop tokens.[89]
Phishing attacks have evolved in the 2020s to include elements of social engineering, as demonstrated by the July 15, 2020,Twitterbreach. In this case, a 17-year-old hacker and accomplices set up a fake website resembling Twitter's internalVPNprovider used by remote working employees. Posing as helpdesk staff, they called multiple Twitter employees, directing them to submit their credentials to the fake VPN website.[90]Using the details supplied by the unsuspecting employees, they were able to seize control of several high-profile user accounts, including those ofBarack Obama,Elon Musk,Joe Biden, andApple Inc.'s company account. The hackers then sent messages to Twitter followers solicitingBitcoin, promising to double the transaction value in return. The hackers collected 12.86 BTC (about $117,000 at the time).[91]In the 2020s, phishingas a service(PhaaS) platforms likeDarculaallow attackers to easily fake trusted websites.[92]
There are anti-phishing websites which publish exact messages that have been recently circulating the internet, such asFraudWatch Internationaland Millersmiles. Such sites often provide specific details about the particular messages.[93][94]
As recently as 2007, the adoption of anti-phishing strategies by businesses needing to protect personal and financial information was low.[95]There are several different techniques to combat phishing, including legislation and technology created specifically to protect against phishing. These techniques include steps that can be taken by individuals, as well as by organizations. Phone, web site, and email phishing can now be reported to authorities, as describedbelow.
Effective phishing education, including conceptual knowledge[96]and feedback,[97][98]is an important part of any organization's anti-phishing strategy. While there is limited data on the effectiveness of education in reducing susceptibility to phishing,[99]much information on the threat is available online.[49]
Simulated phishingcampaigns, in which organizations test their employees' training by sending fake phishing emails, are commonly used to assess their effectiveness. One example is a study by theNational Library of Medicine, in which an organization received 858,200 emails during a 1-month testing period, with 139,400 (16%) being marketing and 18,871 (2%) being identified as potential threats. These campaigns are often used in the healthcare industry, as healthcare data is a valuable target for hackers. These campaigns are just one of the ways that organizations are working to combat phishing.[100]
Nearly all legitimate e-mail messages from companies to their customers contain an item of information that is not readily available to phishers. Some companies, for examplePayPal, always address their customers by their username in emails, so if an email addresses the recipient in a generic fashion ("Dear PayPal customer") it is likely to be an attempt at phishing.[101]Furthermore, PayPal offers various methods to determine spoof emails and advises users to forward suspicious emails to its spoof@PayPal.com address so that it can investigate and warn other customers. However, it is unsafe to assume that the presence of personal information alone guarantees that a message is legitimate,[102]and some studies have shown that the presence of personal information does not significantly affect the success rate of phishing attacks,[103]which suggests that most people do not pay attention to such details.
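As a toy illustration of the generic-greeting heuristic mentioned above, the following sketch flags messages that open with a generic salutation. The phrase list and sample messages are invented for the example, and real filters combine many more signals.

```python
# Invented phrase list; real mail filters combine many more signals.
GENERIC_GREETINGS = (
    "dear customer",
    "dear valued customer",
    "dear user",
    "dear member",
    "dear account holder",
    "dear paypal customer",
)

def opens_with_generic_greeting(body: str) -> bool:
    """Return True if the first line of the message uses a generic salutation."""
    lines = body.strip().splitlines()
    first = lines[0].lower() if lines else ""
    return any(first.startswith(greeting) for greeting in GENERIC_GREETINGS)

print(opens_with_generic_greeting("Dear PayPal customer,\nYour account is limited."))  # True
print(opens_with_generic_greeting("Dear Jane Doe,\nYour receipt is attached."))        # False
```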
Emails from banks and credit card companies often include partial account numbers, but research has shown that people tend to not differentiate between the first and last digits.[104]
A study on phishing attacks in game environments found thateducational gamescan effectively educate players against information disclosures and can increase awareness on phishing risk thus mitigating risks.[105]
TheAnti-Phishing Working Group, one of the largest anti-phishing organizations in the world, produces regular reports on trends in phishing attacks.[106]
A wide range of technical approaches are available to prevent phishing attacks reaching users or to prevent them from successfully capturing sensitive information.
Specializedspam filterscan reduce the number of phishing emails that reach their addressees' inboxes. These filters use a number of techniques includingmachine learning[107]andnatural language processingapproaches to classify phishing emails,[108][109]and reject email with forged addresses.[110]
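A minimal sketch of the machine-learning approach described above, assuming the third-party scikit-learn package; the four inline training messages are invented stand-ins for a real labelled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus: 1 = phishing, 0 = legitimate.
texts = [
    "Your account is suspended, verify your password at the link below",
    "Urgent: confirm your bank details to avoid account closure",
    "Minutes from Tuesday's project meeting are attached",
    "Lunch on Friday to celebrate the release?",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear classifier: the simplest common baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Please verify your password to restore access"]))
```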
Another popular approach to fighting phishing is to maintain a list of known phishing sites and to check websites against the list. One such service is theSafe Browsingservice.[111]Web browsers such asGoogle Chrome,Internet Explorer7,Mozilla Firefox2.0,Safari3.2, andOperaall contain this type of anti-phishing measure.[112][113][114][115][116]Firefox 2usedGoogleanti-phishing software. Opera 9.1 uses liveblacklistsfromPhishtank,cysconandGeoTrust, as well as livewhitelistsfrom GeoTrust. Some implementations of this approach send the visited URLs to a central service to be checked, which has raised concerns aboutprivacy.[117]According to a report by Mozilla in late 2006, Firefox 2 was found to be more effective thanInternet Explorer 7at detecting fraudulent sites in a study by an independent software testing company.[118]
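A simplified sketch of the list-checking idea, using an invented local set of known phishing hosts; real browsers query continuously updated services such as Safe Browsing rather than a hard-coded list.

```python
from urllib.parse import urlsplit

# Invented entries; real deployments pull frequently updated feeds from
# services such as PhishTank or Safe Browsing.
KNOWN_PHISHING_HOSTS = {
    "login-yourbank.example.net",
    "secure-update.example.org",
}

def is_blocklisted(url: str) -> bool:
    """Check a URL's hostname against the local blocklist."""
    host = (urlsplit(url).hostname or "").lower()
    return host in KNOWN_PHISHING_HOSTS

print(is_blocklisted("http://login-yourbank.example.net/account"))  # True
print(is_blocklisted("https://www.example.com/"))                   # False
```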
An approach introduced in mid-2006 involves switching to a special DNS service that filters out known phishing domains.[119]
To mitigate the problem of phishing sites impersonating a victim site by embedding its images (such aslogos), several site owners have altered the images to send a message to the visitor that a site may be fraudulent. The image may be moved to a new filename and the original permanently replaced, or a server can detect that the image was not requested as part of normal browsing, and instead send a warning image.[120][121]
TheBank of Americawebsite[122][123]was one of several that asked users to select a personal image (marketed asSiteKey) and displayed this user-selected image with any forms that request a password. Users of the bank's online services were instructed to enter a password only when they saw the image they selected. The bank has since discontinued the use of SiteKey. Several studies suggest that few users refrain from entering their passwords when images are absent.[124][125]In addition, this feature (like other forms oftwo-factor authentication) is susceptible to other attacks, such as those suffered by Scandinavian bankNordeain late 2005,[126]andCitibankin 2006.[127]
A similar system, in which an automatically generated "Identity Cue" consisting of a colored word within a colored box is displayed to each website user, is in use at other financial institutions.[128]
Security skins[129][130]are a related technique that involves overlaying a user-selected image onto the login form as a visual cue that the form is legitimate. Unlike the website-based image schemes, however, the image itself is shared only between the user and the browser, and not between the user and the website. The scheme also relies on amutual authenticationprotocol, which makes it less vulnerable to attacks that affect user-only authentication schemes.
Still another technique relies on a dynamic grid of images that is different for each login attempt. The user must identify the pictures that fit their pre-chosen categories (such as dogs, cars and flowers). Only after they have correctly identified the pictures that fit their categories are they allowed to enter their alphanumeric password to complete the login. Unlike the static images used on the Bank of America website, a dynamic image-based authentication method creates a one-time passcode for the login, requires active participation from the user, and is very difficult for a phishing website to correctly replicate because it would need to display a different grid of randomly generated images that includes the user's secret categories.[131]
Several companies offer banks and other organizations likely to suffer from phishing scams round-the-clock services to monitor, analyze and assist in shutting down phishing websites.[132]Automated detection of phishing content is still below accepted levels for direct action, with content-based analysis reaching success rates between 80% and 90%,[133]so most of the tools include manual steps to certify the detection and authorize the response.[134]Individuals can contribute by reporting phishing to both volunteer and industry groups,[135]such ascysconorPhishTank.[136]Phishing web pages and emails can be reported to Google.[137][138]
Organizations can implement two-factor ormulti-factor authentication(MFA), which requires a user to use at least two factors when logging in (for example, a user must both present asmart cardand apassword). This mitigates some risk: in the event of a successful phishing attack, the stolen password on its own cannot be reused to further breach the protected system. However, there are several attack methods which can defeat many of the typical systems.[139]MFA schemes such asWebAuthnaddress this issue by design.
On January 26, 2004, the U.S.Federal Trade Commissionfiled the first lawsuit against aCalifornianteenager suspected of phishing by creating a webpage mimickingAmerica Onlineand stealing credit card information.[140]Other countries have followed this lead by tracing and arresting phishers. A phishing kingpin, Valdir Paulo de Almeida, was arrested inBrazilfor leading one of the largest phishingcrime rings, which in two years stole betweenUS$18 millionandUS$37 million.[141]UK authorities jailed two men in June 2005 for their role in a phishing scam,[142]in a case connected to theU.S. Secret ServiceOperation Firewall, which targeted notorious "carder" websites.[143]In 2006, Japanese police arrested eight people for creating fake Yahoo Japan websites, netting themselves¥100 million(US$870,000)[144]and theFBIdetained a gang of sixteen in the U.S. and Europe in Operation Cardkeeper.[145]
SenatorPatrick Leahyintroduced the Anti-Phishing Act of 2005 toCongressin theUnited Stateson March 1, 2005. Thisbillaimed to impose fines of up to $250,000 and prison sentences of up to five years on criminals who used fake websites and emails to defraud consumers.[146]In the UK, theFraud Act 2006[147]introduced a general offense of fraud punishable by up to ten years in prison and prohibited the development or possession of phishing kits with the intention of committing fraud.[148]
Companies have also joined the effort to crack down on phishing. On March 31, 2005,Microsoftfiled 117 federal lawsuits in theU.S. District Court for the Western District of Washington. The lawsuits accuse "John Doe" defendants of obtaining passwords and confidential information. March 2005 also saw a partnership between Microsoft and theAustralian governmentteaching law enforcement officials how to combat various cyber crimes, including phishing.[149]Microsoft announced a planned further 100 lawsuits outside the U.S. in March 2006,[150]followed by the commencement, as of November 2006, of 129 lawsuits mixing criminal and civil actions.[151]AOLreinforced its efforts against phishing[152]in early 2006 with three lawsuits[153]seeking a total ofUS$18 millionunder the 2005 amendments to the Virginia Computer Crimes Act,[154][155]andEarthlinkhas joined in by helping to identify six men subsequently charged with phishing fraud inConnecticut.[156]
In January 2007, Jeffrey Brett Goodin of California became the first defendant convicted by a jury under the provisions of theCAN-SPAM Act of 2003. He was found guilty of sending thousands of emails toAOLusers, while posing as the company's billing department, which prompted customers to submit personal and credit card information. Facing a possible 101 years in prison for the CAN-SPAM violation and ten other counts includingwire fraud, the unauthorized use of credit cards, and the misuse of AOL's trademark, he was sentenced to serve 70 months. Goodin had been in custody since failing to appear for an earlier court hearing and began serving his prison term immediately.[157][158][159][160]
|
https://en.wikipedia.org/wiki/Phishing
|
Metadata(ormetainformation) is "datathat provides information about other data",[1]but not the content of the data itself, such as the text of a message or the image itself.[2]There are many distinct types of metadata, including:
Metadata is not strictly bound to one of these categories, as it can describe a piece of data in many other ways.
Metadata has various purposes. It can help usersfind relevant informationanddiscover resources. It can also help organize electronic resources, provide digital identification, and archive and preserve resources. Metadata allows users to access resources by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, and giving location information".[8]Metadata oftelecommunicationactivities includingInternettraffic is very widely collected by various national governmental organizations. This data is used for the purposes oftraffic analysisand can be used formass surveillance.[9]
Metadata was traditionally used in thecard catalogsoflibrariesuntil the 1980s when libraries converted their catalog data to digitaldatabases.[10]In the 2000s, as data and information were increasingly stored digitally, this digital data was described usingmetadata standards.[11]
The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary then, we have statements in an object language about subject descriptions of data and token codes for the data. We also have statements in a meta language describing the data relationships and transformations, and ought/is relations between norm and data."[12]
Unique metadata standards exist for different disciplines (e.g.,museumcollections,digital audio files,websites, etc.). Describing thecontentsandcontextof data ordata filesincreases its usefulness. For example, aweb pagemay include metadata specifying what software language the page is written in (e.g., HTML), what tools were used to create it, what subjects the page is about, and where to find more information about the subject. This metadata can automatically improve the reader's experience and make it easier for users to find the web page online.[13]ACDmay include metadata providing information about the musicians, singers, and songwriters whose work appears on the disc.
In many countries, government organizations routinely store metadata about emails, telephone calls, web pages, video traffic, IP connections, and cell phone locations.[14]
Metadata means "data about data". Metadata is defined as the data providing information about one or more aspects of the data; it is used to summarize basic information about data that can make tracking and working with specific data easier.[15]Some examples include:
For example, adigital imagemay include metadata that describes the size of the image, its color depth, resolution, when it was created, the shutter speed, and other data.[16]A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can also contain descriptions of page content, as well as key words linked to the content.[17]These links are often called "Metatags", which were used as the primary factor in determining order for a web search until the late 1990s.[17]The reliance on metatags in web searches was decreased in the late 1990s because of "keyword stuffing",[17]whereby metatags were being largely misused to trick search engines into thinking some websites had more relevance in the search than they really did.[17]
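A small standard-library sketch of reading the page-level "metatags" mentioned above from an HTML document; the sample HTML is invented for the example.

```python
from html.parser import HTMLParser

class MetaTagReader(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from an HTML page."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attributes = dict(attrs)
            if "name" in attributes and "content" in attributes:
                self.meta[attributes["name"].lower()] = attributes["content"]

# Invented sample page.
sample = """<html><head>
<meta name="description" content="An introductory page about metadata">
<meta name="keywords" content="metadata, cataloging, indexing">
</head><body>...</body></html>"""

reader = MetaTagReader()
reader.feed(sample)
print(reader.meta)
```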
Metadata can be stored and managed in adatabase, often called ametadata registryormetadata repository.[18]However, without context and a point of reference, it might be impossible to identify metadata just by looking at it.[19]For example: by itself, a database containing several numbers, all 13 digits long could be the results of calculations or a list of numbers to plug into anequation –without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified asISBNs–information that refers to the book, but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts" where it is clear that he uses the term in the ISO 11179 "traditional" sense, which is "structural metadata" i.e. "data about the containers of data"; rather than the alternative sense "content about individual instances of data content" or metacontent, the type of data usually found in library catalogs.[20][21]Since then the fields of information management, information science, information technology, librarianship, andGIShave widely adopted the term. In these fields, the wordmetadatais defined as "data about data".[22]While this is the generally accepted definition, various disciplines have adopted their own more specific explanations and uses of the term.
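The ISBN illustration above can be checked mechanically: the thirteen digits of an ISBN-13, weighted alternately 1 and 3, must sum to a multiple of 10. A short sketch of that check (the sample values are for illustration):

```python
def is_valid_isbn13(digits: str) -> bool:
    """ISBN-13 check: digits weighted 1, 3, 1, 3, ... must sum to a multiple of 10."""
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

print(is_valid_isbn13("9780306406157"))  # True  (valid check digit)
print(is_valid_isbn13("9780306406158"))  # False (check digit altered)
```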
Slatereported in 2013 that the United States government's interpretation of "metadata" could be broad, and might include message content such as the subject lines of emails.[23]
While the metadata application is manifold, covering a large variety of fields, there are specialized and well-accepted models to specify types of metadata.Bretherton& Singley (1994) distinguish between two distinct classes: structural/control metadata and guide metadata.[24]Structural metadatadescribes the structure of database objects such as tables, columns, keys and indexes.Guide metadatahelps humans find specific items and is usually expressed as a set of keywords in a natural language. According toRalph Kimball, metadata can be divided into three categories:technical metadata(or internal metadata),business metadata(or external metadata), andprocess metadata.
NISOdistinguishes three types of metadata: descriptive, structural, and administrative.[22]Descriptive metadatais typically used for discovery and identification, as information to search and locate an object, such as title, authors, subjects, keywords, and publisher.Structural metadatadescribes how the components of an object are organized. An example of structural metadata would be how pages are ordered to form chapters of a book. Finally,administrative metadatagives information to help manage the source. Administrative metadata refers to the technical information, such as file type, or when and how the file was created. Two sub-types of administrative metadata are rights management metadata and preservation metadata.Rights management metadataexplainsintellectual property rights, whilepreservation metadatacontains information to preserve and save a resource.[8]
Statistical data repositories have their own requirements for metadata in order to describe not only the source and quality of the data[6]but also what statistical processes were used to create the data, which is of particular importance to the statistical community in order to both validate and improve the process of statistical data production.[7]
An additional type of metadata beginning to be more developed isaccessibility metadata. Accessibility metadata is not a new concept to libraries; however, advances in universal design have raised its profile.[25]: 213–214Projects like Cloud4All and GPII identified the lack of common terminologies and models to describe the needs and preferences of users and information that fits those needs as a major gap in providing universal access solutions.[25]: 210–211Those types of information are accessibility metadata.[25]: 214Schema.orghas incorporated several accessibility properties based on IMS Global Access for All Information Model Data Element Specification.[25]: 214The Wiki pageWebSchemas/Accessibilitylists several properties and their values. While the efforts to describe and standardize the varied accessibility needs of information seekers are beginning to become more robust, their adoption into established metadata schemas has not been as developed. For example, while Dublin Core (DC)'s "audience" and MARC 21's "reading level" could be used to identify resources suitable for users with dyslexia and DC's "format" could be used to identify resources available in braille, audio, or large print formats, there is more work to be done.[25]: 214
Metadata (metacontent) or, more correctly, the vocabularies used to assemble metadata (metacontent) statements, is typically structured according to a standardized concept using a well-defined metadata scheme, includingmetadata standardsandmetadata models. Tools such ascontrolled vocabularies,taxonomies,thesauri,data dictionaries, andmetadata registriescan be used to apply further standardization to the metadata. Structural metadata commonality is also of paramount importance indata modeldevelopment and indatabase design.
Metadata (metacontent) syntax refers to the rules created to structure the fields or elements of metadata (metacontent).[26]A single metadata scheme may be expressed in a number of different markup or programming languages, each of which requires a different syntax. For example, Dublin Core may be expressed in plain text,HTML,XML, andRDF.[27]
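To illustrate how one scheme can be expressed in several syntaxes, the sketch below builds a single Dublin Core description and serializes it as both Turtle and RDF/XML. It assumes the third-party rdflib package (version 6 or later, where serialize() returns a string); the resource URI and field values are invented.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
doc = URIRef("http://example.org/docs/annual-report")  # invented resource

g.add((doc, DC.title, Literal("Annual Report 2023")))
g.add((doc, DC.creator, Literal("Example Organisation")))
g.add((doc, DC.language, Literal("en")))

# The same three statements, rendered in two different syntaxes.
print(g.serialize(format="turtle"))
print(g.serialize(format="xml"))
```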
A common example of (guide) metacontent is the bibliographic classification, the subject, theDewey Decimal class number. There is always an implied statement in any "classification" of some object. To classify an object as, for example, Dewey class number 514 (Topology) (i.e. books having the number 514 on their spine) the implied statement is: "<book><subject heading><514>". This is a subject-predicate-object triple, or more importantly, a class-attribute-value triple. The first 2 elements of the triple (class, attribute) are pieces of some structural metadata having a defined semantic. The third element is a value, preferably from some controlled vocabulary, some reference (master) data. The combination of the metadata and master data elements results in a statement which is a metacontent statement i.e. "metacontent = metadata + master data". All of these elements can be thought of as "vocabulary". Both metadata and master data are vocabularies that can be assembled into metacontent statements. There are many sources of these vocabularies, both meta and master data: UML, EDIFACT, XSD, Dewey/UDC/LoC, SKOS, ISO-25964, Pantone, Linnaean Binomial Nomenclature, etc. Using controlled vocabularies for the components of metacontent statements, whether for indexing or finding, is endorsed byISO 25964: "If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved."[28]This is particularly relevant when considering search engines of the internet, such as Google. The process indexes pages and then matches text strings using its complex algorithm; there is no intelligence or "inferencing" occurring, just the illusion thereof.
Metadata schemata can be hierarchical in nature where relationships exist between metadata elements and elements are nested so that parent-child relationships exist between the elements.
An example of a hierarchical metadata schema is theIEEE LOMschema, in which metadata elements may belong to a parent metadata element.
Metadata schemata can also be one-dimensional, or linear, where each element is completely discrete from other elements and classified according to one dimension only.
An example of a linear metadata schema is theDublin Coreschema, which is one-dimensional.
Metadata schemata are often 2 dimensional, or planar, where each element is completely discrete from other elements but classified according to 2 orthogonal dimensions.[29]
The degree to which the data or metadata is structured is referred to as"granularity", which indicates how much detail is provided. Metadata with a high granularity allows for deeper, more detailed, and more structured information and enables a greater level of technical manipulation. A lower level of granularity means that metadata can be created for considerably lower costs but will not provide as detailed information. The major impact of granularity is not only on creation and capture, but moreover on maintenance costs. As soon as the metadata structures become outdated, access to the referred data becomes outdated as well. Hence granularity must take into account the effort to create the metadata as well as the effort to maintain it.
In all cases where the metadata schemata exceed the planar depiction, some type of hypermapping is required to enable display and view of metadata according to chosen aspect and to serve special views. Hypermapping frequently applies to layering of geographical and geological information overlays.[30]
International standards apply to metadata. Much work is being accomplished in the national and international standards communities, especiallyANSI(American National Standards Institute) andISO(International Organization for Standardization) to reach a consensus on standardizing metadata and registries. The core metadata registry standard isISO/IEC11179 Metadata Registries (MDR), the framework for the standard is described in ISO/IEC 11179-1:2004.[31]A new edition of Part 1 is in its final stage for publication in 2015 or early 2016. It has been revised to align with the current edition of Part 3, ISO/IEC 11179-3:2013[32]which extends the MDR to support the registration of Concept Systems.
(seeISO/IEC 11179). This standard specifies a schema for recording both the meaning and technical structure of the data for unambiguous usage by humans and computers. ISO/IEC 11179 standard refers to metadata as information objects about data, or "data about data". In ISO/IEC 11179 Part-3, the information objects are data about Data Elements, Value Domains, and other reusable semantic and representational information objects that describe the meaning and technical details of a data item. This standard also prescribes the details for a metadata registry, and for registering and administering the information objects within a Metadata Registry. ISO/IEC 11179 Part 3 also has provisions for describing compound structures that are derivations of other data elements, for example through calculations, collections of one or more data elements, or other forms of derived data. While this standard describes itself originally as a "data element" registry, its purpose is to support describing and registering metadata content independently of any particular application, lending the descriptions to being discovered and reused by humans or computers in developing new applications, databases, or for analysis of data collected in accordance with the registered metadata content. This standard has become the general basis for other kinds of metadata registries, reusing and extending the registration and administration portion of the standard.
The Geospatial community has a tradition of specializedgeospatial metadatastandards, particularly building on traditions of map- and image-libraries and catalogs. Formal metadata is usually essential for geospatial data, as common text-processing approaches are not applicable.
TheDublin Coremetadata terms are a set of vocabulary terms that can be used to describe resources for the purposes of discovery. The original set of 15 classic[33]metadata terms, known as the Dublin Core Metadata Element Set,[34]is endorsed in the following standards documents:
The W3C Data Catalog Vocabulary (DCAT)[38]is an RDF vocabulary that supplements Dublin Core with classes for Dataset, Data Service, Catalog, and Catalog Record. DCAT also uses elements from FOAF, PROV-O, and OWL-Time. DCAT provides an RDF model to support the typical structure of a catalog that contains records, each describing a dataset or service.
Although not a standard,Microformat(also mentioned in the sectionmetadata on the internetbelow) is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata. Microformat follows XHTML and HTML standards but is not a standard in itself. One advocate of microformats,Tantek Çelik, characterized a problem with alternative approaches:
Here's a new language we want you to learn, and now you need to output these additional files on your server. It's a hassle. (Microformats) lower the barrier to entry.[39]
Most common types ofcomputer filescan embed metadata, including documents (e.g.Microsoft Officefiles,OpenDocumentfiles,PDF), images (e.g.JPEG,PNG), video files (e.g.AVI,MP4), and audio files (e.g.WAV,MP3).
Metadata may be added to files by users, but some metadata is often automatically added to files by authoring applications or by devices used to produce the files, without user intervention.
While metadata in files are useful for finding them, they can be aprivacyhazard when the files are shared. Usingmetadata removal toolsto clean files before sharing them can mitigate this risk.
Metadata may be written into adigital photofile that will identify who owns it, copyright and contact information, what brand or model of camera created the file, along with exposure information (shutter speed, f-stop, etc.) and descriptive information, such as keywords about the photo, making the file or image searchable on a computer and/or the Internet. Some metadata is created by the camera, such as color space, color channels, exposure time, and aperture (EXIF), while some is input by the photographer and/or software after downloading to a computer.[40]Most digital cameras write metadata about the model number, shutter speed, etc., and some allow this metadata to be edited;[41]this functionality has been available on most Nikon DSLRs since theNikon D3, on most new Canon cameras since theCanon EOS 7D, and on most Pentax DSLRs since the Pentax K-3. Metadata can be used to make organizing in post-production easier with the use of key-wording. Filters can be used to analyze a specific set of photographs and create selections on criteria like rating or capture time. On devices with geolocation capabilities likeGPS(smartphones in particular), the location the photo was taken from may also be included.
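A minimal sketch of reading such embedded photo metadata, assuming the third-party Pillow library and a local file named photo.jpg (a placeholder name):

```python
from PIL import ExifTags, Image

with Image.open("photo.jpg") as img:   # placeholder filename
    exif = img.getexif()

for tag_id, value in exif.items():
    # Map numeric EXIF tag IDs to readable names where known.
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```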
Photographic Metadata Standards are governed by organizations that develop the following standards. They include, but are not limited to:
Metadata is particularly useful in video, where information about its contents (such as transcripts of conversations and text descriptions of its scenes) is not directly understandable by a computer, but where an efficient search of the content is desirable. This is particularly useful in video applications such asAutomatic Number Plate Recognitionand Vehicle Recognition Identification software, wherein license plate data is saved and used to create reports and alerts.[43]There are 2 sources in which video metadata is derived: (1) operational gathered metadata, that is information about the content produced, such as the type of equipment, software, date, and location; (2) human-authored metadata, to improve search engine visibility, discoverability, audience engagement, and providing advertising opportunities to video publishers.[44]Avid's MetaSync and Adobe's Bridge are examples of professional video editing software with access to metadata.[45]
Information on the times, origins and destinations of phone calls, electronic messages, instant messages, and other modes of telecommunication, as opposed to message content, is another form of metadata. Bulk collection of thiscall detail recordmetadata by intelligence agencies has proven controversial after disclosures byEdward Snowdenof the fact that certain Intelligence agencies such as theNSAhad been (and perhaps still are) keeping online metadata on millions of internet users for up to a year, regardless of whether or not they [ever] were persons of interest to the agency.
Geospatial metadata relates to Geographic Information Systems (GIS) files, maps, images, and other data that is location-based. Metadata is used in GIS to document the characteristics and attributes of geographic data, such as database files and data that is developed within a GIS. It includes details like who developed the data, when it was collected, how it was processed, and what formats it's available in, and then delivers the context for the data to be used effectively.[46]
Metadata can be created either by automated information processing or by manual work. Elementary metadata captured by computers can include information about when an object was created, who created it, when it was last updated, file size, and file extension. In this context, an object refers to any of the following:
A metadata engine collects, stores, and analyzes information about data and metadata in use within a domain.[47]
Data virtualization emerged in the 2000s as a new software technology to complete the virtualization "stack" in the enterprise. Metadata is used in data virtualization servers, which are enterprise infrastructure components alongside database and application servers. Metadata in these servers is saved in a persistent repository and describes business objects in various enterprise systems and applications. Structural metadata commonality is also important to support data virtualization.
Standardization and harmonization work has brought advantages to industry efforts to build metadata systems in the statistical community.[48][49] Several metadata guidelines and standards such as the European Statistics Code of Practice[50] and ISO 17369:2013 (Statistical Data and Metadata Exchange, or SDMX)[48] provide key principles for how businesses, government bodies, and other entities should manage statistical data and metadata. Entities such as Eurostat,[51] the European System of Central Banks,[51] and the U.S. Environmental Protection Agency[52] have implemented these and other such standards and guidelines with the goal of improving "efficiency when managing statistical business processes".[51]
Metadata has been used in various ways as a means of cataloging items in libraries in both digital and analog formats. Such data helps classify, aggregate, identify, and locate a particular book, DVD, magazine, or any object a library might hold in its collection.[53] Until the 1980s, many library catalogs used 3x5 inch cards in file drawers to display a book's title, author, subject matter, and an abbreviated alphanumeric string (call number) which indicated the physical location of the book within the library's shelves. The Dewey Decimal System employed by libraries for the classification of library materials by subject is an early example of metadata usage. Each card in the early paper catalog held information about the item it described: title, author, subject, and a call number indicating where to find the item.[54] Beginning in the 1980s and 1990s, many libraries replaced these paper file cards with computer databases, which make it much easier and faster for users to do keyword searches. Another form of older metadata collection is the US Census Bureau's "Long Form", which asks questions that are used to create demographic data to find patterns of distribution.[55]

Libraries employ metadata in library catalogues, most commonly as part of an integrated library management system (ILMS). Metadata is obtained by cataloging resources such as books, periodicals, DVDs, web pages, or digital images. This data is stored in the ILMS using the MARC metadata standard. The purpose is to direct patrons to the physical or electronic location of the items or areas they seek, as well as to provide a description of the item(s) in question.
More recent and specialized instances of library metadata include the establishment of digital libraries, including e-print repositories and digital image libraries. While often based on library principles, the focus on non-librarian use, especially in providing metadata, means they do not follow traditional or common cataloging approaches. Given the custom nature of the included materials, metadata fields are often specially created, e.g. taxonomic classification fields, location fields, keywords, or copyright statements. Standard file information such as file size and format is usually automatically included.[56] Library operation has for decades been a key topic in efforts toward international standardization. Standards for metadata in digital libraries include Dublin Core, METS, MODS, DDI, DOI, URN, the PREMIS schema, EML, and OAI-PMH. Leading libraries in the world give hints on their metadata standards strategies.[57][58] The use and creation of metadata in library and information science also include scientific publications:
Metadata for scientific publications is often created by journal publishers and citation databases such as PubMed and Web of Science. The data contained within manuscripts or accompanying them as supplementary material is less often subject to metadata creation,[59][60] though they may be submitted to e.g. biomedical databases after publication. The original authors and database curators then become responsible for metadata creation, with the assistance of automated processes. Comprehensive metadata for all experimental data is the foundation of the FAIR Guiding Principles, or the standards for ensuring research data are findable, accessible, interoperable, and reusable.[61]
Such metadata can then be utilized, complemented, and made accessible in useful ways.OpenAlexis a free online index of over 200 million scientific documents that integrates and provides metadata such as sources,citations,author information,scientific fields, and research topics. ItsAPIand open source website can be used for metascience,scientometrics, and novel tools that query thissemanticweb ofpapers.[62][63][64]Another project under development,Scholia, uses the metadata of scientific publications for various visualizations and aggregation features such as providing a simple user interface summarizing literature about a specific feature of the SARS-CoV-2 virus usingWikidata's "main subject" property.[65]
In research labor, transparent metadata about authors' contributions to works has been proposed – e.g. the role played in the production of the paper, the level of contribution, and the responsibilities.[66][67]
Moreover, various metadata about scientific outputs can be created or complemented – for instance, some organizations attempt to track and link citations of papers as 'Supporting', 'Mentioning', or 'Contrasting' the study.[68] Other examples include developments of alternative metrics[69] – which, beyond providing help for assessment and findability, also aggregate many of the public discussions about a scientific paper on social media such as Reddit, citations on Wikipedia, and reports about the study in the news media[70] – and a call for showing whether or not the original findings are confirmed or could be reproduced.[71][72]
Metadata in a museum context is the information that trained cultural documentation specialists, such as archivists, librarians, museum registrars, and curators, create to index, structure, describe, identify, or otherwise specify works of art, architecture, cultural objects, and their images.[73][74][75] Descriptive metadata is most commonly used in museum contexts for object identification and resource recovery purposes.[74]
Metadata is developed and applied within collecting institutions and museums in order to:
Many museums and cultural heritage centers recognize that given the diversity of artworks and cultural objects, no single model or standard suffices to describe and catalog cultural works.[73][74][75]For example, a sculpted Indigenous artifact could be classified as an artwork, an archaeological artifact, or an Indigenous heritage item. The early stages of standardization in archiving, description and cataloging within the museum community began in the late 1990s with the development of standards such asCategories for the Description of Works of Art(CDWA), Spectrum,CIDOC Conceptual Reference Model(CRM), Cataloging Cultural Objects (CCO) and the CDWA Lite XML schema.[74]These standards useHTMLandXMLmarkup languages for machine processing, publication and implementation.[74]TheAnglo-American Cataloguing Rules(AACR), originally developed for characterizing books, have also been applied to cultural objects, works of art and architecture.[75]Standards, such as the CCO, are integrated within a Museum'sCollections Management System(CMS), a database through which museums are able to manage their collections, acquisitions, loans and conservation.[75]Scholars and professionals in the field note that the "quickly evolving landscape of standards and technologies" creates challenges for cultural documentarians, specifically non-technically trained professionals.[76][page needed]Most collecting institutions and museums use arelational databaseto categorize cultural works and their images.[75]Relational databases and metadata work to document and describe the complex relationships amongst cultural objects and multi-faceted works of art, as well as between objects and places, people, and artistic movements.[74][75]Relational database structures are also beneficial within collecting institutions and museums because they allow for archivists to make a clear distinction between cultural objects and their images; an unclear distinction could lead to confusing and inaccurate searches.[75]
An object's materiality, function, and purpose, as well as the size (e.g., measurements, such as height, width, weight), storage requirements (e.g., climate-controlled environment), and focus of the museum and collection, influence the descriptive depth of the data attributed to the object by cultural documentarians.[75]The established institutional cataloging practices, goals, and expertise of cultural documentarians and database structure also influence the information ascribed to cultural objects and the ways in which cultural objects are categorized.[73][75]Additionally, museums often employ standardized commercial collection management software that prescribes and limits the ways in which archivists can describe artworks and cultural objects.[76]As well, collecting institutions and museums useControlled Vocabulariesto describe cultural objects and artworks in their collections.[74][75]Getty Vocabularies and the Library of Congress Controlled Vocabularies are reputable within the museum community and are recommended by CCO standards.[75]Museums are encouraged to use controlled vocabularies that are contextual and relevant to their collections and enhance the functionality of their digital information systems.[74][75]Controlled Vocabularies are beneficial within databases because they provide a high level of consistency, improving resource retrieval.[74][75]Metadata structures, including controlled vocabularies, reflect theontologiesof the systems from which they were created. Often the processes through which cultural objects are described and categorized through metadata in museums do not reflect the perspectives of the maker communities.[73][77]
Metadata has been instrumental in the creation of digital information systems and archives within museums and has made it easier for museums to publish digital content online. This has enabled audiences who might not have had access to cultural objects due to geographic or economic barriers to have access to them.[74]In the 2000s, as more museums have adopted archival standards and created intricate databases, discussions aboutLinked Databetween museum databases have come up in the museum, archival, and library science communities.[76]Collection Management Systems (CMS) andDigital Asset Managementtools can be local or shared systems.[75]Digital Humanitiesscholars note many benefits of interoperability between museum databases and collections, while also acknowledging the difficulties of achieving such interoperability.[76]
Problems involving metadata in litigation in the United States are becoming widespread.[when?] Courts have looked at various questions involving metadata, including the discoverability of metadata by parties. The Federal Rules of Civil Procedure have specific rules for the discovery of electronically stored information, and subsequent case law applying those rules has elucidated the litigant's duty to produce metadata when litigating in federal court.[78] In October 2009, the Arizona Supreme Court ruled that metadata records are public record.[79] Document metadata has proven particularly important in legal environments in which litigation has requested metadata that can include sensitive information detrimental to a certain party in court. Using metadata removal tools to "clean" or redact documents can mitigate the risks of unwittingly sending sensitive data. This process partially (see data remanence) protects law firms from the potentially damaging leaking of sensitive data through electronic discovery.
Opinion polls have shown that 45% of Americans are "not at all confident" in the ability of social media sites to ensure their personal data is secure and 40% say that social media sites should not be able to store any information on individuals. 76% of Americans say that they are not confident that the information advertising agencies collect on them is secure and 50% say that online advertising agencies should not be allowed to record any of their information at all.[80]
In Australia, the need to strengthen national security has resulted in the introduction of a new metadata storage law.[81] Under this law, both security and policing agencies are allowed to access up to two years of an individual's metadata, with the aim of making it easier to stop terrorist attacks and serious crimes.
Legislative metadata has been the subject of some discussion in law.gov forums, such as workshops held by the Legal Information Institute at the Cornell Law School on 22 and 23 March 2010. The documentation for these forums is titled "Suggested metadata practices for legislation and regulations".[82]
A handful of key points have been outlined by these discussions, section headings of which are listed as follows:
Australian medical research pioneered the definition of metadata for applications in health care. That approach offers the first recognized attempt to adhere to international standards in medical sciences instead of defining a proprietary standard under the World Health Organization (WHO) umbrella. However, the medical community has not yet endorsed the need to follow metadata standards, despite research that supported these standards.[83]
Research studies in the fields of biomedicine and molecular biology frequently yield large quantities of data, including results of genome or meta-genome sequencing, proteomics data, and even notes or plans created during the course of research itself.[84] Each data type involves its own variety of metadata and the processes necessary to produce these metadata. General metadata standards, such as ISA-Tab,[85] allow researchers to create and exchange experimental metadata in consistent formats. Specific experimental approaches frequently have their own metadata standards and systems: metadata standards for mass spectrometry include mzML[86] and SPLASH,[87] while XML-based standards such as PDBML[88] and SRA XML[89] serve as standards for macromolecular structure and sequencing data, respectively.
The products of biomedical research are generally realized as peer-reviewed manuscripts, and these publications are yet another source of data (see the science publications discussion above).
A data warehouse (DW) is a repository of an organization's electronically stored data. Data warehouses are designed to manage and store the data, whereas business intelligence (BI) systems are designed to use that data to create reports and analyze information, providing strategic guidance to management.[90] Metadata is an important tool in how data is stored in data warehouses. The purpose of a data warehouse is to house standardized, structured, consistent, integrated, correct, "cleaned", and timely data, extracted from various operational systems in an organization. The extracted data are integrated in the data warehouse environment to provide an enterprise-wide perspective, and data are structured in a way that serves the reporting and analytic requirements. The design of structural metadata commonality using a data modeling method such as entity-relationship model diagramming is important in any data warehouse development effort; such models detail the metadata on each piece of data in the data warehouse. An essential component of a data warehouse/business intelligence system is the metadata and the tools to manage and retrieve it. Ralph Kimball[91] describes metadata as the DNA of the data warehouse, as metadata defines the elements of the data warehouse and how they work together.
Kimball et al.[92] refer to three main categories of metadata: technical metadata, business metadata, and process metadata. Technical metadata is primarily definitional, while business metadata and process metadata are primarily descriptive. The categories sometimes overlap.
The HTML format used to define web pages allows for the inclusion of a variety of types of metadata, from basic descriptive text, dates, and keywords to more advanced metadata schemes such as the Dublin Core, e-GMS, and AGLS[93] standards. Pages and files can also be geotagged with coordinates, categorized, or tagged, including collaboratively such as with folksonomies.
When media has identifiers set, or when such can be generated, information such as file tags and descriptions can be pulled or scraped from the Internet – for example, about movies.[94] Various online databases are aggregated and provide metadata for various data. The collaboratively built Wikidata has identifiers not just for media but also for abstract concepts, various objects, and other entities that can be looked up by humans and machines to retrieve useful information and to link knowledge in other knowledge bases and databases.[65]
Metadata may be included in the page's header or in a separate file. Microformats allow metadata to be added to on-page data in a way that regular web users do not see, but that computers, web crawlers, and search engines can readily access. Many search engines are cautious about using metadata in their ranking algorithms because of the exploitation of metadata and the practice of search engine optimization (SEO) to improve rankings. See the Meta element article for further discussion. This cautious attitude may be justified, as people, according to Doctorow,[95] do not exercise care and diligence when creating their own metadata, and metadata is part of a competitive environment in which it is used to promote the metadata creators' own purposes. Studies show that search engines respond to web pages with metadata implementations,[96] and Google has an announcement on its site showing the meta tags that its search engine understands.[97] Enterprise search startup Swiftype recognizes metadata as a relevance signal that webmasters can implement for their website-specific search engine, and has even released its own extension, known as Meta Tags 2.[98]
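To make the mechanics concrete, here is a minimal sketch of extracting meta tags from a page's header using only Python's standard library; the example HTML snippet is invented for illustration:

```python
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    """Collect (name, content) pairs from <meta> elements."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            key = attrs.get("name") or attrs.get("property")
            if key and "content" in attrs:
                self.meta[key] = attrs["content"]

html = """<html><head>
<meta name="description" content="An example page about metadata.">
<meta name="keywords" content="metadata, HTML, Dublin Core">
</head><body>...</body></html>"""

parser = MetaTagParser()
parser.feed(html)
print(parser.meta)   # {'description': '...', 'keywords': '...'}
```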
In thebroadcastindustry, metadata is linked to audio and videobroadcast mediato:
This metadata can be linked to the video media thanks to video servers. Most major broadcast sporting events, such as the FIFA World Cup or the Olympic Games, use this metadata to distribute their video content to TV stations through keywords. It is often the host broadcaster[99] who is in charge of organizing metadata through its International Broadcast Centre and its video servers. This metadata is recorded with the images and entered by metadata operators (loggers), who associate live metadata with the images in metadata grids through software (such as Multicam (LSM) or IPDirector, used during the FIFA World Cup or Olympic Games).[100][101]
Metadata that describes geographic objects in electronic storage or format (such as datasets, maps, features, or documents with a geospatial component) has a history dating back to at least 1994. This class of metadata is described more fully on thegeospatial metadataarticle.
Ecological and environmental metadata is intended to document the "who, what, when, where, why, and how" of data collection for a particular study. This typically means which organization or institution collected the data, what type of data, which date(s) the data was collected, the rationale for the data collection, and the methodology used for the data collection. Metadata should be generated in a format commonly used by the most relevant science community, such as Darwin Core, Ecological Metadata Language,[102] or Dublin Core. Metadata editing tools exist to facilitate metadata generation (e.g. Metavist,[103] Mercury, Morpho[104]). Metadata should describe the provenance of the data (where they originated, as well as any transformations the data underwent) and how to give credit for (cite) the data products.
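As an illustration of one of the formats mentioned above, the following sketch builds a minimal Dublin Core style record for a hypothetical dataset using Python's standard library; the element choice and values are invented for the example:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"   # Dublin Core namespace
ET.register_namespace("dc", DC)

record = ET.Element("metadata")
fields = {
    "title":       "Stream temperature observations, 2021",
    "creator":     "Example Ecology Lab",
    "date":        "2021-07-15",
    "description": "Hourly stream temperature measured at three sites.",
    "rights":      "CC BY 4.0",
}
for name, value in fields.items():
    elem = ET.SubElement(record, f"{{{DC}}}{name}")
    elem.text = value

# Serialize the record so it can be stored alongside the dataset.
print(ET.tostring(record, encoding="unicode"))
```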
When first released in 1982, Compact Discs only contained a Table Of Contents (TOC) with the number of tracks on the disc and their length in samples.[105][106]Fourteen years later in 1996, a revision of theCD Red Bookstandard addedCD-Textto carry additional metadata.[107]But CD-Text was not widely adopted. Shortly thereafter, it became common for personal computers to retrieve metadata from external sources (e.g.CDDB,Gracenote) based on the TOC.
Digital audio formats such as digital audio files superseded music formats such as cassette tapes and CDs in the 2000s. Digital audio files could be labeled with more information than could be contained in just the file name. That descriptive information is called the audio tag, or audio metadata in general. Computer programs specializing in adding or modifying this information are called tag editors. Metadata can be used to name, describe, catalog, and indicate ownership or copyright for a digital audio file, and its presence makes it much easier to locate a specific audio file within a group, typically through use of a search engine that accesses the metadata. As different digital audio formats were developed, attempts were made to standardize a specific location within the digital files where this information could be stored.
As a result, almost all digital audio formats, including MP3, broadcast WAV, and AIFF files, have similar standardized locations that can be populated with metadata. The metadata for compressed and uncompressed digital music is often encoded in the ID3 tag. Common editors such as TagLib support the MP3, Ogg Vorbis, FLAC, MPC, Speex, WavPack, TrueAudio, WAV, AIFF, MP4, and ASF file formats.
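A small sketch of editing such audio tags programmatically, using the third-party mutagen library (an assumption; the file name and tag values are hypothetical):

```python
from mutagen.easyid3 import EasyID3

# Open an MP3 file and read its existing ID3 tags.
audio = EasyID3("track01.mp3")
print(dict(audio))

# Update a few common fields and write them back to the file.
audio["title"] = ["Field Recording No. 1"]
audio["artist"] = ["Example Artist"]
audio["album"] = ["Example Sessions"]
audio.save()
```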
With the availability ofcloudapplications, which include those to add metadata to content, metadata is increasingly available over the Internet.
Metadata can be stored eitherinternally,[108]in the same file or structure as the data (this is also calledembedded metadata), orexternally, in a separate file or field from the described data. A data repository typically stores the metadatadetachedfrom the data but can be designed to support embedded metadata approaches. Each option has advantages and disadvantages:
Metadata can be stored in either human-readable or binary form. Storing metadata in a human-readable format such asXMLcan be useful because users can understand and edit it without specialized tools.[109]However, text-based formats are rarely optimized for storage capacity, communication time, or processing speed. A binary metadata format enables efficiency in all these respects, but requires special software to convert the binary information into human-readable content.
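The trade-off between human-readable and binary metadata storage can be illustrated with a small sketch that encodes the same fields both ways using Python's standard library; the field names and values are invented for the example:

```python
import struct
import xml.etree.ElementTree as ET

title, year, rating = "Example document", 2021, 4

# Human-readable form: easy to inspect and edit, but more verbose.
root = ET.Element("metadata")
ET.SubElement(root, "title").text = title
ET.SubElement(root, "year").text = str(year)
ET.SubElement(root, "rating").text = str(rating)
xml_bytes = ET.tostring(root)

# Binary form: compact and fast to parse, but needs special tooling.
encoded_title = title.encode("utf-8")
bin_bytes = struct.pack(f"<H{len(encoded_title)}sHB",
                        len(encoded_title), encoded_title, year, rating)

print(len(xml_bytes), "bytes as XML vs", len(bin_bytes), "bytes as binary")
```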
Each relational database system has its own mechanisms for storing metadata. Examples of relational-database metadata include tables describing all the tables in a database (their names, sizes, and numbers of rows) and tables describing the columns in each table (their names, data types, and the tables in which they are used).
In database terminology, this set of metadata is referred to as the catalog. The SQL standard specifies a uniform means to access the catalog, called the information schema, but not all databases implement it, even if they implement other aspects of the SQL standard. For an example of database-specific metadata access methods, see Oracle metadata. Programmatic access to metadata is possible using APIs such as JDBC or SchemaCrawler.[110]
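As a sketch of querying a database-specific catalog, the snippet below uses Python's built-in sqlite3 module; SQLite exposes its catalog through the sqlite_master table and PRAGMA statements rather than the standard information schema, which illustrates how access methods vary by system (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

# List the objects recorded in SQLite's catalog table.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name, "->", sql)

# Column-level metadata (name, declared type, nullability, ...) via PRAGMA.
for row in conn.execute("PRAGMA table_info(books)"):
    print(row)
```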
One of the first satirical examinations of the concept of metadata as we understand it today is American science fiction author Hal Draper's short story "MS Fnd in a Lbry" (1961). Here, the knowledge of all mankind is condensed into an object the size of a desk drawer; however, the magnitude of the metadata (e.g., catalogs of catalogs, as well as indexes and histories) eventually leads to dire yet humorous consequences for the human race. As a cautionary tale, the story prefigures the modern risk of allowing metadata to become more important than the real data it describes.
|
https://en.wikipedia.org/wiki/Metadata
|
A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in multitasking or multiprogramming environments. All operations must verifiably complete within given time and resource constraints or else fail safe. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks and make changes to the task priority.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application'stask; the variability is "jitter".[1]A "hard" real-time operating system (hard RTOS) has less jitter than a "soft" real-time operating system (soft RTOS); a late answer is a wrong answer in a hard RTOS while a late answer is acceptable in a soft RTOS. The chief design goal is not highthroughput, but rather a guarantee of asoft or hardperformance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadlinedeterministicallyit is a hard real-time OS.[2]
An RTOS has an advanced algorithm forscheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimalinterrupt latencyand minimalthread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]
An RTOS is an operating system in which the time taken to process an input stimulus is less than the time lapsed until the next input stimulus of the same type.
The most common designs are event-driven, in which the OS switches tasks only when an event of higher priority needs servicing, and time-sharing, in which the OS switches tasks on a regular clocked interrupt as well as on events.
Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.

Early CPU designs needed many cycles to switch tasks, during which the CPU could do nothing else useful. Because switching took so long, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
In typical designs, a task has three states: running (executing on the CPU), ready (able to be executed), and blocked (waiting for an event, such as I/O).
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU core. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to give up its time on the CPU to other tasks, which can cause the ready queue to build up a greater number of tasks waiting to be executed (resource starvation).
Usually, the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled. However, the choice of data structure also depends on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority, so that finding the highest-priority task to run does not require traversing the list; instead, inserting a task requires walking the list.
During this search, preemption should not be inhibited. Long critical sections should be divided into smaller pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high priority task can be inserted and run immediately before the low priority task is inserted.
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
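The contrast between a priority-sorted list and a structure better suited to long ready lists can be sketched as follows; this is an illustrative model in Python rather than real RTOS code, and the task names and priorities are invented:

```python
import heapq
from bisect import insort

# Priority-sorted list: finding the highest-priority task is O(1),
# but inserting a task walks the list (O(n)), as described above.
ready_list = []                      # kept sorted; lowest number = highest priority
insort(ready_list, (10, "logger"))
insort(ready_list, (1, "motor_control"))
insort(ready_list, (5, "network"))
highest = ready_list[0]              # (1, 'motor_control')

# Binary heap: both insertion and removal are O(log n), which scales
# better when the ready list can be arbitrarily long.
ready_heap = []
for prio, task in [(10, "logger"), (1, "motor_control"), (5, "network")]:
    heapq.heappush(ready_heap, (prio, task))
next_task = heapq.heappop(ready_heap)  # (1, 'motor_control')
print(highest, next_task)
```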
Some commonly used RTOS scheduling algorithms are:[4]
A multitasking operating system like Unix is poor at real-time tasks. The scheduler gives the highest priority to jobs with the lowest demand on the computer, so there is no way to ensure that a time-critical job will have access to enough resources. Multitasking systems must manage the sharing of data and hardware resources among multiple tasks. It is usually unsafe for two tasks to access the same specific data or hardware resource simultaneously.[5] There are three common approaches to resolve this problem, described in turn below: temporarily masking (disabling) interrupts, mutexes and semaphores, and message passing.
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could then control the CPU for as long as it wished. Some modern CPUs do not allow user-mode code to disable interrupts, as such control is considered a key operating system resource. Many embedded systems and RTOSes, however, allow the application itself to run in kernel mode for greater system call efficiency and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, an application running in kernel mode and masking interrupts is the lowest overhead method to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task hasexclusiveuse of the CPU since no other task or interrupt can take control, so thecritical sectionis protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximuminterrupt latency. Typically this method of protection is used only when the critical section is just a few instructions and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as amutexand OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
A (non-recursive) mutex is either locked or unlocked. When a task has locked the mutex, all other tasks must wait for the mutex to be unlocked by its owner, the original thread. A task may set a timeout on its wait for a mutex. There are several well-known problems with mutex-based designs, such as priority inversion and deadlocks.
In priority inversion, a high-priority task waits because a low-priority task holds a mutex, but the lower-priority task is not given CPU time to finish its work. A typical solution is to have the task that owns a mutex 'inherit' the priority of the highest waiting task. But this simple approach gets more complex when there are multiple levels of waiting: task A waits for a mutex locked by task B, which waits for a mutex locked by task C. Handling multiple levels of inheritance causes other code to run in a high-priority context and thus can cause starvation of medium-priority threads.
In a deadlock, two or more tasks lock mutexes without timeouts and then wait forever for the other task's mutex, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two mutexes, but in the opposite order. Deadlock is prevented by careful design.
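One common piece of such careful design is to acquire locks in a single global order (or to bound waits with timeouts). The sketch below, an illustrative Python model rather than RTOS code, shows the ordering rule that makes the classic two-mutex deadlock impossible:

```python
import threading

mutex_a = threading.Lock()
mutex_b = threading.Lock()

# Assign each lock a fixed rank; every task acquires locks in rank order.
LOCK_ORDER = {id(mutex_a): 1, id(mutex_b): 2}

def acquire_in_order(*locks):
    """Acquire the given locks in their global rank order."""
    for lock in sorted(locks, key=lambda m: LOCK_ORDER[id(m)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def task(name):
    # Even if one task "thinks" of the locks as (B, A), both tasks
    # actually take A before B, so no cyclic wait can form.
    acquire_in_order(mutex_a, mutex_b)
    try:
        print(f"{name} holds both mutexes")
    finally:
        release_all(mutex_a, mutex_b)

threads = [threading.Thread(target=task, args=(n,)) for n in ("task1", "task2")]
for t in threads: t.start()
for t in threads: t.join()
```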
The other approach to resource sharing is for tasks to send messages in an organizedmessage passingscheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp thansemaphoresystems, simple message-based systems avoid most protocol deadlock hazards, and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
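A minimal sketch of this message-passing pattern, again as an illustrative Python model: one manager task owns the resource, and other tasks send it request messages instead of touching the resource directly (the names and the "resource" itself are invented):

```python
import queue
import threading

requests = queue.Queue()

def resource_manager():
    """The only task allowed to touch the shared resource (here, a counter)."""
    counter = 0
    while True:
        msg, reply = requests.get()
        if msg == "stop":
            break
        elif msg == "increment":
            counter += 1
            reply.put(counter)

def client(name):
    reply = queue.Queue()
    requests.put(("increment", reply))
    print(name, "saw counter =", reply.get())

manager = threading.Thread(target=resource_manager)
manager.start()
clients = [threading.Thread(target=client, args=(f"task{i}",)) for i in range(3)]
for c in clients: c.start()
for c in clients: c.join()
requests.put(("stop", None))
manager.join()
```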
Since an interrupt handler blocks the highest priority task from running, and since real-time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
An OS maintains catalogues of objects it manages such as threads, mutexes, memory, and so on. Updates to this catalogue must be strictly controlled. For this reason, it can be problematic when an interrupt handler calls an OS function while the application is in the act of also doing so. The OS function called from an interrupt handler could find the object database to be in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture.[citation needed]
Similarly, theSystem Management Modeon x86 compatible hardware can take a lot of time before it returns control to the operating system.
Memory allocationis more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated but not freed after use). The device should work indefinitely, without ever needing a reboot.[citation needed] For this reason, dynamic memory allocation is frowned upon.[citation needed] Whenever possible, all required memory allocation is specified statically at compile time.
Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and release of small chunks of memory, a situation may occur where available memory is divided into several sections and the RTOS cannot allocate a large enough contiguous block of memory, even though there is enough free memory in total. Second, speed of allocation is important: a standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block,[6] which is unacceptable in an RTOS since memory allocation has to occur within a certain amount of time.
Swapping to disk files is likewise avoided, for the same reasons as the dynamic RAM allocation discussed above, and additionally because mechanical disks have much longer and less predictable response times.
The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.
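The idea behind a fixed-size-block allocator can be sketched as follows; this is an illustrative Python model of the bookkeeping (a real RTOS would implement it in C over a static memory region), with the pool size chosen arbitrarily:

```python
class FixedBlockPool:
    """Pre-allocated pool of equal-sized blocks managed by a free list."""

    def __init__(self, block_size: int, num_blocks: int):
        self.block_size = block_size
        self.pool = bytearray(block_size * num_blocks)   # allocated once, up front
        self.free_list = list(range(num_blocks))         # indices of free blocks

    def alloc(self):
        # O(1): just pop a free block index -- no list scanning, no fragmentation.
        if not self.free_list:
            return None          # pool exhausted; caller must handle this
        return self.free_list.pop()

    def free(self, block_index: int):
        self.free_list.append(block_index)

pool = FixedBlockPool(block_size=64, num_blocks=8)
a = pool.alloc()
b = pool.alloc()
pool.free(a)
print("allocated block", b, "-", len(pool.free_list), "blocks free")
```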
|
https://en.wikipedia.org/wiki/Real-time_operating_system
|
In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, including enabling the user to calculate expectations and covariances by differentiation, based on some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural sets of distributions to consider. The term exponential class is sometimes used in place of "exponential family",[1] as is the older term Koopman–Darmois family.
Sometimes loosely referred to as the exponential family, this class of distributions is distinct because they all possess a variety of desirable properties, most importantly the existence of a sufficient statistic.
The concept of exponential families is credited to[2]E. J. G. Pitman,[3]G. Darmois,[4]andB. O. Koopman[5]in 1935–1936. Exponential families of distributions provide a general framework for selecting a possible alternative parameterisation of aparametric familyof distributions, in terms of natural parameters, and for defining usefulsample statistics, called the natural sufficient statistics of the family.
The terms "distribution" and "family" are often used loosely: Specifically,anexponential family is asetof distributions, where the specific distribution varies with the parameter;[a]however, a parametricfamilyof distributions is often referred to as "adistribution" (like "the normal distribution", meaning "the family of normal distributions"), and the set of all exponential families is sometimes loosely referred to as "the" exponential family.
Most of the commonly used distributions form an exponential family or a subset of an exponential family, listed in the subsection below. The subsections following it are a sequence of increasingly general mathematical definitions of an exponential family. A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family of discrete or continuous probability distributions.
Exponential families include many of the most common distributions. Among many others, exponential families include the following:[6]
A number of common distributions are exponential families, but only when certain parameters are fixed and known. For example:
Note that in each case, the parameters which must be fixed are those that set a limit on the range of values that can possibly be observed.
Examples of common distributions that are not exponential families are Student's t, most mixture distributions, and even the family of uniform distributions when the bounds are not fixed. See the section below on examples for more discussion.
The value of θ is called the parameter of the family.
A single-parameter exponential family is a set of probability distributions whoseprobability density function(orprobability mass function, for the case of adiscrete distribution) can be expressed in the form
f_X(x \mid \theta) = h(x)\,\exp\!\left[\eta(\theta)\cdot T(x) - A(\theta)\right]

where T(x), h(x), η(θ), and A(θ) are known functions. The function h(x) must be non-negative.
An alternative, equivalent form often given is
f_X(x \mid \theta) = h(x)\,g(\theta)\,\exp\!\left[\eta(\theta)\cdot T(x)\right]

or equivalently

f_X(x \mid \theta) = \exp\!\left[\eta(\theta)\cdot T(x) - A(\theta) + B(x)\right].

In terms of log probability, \log f_X(x \mid \theta) = \eta(\theta)\cdot T(x) - A(\theta) + B(x).

Note that g(θ) = e^{−A(θ)} and h(x) = e^{B(x)}.

Importantly, the support of f_X(x | θ) (all the possible x values for which f_X(x | θ) is greater than 0) is required to not depend on θ.[7] This requirement can be used to exclude a parametric family distribution from being an exponential family.

For example: the Pareto distribution has a pdf which is defined for x ≥ x_m (the minimum value x_m being the scale parameter), and its support therefore has a lower limit of x_m. Since the support of f_{α,x_m}(x) depends on the value of the parameter, the family of Pareto distributions does not form an exponential family of distributions (at least when x_m is unknown).

Another example: Bernoulli-type distributions – binomial, negative binomial, geometric, and similar – can only be included in the exponential class if the number of Bernoulli trials, n, is treated as a fixed constant – excluded from the free parameter(s) θ – since the allowed number of trials sets the limits for the number of "successes" or "failures" that can be observed in a set of trials.
Often x is a vector of measurements, in which case T(x) may be a function from the space of possible values of x to the real numbers.

More generally, η(θ) and T(x) can each be vector-valued such that η(θ)·T(x) is real-valued. However, see the discussion below on vector parameters, regarding the curved exponential family.

If η(θ) = θ, then the exponential family is said to be in canonical form. By defining a transformed parameter η = η(θ), it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, since η(θ) can be multiplied by any nonzero constant, provided that T(x) is multiplied by that constant's reciprocal, or a constant c can be added to η(θ) and h(x) multiplied by exp[−c·T(x)] to offset it. In the special case that η(θ) = θ and T(x) = x, the family is called a natural exponential family.

Even when x is a scalar, and there is only a single parameter, the functions η(θ) and T(x) can still be vectors, as described below.

The function A(θ), or equivalently g(θ), is automatically determined once the other functions have been chosen, since it must assume a form that causes the distribution to be normalized (sum or integrate to one over the entire domain). Furthermore, both of these functions can always be written as functions of η, even when η(θ) is not a one-to-one function, i.e. two or more different values of θ map to the same value of η(θ), and hence η(θ) cannot be inverted. In such a case, all values of θ mapping to the same η(θ) will also have the same value for A(θ) and g(θ).
What is important to note, and what characterizes all exponential family variants, is that the parameter(s) and the observation variable(s) must factorize (can be separated into products each of which involves only one type of variable), either directly or within either part (the base or exponent) of an exponentiation operation. Generally, this means that all of the factors constituting the density or mass function must be of one of the following forms:
f(x),\quad c^{f(x)},\quad [f(x)]^{c},\quad [f(x)]^{g(\theta)},\quad [f(x)]^{h(x)g(\theta)},\quad g(\theta),\quad c^{g(\theta)},\quad [g(\theta)]^{c},\quad [g(\theta)]^{f(x)},\quad \text{or}\quad [g(\theta)]^{h(x)j(\theta)},

where f and h are arbitrary functions of x, the observed statistical variable; g and j are arbitrary functions of θ, the fixed parameters defining the shape of the distribution; and c is any arbitrary constant expression (i.e. a number or an expression that does not change with either x or θ).
There are further restrictions on how many such factors can occur. For example, the two expressions:
[f(x)\,g(\theta)]^{h(x)j(\theta)}, \qquad [f(x)]^{h(x)j(\theta)}\,[g(\theta)]^{h(x)j(\theta)},

are the same, i.e. a product of two "allowed" factors. However, when rewritten into the factorized form,

[f(x)\,g(\theta)]^{h(x)j(\theta)} = [f(x)]^{h(x)j(\theta)}\,[g(\theta)]^{h(x)j(\theta)} = \exp\!\left\{[h(x)\log f(x)]\,j(\theta) + h(x)\,[j(\theta)\log g(\theta)]\right\},

it can be seen that it cannot be expressed in the required form. (However, a form of this sort is a member of a curved exponential family, which allows multiple factorized terms in the exponent.[citation needed])
To see why an expression of the form

[f(x)]^{g(\theta)}

qualifies, note that

[f(x)]^{g(\theta)} = e^{g(\theta)\log f(x)}

and hence factorizes inside the exponent. Similarly,

[f(x)]^{h(x)g(\theta)} = e^{h(x)g(\theta)\log f(x)} = e^{[h(x)\log f(x)]\,g(\theta)}

and again factorizes inside the exponent.

A factor consisting of a sum where both types of variables are involved (e.g. a factor of the form 1 + f(x)g(θ)) cannot be factorized in this fashion (except in some cases where it occurs directly in an exponent); this is why, for example, the Cauchy distribution and Student's t distribution are not exponential families.
The definition in terms of one real-number parameter can be extended to one real-vector parameter

\boldsymbol{\theta} \equiv [\theta_1 \ \ \theta_2 \ \ \cdots \ \ \theta_s]^{\mathsf{T}}.
A family of distributions is said to belong to a vector exponential family if the probability density function (or probability mass function, for discrete distributions) can be written as
f_X(x \mid \boldsymbol{\theta}) = h(x)\,\exp\!\left(\sum_{i=1}^{s} \eta_i(\boldsymbol{\theta})\,T_i(x) - A(\boldsymbol{\theta})\right),

or in a more compact form,

f_X(x \mid \boldsymbol{\theta}) = h(x)\,\exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(x) - A(\boldsymbol{\theta})\right]

This form writes the sum as a dot product of the vector-valued functions η(θ) and T(x).

An alternative, equivalent form often seen is

f_X(x \mid \boldsymbol{\theta}) = h(x)\,g(\boldsymbol{\theta})\,\exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(x)\right]

As in the scalar-valued case, the exponential family is said to be in canonical form if

\eta_i(\boldsymbol{\theta}) = \theta_i \quad \forall i.
A vector exponential family is said to be curved if the dimension of

\boldsymbol{\theta} \equiv [\theta_1 \ \ \theta_2 \ \ \cdots \ \ \theta_d]^{\mathsf{T}}

is less than the dimension of the vector

\boldsymbol{\eta}(\boldsymbol{\theta}) \equiv [\eta_1(\boldsymbol{\theta}) \ \ \eta_2(\boldsymbol{\theta}) \ \ \cdots \ \ \eta_s(\boldsymbol{\theta})]^{\mathsf{T}}.

That is, if the dimension, d, of the parameter vector is less than the number of functions, s, of the parameter vector in the above representation of the probability density function. Most common distributions in the exponential family are not curved, and many algorithms designed to work with any exponential family implicitly or explicitly assume that the distribution is not curved.
Just as in the case of a scalar-valued parameter, the function A(θ), or equivalently g(θ), is automatically determined by the normalization constraint once the other functions have been chosen. Even if η(θ) is not one-to-one, the functions A(η) and g(η) can be defined by requiring that the distribution is normalized for each value of the natural parameter η. This yields the canonical form

f_X(x \mid \boldsymbol{\eta}) = h(x)\,\exp\!\left[\boldsymbol{\eta}\cdot \mathbf{T}(x) - A(\boldsymbol{\eta})\right],

or equivalently

f_X(x \mid \boldsymbol{\eta}) = h(x)\,g(\boldsymbol{\eta})\,\exp\!\left[\boldsymbol{\eta}\cdot \mathbf{T}(x)\right].

The above forms may sometimes be seen with \boldsymbol{\eta}^{\mathsf{T}}\mathbf{T}(x) in place of \boldsymbol{\eta}\cdot\mathbf{T}(x). These are exactly equivalent formulations, merely using different notation for the dot product.
The vector-parameter form over a single scalar-valued random variable can be trivially expanded to cover a joint distribution over a vector of random variables. The resulting distribution is simply the same as the above distribution for a scalar-valued random variable with each occurrence of the scalarxreplaced by the vector
\mathbf{x} = [x_1 \ \ x_2 \ \ \cdots \ \ x_k]^{\mathsf{T}}.

The dimension k of the random variable need not match the dimension d of the parameter vector, nor (in the case of a curved exponential function) the dimension s of the natural parameter η and sufficient statistic T(x).
The distribution in this case is written as
f_X(\mathbf{x} \mid \boldsymbol{\theta}) = h(\mathbf{x})\,\exp\!\left[\sum_{i=1}^{s} \eta_i(\boldsymbol{\theta})\,T_i(\mathbf{x}) - A(\boldsymbol{\theta})\right]

or more compactly as

f_X(\mathbf{x} \mid \boldsymbol{\theta}) = h(\mathbf{x})\,\exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(\mathbf{x}) - A(\boldsymbol{\theta})\right]

or alternatively as

f_X(\mathbf{x} \mid \boldsymbol{\theta}) = g(\boldsymbol{\theta})\,h(\mathbf{x})\,\exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(\mathbf{x})\right]
We use cumulative distribution functions (CDF) in order to encompass both discrete and continuous distributions.

Suppose H is a non-decreasing function of a real variable. Then Lebesgue–Stieltjes integrals with respect to dH(x) are integrals with respect to the reference measure of the exponential family generated by H.
Any member of that exponential family has cumulative distribution function
dF(\mathbf{x} \mid \boldsymbol{\theta}) = \exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(\mathbf{x}) - A(\boldsymbol{\theta})\right]\,dH(\mathbf{x}).

H(x) is a Lebesgue–Stieltjes integrator for the reference measure. When the reference measure is finite, it can be normalized and H is actually the cumulative distribution function of a probability distribution. If F is absolutely continuous with a density f(x) with respect to a reference measure dx (typically Lebesgue measure), one can write dF(x) = f(x) dx.

In this case, H is also absolutely continuous and can be written dH(x) = h(x) dx, so the formulas reduce to those of the previous paragraphs. If F is discrete, then H is a step function (with steps on the support of F).

Alternatively, we can write the probability measure directly as

P(d\mathbf{x} \mid \boldsymbol{\theta}) = \exp\!\left[\boldsymbol{\eta}(\boldsymbol{\theta})\cdot \mathbf{T}(\mathbf{x}) - A(\boldsymbol{\theta})\right]\,\mu(d\mathbf{x})

for some reference measure μ.
In the definitions above, the functions T(x), η(θ), and A(η) were arbitrary. However, these functions have important interpretations in the resulting probability distribution.
The function A is important in its own right, because the mean, variance, and other moments of the sufficient statistic T(x) can be derived simply by differentiating A(η). For example, because log(x) is one of the components of the sufficient statistic of the gamma distribution, E[log x] can be easily determined for this distribution using A(η). Technically, this is true because K(u | η) = A(η + u) − A(η) is the cumulant generating function of the sufficient statistic.
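A small symbolic sketch of this property, using the third-party sympy library (an assumption) and the Poisson distribution in its natural parameterization, where T(x) = x and A(η) = e^η with η = log λ:

```python
import sympy as sp

eta, lam = sp.symbols("eta lambda", positive=True)

# Poisson in natural form: T(x) = x, A(eta) = exp(eta), with eta = log(lambda).
A = sp.exp(eta)

mean_T = sp.diff(A, eta)        # E[T(X)]   = A'(eta)  = exp(eta)
var_T = sp.diff(A, eta, 2)      # Var[T(X)] = A''(eta) = exp(eta)

# Substituting eta = log(lambda) recovers the familiar mean and variance lambda.
print(sp.simplify(mean_T.subs(eta, sp.log(lam))))   # lambda
print(sp.simplify(var_T.subs(eta, sp.log(lam))))    # lambda
```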
Exponential families have a large number of properties that make them extremely useful for statistical analysis. In many cases, it can be shown thatonlyexponential families have these properties. Examples:
Given an exponential family defined by f_X(x | θ) = h(x) exp[θ·T(x) − A(θ)], where Θ is the parameter space, such that θ ∈ Θ ⊂ ℝ^k. Then
It is critical, when considering the examples in this section, to remember the discussion above about what it means to say that a "distribution" is an exponential family, and in particular to keep in mind that the set of parameters that are allowed to vary is critical in determining whether a "distribution" is or is not an exponential family.
The normal, exponential, log-normal, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, geometric, inverse Gaussian, ALAAM, von Mises, and von Mises–Fisher distributions are all exponential families.
Some distributions are exponential families only if some of their parameters are held fixed. The family of Pareto distributions with a fixed minimum bound x_m forms an exponential family. The families of binomial and multinomial distributions with a fixed number of trials n but unknown probability parameter(s) are exponential families. The family of negative binomial distributions with a fixed number of failures (a.k.a. stopping-time parameter) r is an exponential family. However, when any of the above-mentioned fixed parameters are allowed to vary, the resulting family is not an exponential family.
As mentioned above, as a general rule, thesupportof an exponential family must remain the same across all parameter settings in the family. This is why the above cases (e.g. binomial with varying number of trials, Pareto with varying minimum bound) are not exponential families — in all of the cases, the parameter in question affects the support (particularly, changing the minimum or maximum possible value). For similar reasons, neither thediscrete uniform distributionnorcontinuous uniform distributionare exponential families as one or both bounds vary.
TheWeibull distributionwith fixed shape parameterkis an exponential family. Unlike in the previous examples, the shape parameter does not affect the support; the fact that allowing it to vary makes the Weibull non-exponential is due rather to the particular form of the Weibull'sprobability density function(kappears in the exponent of an exponent).
In general, distributions that result from a finite or infinitemixtureof other distributions, e.g.mixture modeldensities andcompound probability distributions, arenotexponential families. Examples are typical Gaussianmixture modelsas well as manyheavy-tailed distributionsthat result fromcompounding(i.e. infinitely mixing) a distribution with aprior distributionover one of its parameters, e.g. theStudent'st-distribution(compounding anormal distributionover agamma-distributedprecision prior), and thebeta-binomialandDirichlet-multinomialdistributions. Other examples of distributions that are not exponential families are theF-distribution,Cauchy distribution,hypergeometric distributionandlogistic distribution.
Following are some detailed examples of the representation of some useful distributions as exponential families.
As a first example, consider a random variable distributed normally with unknown meanμandknownvarianceσ2. The probability density function is then
fσ(x;μ)=12πσ2e−(x−μ)2/2σ2.{\displaystyle f_{\sigma }(x;\mu )={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(x-\mu )^{2}/2\sigma ^{2}}.}
This is a single-parameter exponential family, as can be seen by setting
Tσ(x)=xσ,hσ(x)=12πσ2e−x2/2σ2,Aσ(μ)=μ22σ2,ησ(μ)=μσ.{\displaystyle {\begin{aligned}T_{\sigma }(x)&={\frac {x}{\sigma }},&h_{\sigma }(x)&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-x^{2}/2\sigma ^{2}},\\[4pt]A_{\sigma }(\mu )&={\frac {\mu ^{2}}{2\sigma ^{2}}},&\eta _{\sigma }(\mu )&={\frac {\mu }{\sigma }}.\end{aligned}}}
Ifσ= 1this is in canonical form, as thenη(μ) =μ.
Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then
f(y;μ,σ2)=12πσ2e−(y−μ)2/2σ2.{\displaystyle f(y;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-(y-\mu )^{2}/2\sigma ^{2}}.}
This is an exponential family which can be written in canonical form by defining
h(y)=12π,η=[μσ2,−12σ2],T(y)=(y,y2)T,A(η)=μ22σ2+log|σ|=−η124η2+12log|12η2|{\displaystyle {\begin{aligned}h(y)&={\frac {1}{\sqrt {2\pi }}},&{\boldsymbol {\eta }}&=\left[{\frac {\mu }{\sigma ^{2}}},~-{\frac {1}{2\sigma ^{2}}}\right],\\T(y)&=\left(y,y^{2}\right)^{\mathsf {T}},&A({\boldsymbol {\eta }})&={\frac {\mu ^{2}}{2\sigma ^{2}}}+\log |\sigma |=-{\frac {\eta _{1}^{2}}{4\eta _{2}}}+{\frac {1}{2}}\log \left|{\frac {1}{2\eta _{2}}}\right|\end{aligned}}}
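A small numerical sketch (illustrative only; the particular values of μ, σ and the grid of test points are arbitrary choices) confirms that this factorization reproduces the usual normal density:

# Check that h(y) * exp(eta . T(y) - A(eta)) equals the normal pdf,
# with h, eta, T and A as defined above.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.3, 0.7
eta = np.array([mu / sigma**2, -1.0 / (2 * sigma**2)])
A = mu**2 / (2 * sigma**2) + np.log(sigma)

y = np.linspace(-3.0, 5.0, 9)
T = np.stack([y, y**2], axis=1)                          # sufficient statistic (y, y^2)
family_form = (1 / np.sqrt(2 * np.pi)) * np.exp(T @ eta - A)
print(np.allclose(family_form, norm.pdf(y, loc=mu, scale=sigma)))   # True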
As an example of a discrete exponential family, consider thebinomial distributionwithknownnumber of trialsn. Theprobability mass functionfor this distribution isf(x)=(nx)px(1−p)n−x,x∈{0,1,2,…,n}.{\displaystyle f(x)={\binom {n}{x}}p^{x}{\left(1-p\right)}^{n-x},\quad x\in \{0,1,2,\ldots ,n\}.}This can equivalently be written asf(x)=(nx)exp[xlog(p1−p)+nlog(1−p)],{\displaystyle f(x)={\binom {n}{x}}\exp \left[x\log \left({\frac {p}{1-p}}\right)+n\log(1-p)\right],}which shows that the binomial distribution is an exponential family, whose natural parameter isη=logp1−p.{\displaystyle \eta =\log {\frac {p}{1-p}}.}This function ofpis known aslogit.
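The same kind of check works for this rewrite; the sketch below (illustrative only, with an arbitrary choice of n and p) compares the exponential-family form with the binomial probability mass function:

# Verify f(x) = C(n, x) * exp[x * logit(p) + n * log(1 - p)] against scipy's binomial pmf.
import numpy as np
from scipy.stats import binom
from scipy.special import comb

n, p = 10, 0.3
eta = np.log(p / (1 - p))                                # natural parameter (the logit of p)
x = np.arange(n + 1)
family_form = comb(n, x) * np.exp(x * eta + n * np.log(1 - p))
print(np.allclose(family_form, binom.pmf(x, n, p)))      # True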
The following table shows how to rewrite a number of common distributions as exponential-family distributions with natural parameters. Refer to the flashcards[12]for main exponential families.
For a scalar variable and scalar parameter, the form is as follows:
fX(x∣θ)=h(x)exp[η(θ)T(x)−A(η)]{\displaystyle f_{X}(x\mid \theta )=h(x)\exp \left[\eta ({\theta })T(x)-A(\eta )\right]}
For a scalar variable and vector parameter:
fX(x∣θ)=h(x)exp[η(θ)⋅T(x)−A(η)]fX(x∣θ)=h(x)g(θ)exp[η(θ)⋅T(x)]{\displaystyle {\begin{aligned}f_{X}(x\mid {\boldsymbol {\theta }})&=h(x)\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)-A({\boldsymbol {\eta }})\right]\\[4pt]f_{X}(x\mid {\boldsymbol {\theta }})&=h(x)\,g({\boldsymbol {\theta }})\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (x)\right]\end{aligned}}}
For a vector variable and vector parameter:
fX(x∣θ)=h(x)exp[η(θ)⋅T(x)−A(η)]{\displaystyle f_{X}(\mathbf {x} \mid {\boldsymbol {\theta }})=h(\mathbf {x} )\,\exp \left[{\boldsymbol {\eta }}({\boldsymbol {\theta }})\cdot \mathbf {T} (\mathbf {x} )-A({\boldsymbol {\eta }})\right]}
The above formulas choose the functional form of the exponential-family with a log-partition functionA(η){\displaystyle A({\boldsymbol {\eta }})}. The reason for this is so that themoments of the sufficient statisticscan be calculated easily, simply by differentiating this function. Alternative forms involve either parameterizing this function in terms of the normal parameterθ{\displaystyle {\boldsymbol {\theta }}}instead of the natural parameter, and/or using a factorg(η){\displaystyle g({\boldsymbol {\eta }})}outside of the exponential. The relation between the latter and the former is:A(η)=−logg(η),g(η)=e−A(η){\displaystyle {\begin{aligned}A({\boldsymbol {\eta }})&=-\log g({\boldsymbol {\eta }}),\\[2pt]g({\boldsymbol {\eta }})&=e^{-A({\boldsymbol {\eta }})}\end{aligned}}}To convert between the representations involving the two types of parameter, use the formulas below for writing one type of parameter in terms of the other.
(The table of distributions itself is not reproduced here; the following notes accompany entries in it. In the table, log2 refers to the iterated logarithm.)

For the categorical and multinomial distributions, the mapping from the moment parameterspi{\displaystyle p_{i}}to the natural parameters is the inverse softmax function, a generalization of thelogitfunction, and the mapping from the natural parameters back to the moment parameters is thesoftmax function, a generalization of thelogistic function. Depending on the parameterization variant, the moment parameters are recovered either as

pi=eηiC{\displaystyle p_{i}={\frac {e^{\eta _{i}}}{C}}}whereC=∑i=1keηi{\textstyle C=\sum \limits _{i=1}^{k}e^{\eta _{i}}}(the variant withk{\displaystyle k}natural parameters), or as

1C2[eη1⋮eηk−11]{\displaystyle {\frac {1}{C_{2}}}{\begin{bmatrix}e^{\eta _{1}}\\[5pt]\vdots \\[5pt]e^{\eta _{k-1}}\\[5pt]1\end{bmatrix}}}whereC2=1+∑i=1k−1eηi{\textstyle C_{2}=1+\sum \limits _{i=1}^{k-1}e^{\eta _{i}}}(the variant withk−1{\displaystyle k-1}natural parameters).
Three variants with different parameterizations are given, to facilitate computing moments of the sufficient statistics.
The three variants of thecategorical distributionandmultinomial distributionare due to the fact that the parameterspi{\displaystyle p_{i}}are constrained, such that
∑i=1kpi=1.{\displaystyle \sum _{i=1}^{k}p_{i}=1\,.}
Thus, there are onlyk−1{\displaystyle k-1}independent parameters.
Variants 1 and 2 are not actually standard exponential families at all. Rather they arecurved exponential families, i.e. there arek−1{\displaystyle k-1}independent parameters embedded in ak{\displaystyle k}-dimensional parameter space.[13]Many of the standard results for exponential families do not apply to curved exponential families. An example is the log-partition functionA(η){\displaystyle A(\eta )}, which has the value of 0 in the curved cases. In standard exponential families, the derivatives of this function correspond to the moments (more technically, thecumulants) of the sufficient statistics, e.g. the mean and variance. However, a value of 0 suggests that the mean and variance of all the sufficient statistics are uniformly 0, whereas in fact the mean of thei{\displaystyle i}th sufficient statistic should bepi{\displaystyle p_{i}}. (This does emerge correctly when using the form ofA(η){\displaystyle A(\eta )}shown in variant 3.)
We start with the normalization of the probability distribution. In general, any non-negative functionf(x) that serves as thekernelof a probability distribution (the part encoding all dependence onx) can be made into a proper distribution bynormalizing: i.e.
p(x)=1Zf(x){\displaystyle p(x)={\frac {1}{Z}}f(x)}
where
Z=∫xf(x)dx.{\displaystyle Z=\int _{x}f(x)\,dx.}
The factorZis sometimes termed thenormalizerorpartition function, based on an analogy tostatistical physics.
In the case of an exponential family wherep(x;η)=g(η)h(x)eη⋅T(x),{\displaystyle p(x;{\boldsymbol {\eta }})=g({\boldsymbol {\eta }})h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)},}
the kernel isK(x)=h(x)eη⋅T(x){\displaystyle K(x)=h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}}and the partition function isZ=∫xh(x)eη⋅T(x)dx.{\displaystyle Z=\int _{x}h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx.}
Since the distribution must be normalized, we have
1=∫xg(η)h(x)eη⋅T(x)dx=g(η)∫xh(x)eη⋅T(x)dx=g(η)Z.{\displaystyle {\begin{aligned}1&=\int _{x}g({\boldsymbol {\eta }})h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx\\&=g({\boldsymbol {\eta }})\int _{x}h(x)e^{{\boldsymbol {\eta }}\cdot \mathbf {T} (x)}\,dx\\[1ex]&=g({\boldsymbol {\eta }})Z.\end{aligned}}}
In other words,g(η)=1Z{\displaystyle g({\boldsymbol {\eta }})={\frac {1}{Z}}}or equivalentlyA(η)=−logg(η)=logZ.{\displaystyle A({\boldsymbol {\eta }})=-\log g({\boldsymbol {\eta }})=\log Z.}
This justifies callingAthelog-normalizerorlog-partition function.
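As a concrete illustration, A(η) can be recovered by numerically integrating the kernel. The sketch below (not from the source; it assumes the exponential distribution written with h(x) = 1, T(x) = x and natural parameter η = −λ < 0 on x > 0) compares log Z computed by quadrature with the closed form A(η) = −log(−η):

# Compare the log-partition function computed as log Z (numerical integration of the
# kernel) with its closed form, for the exponential distribution (an assumed example).
import numpy as np
from scipy.integrate import quad

eta = -2.0                                               # natural parameter, eta = -lambda
Z, _ = quad(lambda x: np.exp(eta * x), 0, np.inf)        # partition function Z = integral of exp(eta*x)
A_numeric = np.log(Z)
A_closed = -np.log(-eta)                                 # closed form A(eta) = -log(-eta)
print(A_numeric, A_closed)                               # both approximately -0.6931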
Now, themoment-generating functionofT(x)is
MT(u)≡E[exp(uTT(x))∣η]=∫xh(x)exp[(η+u)TT(x)−A(η)]dx=eA(η+u)−A(η){\displaystyle {\begin{aligned}M_{T}(u)&\equiv \operatorname {E} \left[\exp \left(u^{\mathsf {T}}T(x)\right)\mid \eta \right]\\&=\int _{x}h(x)\,\exp \left[(\eta +u)^{\mathsf {T}}T(x)-A(\eta )\right]\,dx\\[1ex]&=e^{A(\eta +u)-A(\eta )}\end{aligned}}}
proving the earlier statement that
K(u∣η)=A(η+u)−A(η){\displaystyle K(u\mid \eta )=A(\eta +u)-A(\eta )}
is thecumulant generating functionforT.
An important subclass of exponential families are thenatural exponential families, which have a similar form for the moment-generating function for the distribution ofx.
In particular, using the properties of the cumulant generating function,
E(Tj)=∂A(η)∂ηj{\displaystyle \operatorname {E} (T_{j})={\frac {\partial A(\eta )}{\partial \eta _{j}}}}
and
cov(Ti,Tj)=∂2A(η)∂ηi∂ηj.{\displaystyle \operatorname {cov} \left(T_{i},\,T_{j}\right)={\frac {\partial ^{2}A(\eta )}{\partial \eta _{i}\,\partial \eta _{j}}}.}
The first two raw moments and all mixed second moments can be recovered from these two identities. Higher-order moments and cumulants are obtained by higher derivatives. This technique is often useful whenTis a complicated function of the data, whose moments are difficult to calculate by integration.
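A quick way to see these identities in action is to differentiate A numerically. The following sketch (an assumed example using the Bernoulli family, with T(x) = x, η = logit(p) and A(η) = log(1 + e^η)) recovers the mean p and variance p(1 − p) of the sufficient statistic by finite differences:

# Recover E[T] and Var[T] from the first and second derivatives of A, estimated
# by central finite differences, for the Bernoulli family (an assumed example).
import numpy as np

p = 0.3
eta = np.log(p / (1 - p))
A = lambda e: np.log1p(np.exp(e))

h = 1e-4
mean_T = (A(eta + h) - A(eta - h)) / (2 * h)             # A'(eta)  ~ E[T] = p
var_T = (A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2    # A''(eta) ~ Var[T] = p(1 - p)
print(mean_T, p)                                         # ~ 0.3
print(var_T, p * (1 - p))                                # ~ 0.21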
Another way to see this that does not rely on the theory ofcumulantsis to begin from the fact that the distribution of an exponential family must be normalized, and differentiate. We illustrate using the simple case of a one-dimensional parameter, but an analogous derivation holds more generally.
In the one-dimensional case, we havep(x)=g(η)h(x)eηT(x).{\displaystyle p(x)=g(\eta )h(x)e^{\eta T(x)}.}
This must be normalized, so
1=∫xp(x)dx=∫xg(η)h(x)eηT(x)dx=g(η)∫xh(x)eηT(x)dx.{\displaystyle 1=\int _{x}p(x)\,dx=\int _{x}g(\eta )h(x)e^{\eta T(x)}\,dx=g(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx.}
Take thederivativeof both sides with respect toη:
0=g(η)ddη∫xh(x)eηT(x)dx+g′(η)∫xh(x)eηT(x)dx=g(η)∫xh(x)(ddηeηT(x))dx+g′(η)∫xh(x)eηT(x)dx=g(η)∫xh(x)eηT(x)T(x)dx+g′(η)∫xh(x)eηT(x)dx=∫xT(x)g(η)h(x)eηT(x)dx+g′(η)g(η)∫xg(η)h(x)eηT(x)dx=∫xT(x)p(x)dx+g′(η)g(η)∫xp(x)dx=E[T(x)]+g′(η)g(η)=E[T(x)]+ddηlogg(η){\displaystyle {\begin{aligned}0&=g(\eta ){\frac {d}{d\eta }}\int _{x}h(x)e^{\eta T(x)}\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=g(\eta )\int _{x}h(x)\left({\frac {d}{d\eta }}e^{\eta T(x)}\right)\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=g(\eta )\int _{x}h(x)e^{\eta T(x)}T(x)\,dx+g'(\eta )\int _{x}h(x)e^{\eta T(x)}\,dx\\[1ex]&=\int _{x}T(x)g(\eta )h(x)e^{\eta T(x)}\,dx+{\frac {g'(\eta )}{g(\eta )}}\int _{x}g(\eta )h(x)e^{\eta T(x)}\,dx\\[1ex]&=\int _{x}T(x)p(x)\,dx+{\frac {g'(\eta )}{g(\eta )}}\int _{x}p(x)\,dx\\[1ex]&=\operatorname {E} [T(x)]+{\frac {g'(\eta )}{g(\eta )}}\\[1ex]&=\operatorname {E} [T(x)]+{\frac {d}{d\eta }}\log g(\eta )\end{aligned}}}
Therefore,E[T(x)]=−ddηlogg(η)=ddηA(η).{\displaystyle \operatorname {E} [T(x)]=-{\frac {d}{d\eta }}\log g(\eta )={\frac {d}{d\eta }}A(\eta ).}
As an introductory example, consider thegamma distribution, whose distribution is defined by
p(x)=βαΓ(α)xα−1e−βx.{\displaystyle p(x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.}
Referring to the above table, we can see that the natural parameter is given by
η1=α−1,η2=−β,{\displaystyle {\begin{aligned}\eta _{1}&=\alpha -1,\\\eta _{2}&=-\beta ,\end{aligned}}}
the reverse substitutions are
α=η1+1,β=−η2,{\displaystyle {\begin{aligned}\alpha &=\eta _{1}+1,\\\beta &=-\eta _{2},\end{aligned}}}
the sufficient statistics are(logx, x), and the log-partition function is
A(η1,η2)=logΓ(η1+1)−(η1+1)log(−η2).{\displaystyle A(\eta _{1},\eta _{2})=\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2}).}
We can find the mean of the sufficient statistics as follows. First, forη1:
E[logx]=∂∂η1A(η1,η2)=∂∂η1[logΓ(η1+1)−(η1+1)log(−η2)]=ψ(η1+1)−log(−η2)=ψ(α)−logβ,{\displaystyle {\begin{aligned}\operatorname {E} [\log x]&={\frac {\partial }{\partial \eta _{1}}}A(\eta _{1},\eta _{2})\\[0.5ex]&={\frac {\partial }{\partial \eta _{1}}}\left[\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2})\right]\\[1ex]&=\psi (\eta _{1}+1)-\log(-\eta _{2})\\[1ex]&=\psi (\alpha )-\log \beta ,\end{aligned}}}
whereψ(x){\displaystyle \psi (x)}is thedigamma function(the derivative of log gamma), and we used the reverse substitutions in the last step.
Now, forη2:
E[x]=∂∂η2A(η1,η2)=∂∂η2[logΓ(η1+1)−(η1+1)log(−η2)]=−(η1+1)1−η2(−1)=η1+1−η2=αβ,{\displaystyle {\begin{aligned}\operatorname {E} [x]&={\frac {\partial }{\partial \eta _{2}}}A(\eta _{1},\eta _{2})\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\log \Gamma (\eta _{1}+1)-(\eta _{1}+1)\log(-\eta _{2})\right]\\[1ex]&=-(\eta _{1}+1){\frac {1}{-\eta _{2}}}(-1)={\frac {\eta _{1}+1}{-\eta _{2}}}={\frac {\alpha }{\beta }},\end{aligned}}}
again making the reverse substitution in the last step.
To compute the variance ofx, we just differentiate again:
Var(x)=∂2∂η22A(η1,η2)=∂∂η2η1+1−η2=η1+1η22=αβ2.{\displaystyle {\begin{aligned}\operatorname {Var} (x)&={\frac {\partial ^{2}}{\partial \eta _{2}^{2}}}A{\left(\eta _{1},\eta _{2}\right)}={\frac {\partial }{\partial \eta _{2}}}{\frac {\eta _{1}+1}{-\eta _{2}}}\\[1ex]&={\frac {\eta _{1}+1}{\eta _{2}^{2}}}={\frac {\alpha }{\beta ^{2}}}.\end{aligned}}}
All of these calculations can be done using integration, making use of various properties of thegamma function, but this requires significantly more work.
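For readers who want to sanity-check the three results above, the following Monte Carlo sketch (illustrative only; the values of α and β are arbitrary) compares them against sample averages:

# Monte Carlo check of E[log x] = psi(alpha) - log(beta), E[x] = alpha/beta and
# Var(x) = alpha/beta^2 for the gamma distribution with shape alpha and rate beta.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=1 / beta, size=1_000_000)

print(np.mean(np.log(x)), digamma(alpha) - np.log(beta))
print(np.mean(x), alpha / beta)
print(np.var(x), alpha / beta**2)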
As another example consider a real valued random variableXwith density
pθ(x)=θe−x(1+e−x)θ+1{\displaystyle p_{\theta }(x)={\frac {\theta e^{-x}}{\left(1+e^{-x}\right)^{\theta +1}}}}
indexed by shape parameterθ∈(0,∞){\displaystyle \theta \in (0,\infty )}(this is called theskew-logistic distribution). The density can be rewritten as
e−x1+e−xexp[−θlog(1+e−x)+log(θ)]{\displaystyle {\frac {e^{-x}}{1+e^{-x}}}\exp \left[-\theta \log \left(1+e^{-x}\right)+\log(\theta )\right]}
Notice this is an exponential family with natural parameter
η=−θ,{\displaystyle \eta =-\theta ,}
sufficient statistic
T=log(1+e−x),{\displaystyle T=\log \left(1+e^{-x}\right),}
and log-partition function
A(η)=−log(θ)=−log(−η){\displaystyle A(\eta )=-\log(\theta )=-\log(-\eta )}
So using the first identity,
E[log(1+e−X)]=E(T)=∂A(η)∂η=∂∂η[−log(−η)]=1−η=1θ,{\displaystyle \operatorname {E} \left[\log \left(1+e^{-X}\right)\right]=\operatorname {E} (T)={\frac {\partial A(\eta )}{\partial \eta }}={\frac {\partial }{\partial \eta }}[-\log(-\eta )]={\frac {1}{-\eta }}={\frac {1}{\theta }},}
and using the second identity
var[log(1+e−X)]=∂2A(η)∂η2=∂∂η[1−η]=1(−η)2=1θ2.{\displaystyle \operatorname {var} \left[\log \left(1+e^{-X}\right)\right]={\frac {\partial ^{2}A(\eta )}{\partial \eta ^{2}}}={\frac {\partial }{\partial \eta }}\left[{\frac {1}{-\eta }}\right]={\frac {1}{{\left(-\eta \right)}^{2}}}={\frac {1}{\theta ^{2}}}.}
This example illustrates a case where using this method is very simple, but the direct calculation would be nearly impossible.
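Simulation, however, makes the identities easy to check. The sketch below (not from the source; it samples by inverting the CDF F(x) = (1 + e^{−x})^{−θ}, which follows from the stated density) estimates the mean and variance of T = log(1 + e^{−X}):

# Monte Carlo check of E[T] = 1/theta and Var[T] = 1/theta^2 for the skew-logistic
# distribution, using inverse-CDF sampling (F(x) = (1 + exp(-x))**(-theta)).
import numpy as np

rng = np.random.default_rng(2)
theta = 1.7
u = rng.uniform(size=1_000_000)
x = -np.log(u**(-1 / theta) - 1)             # inverse-CDF samples
T = np.log1p(np.exp(-x))                     # sufficient statistic log(1 + exp(-x))

print(np.mean(T), 1 / theta)                 # E[T] = 1/theta
print(np.var(T), 1 / theta**2)               # Var[T] = 1/theta^2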
The final example is one where integration would be extremely difficult. This is the case of theWishart distribution, which is defined over matrices. Even taking derivatives is a bit tricky, as it involvesmatrix calculus, but the respective identities are listed in that article.
From the above table, we can see that the natural parameter is given by
η1=−12V−1,η2=−12(n−p−1),{\displaystyle {\begin{aligned}{\boldsymbol {\eta }}_{1}&=-{\tfrac {1}{2}}\mathbf {V} ^{-1},\\\eta _{2}&={\hphantom {-}}{\tfrac {1}{2}}\left(n-p-1\right),\end{aligned}}}
the reverse substitutions are
V=−12η1−1,n=2η2+p+1,{\displaystyle {\begin{aligned}\mathbf {V} &=-{\tfrac {1}{2}}{\boldsymbol {\eta }}_{1}^{-1},\\n&=2\eta _{2}+p+1,\end{aligned}}}
and the sufficient statistics are(X,log|X|).{\displaystyle (\mathbf {X} ,\log |\mathbf {X} |).}
The log-partition function is written in various forms in the table, to facilitate differentiation and back-substitution. We use the following forms:
A(η1,n)=−n2log|−η1|+logΓp(n2),A(V,η2)=(η2+p+12)log(2p|V|)+logΓp(η2+p+12).{\displaystyle {\begin{aligned}A({\boldsymbol {\eta }}_{1},n)&=-{\frac {n}{2}}\log \left|-{\boldsymbol {\eta }}_{1}\right|+\log \Gamma _{p}{\left({\frac {n}{2}}\right)},\\[1ex]A(\mathbf {V} ,\eta _{2})&=\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)+\log \Gamma _{p}{\left(\eta _{2}+{\frac {p+1}{2}}\right)}.\end{aligned}}}
To differentiate with respect toη1, we need the followingmatrix calculusidentity:
∂log|aX|∂X=(X−1)T{\displaystyle {\frac {\partial \log |a\mathbf {X} |}{\partial \mathbf {X} }}=(\mathbf {X} ^{-1})^{\mathsf {T}}}
Then:
E[X]=∂∂η1A(η1,…)=∂∂η1[−n2log|−η1|+logΓp(n2)]=−n2(η1−1)T=n2(−η1−1)T=n(V)T=nV{\displaystyle {\begin{aligned}\operatorname {E} [\mathbf {X} ]&={\frac {\partial }{\partial {\boldsymbol {\eta }}_{1}}}A\left({\boldsymbol {\eta }}_{1},\ldots \right)\\[1ex]&={\frac {\partial }{\partial {\boldsymbol {\eta }}_{1}}}\left[-{\frac {n}{2}}\log \left|-{\boldsymbol {\eta }}_{1}\right|+\log \Gamma _{p}{\left({\frac {n}{2}}\right)}\right]\\[1ex]&=-{\frac {n}{2}}({\boldsymbol {\eta }}_{1}^{-1})^{\mathsf {T}}\\[1ex]&={\frac {n}{2}}(-{\boldsymbol {\eta }}_{1}^{-1})^{\mathsf {T}}\\[1ex]&=n(\mathbf {V} )^{\mathsf {T}}\\[1ex]&=n\mathbf {V} \end{aligned}}}
The last line uses the fact thatVis symmetric, and therefore it is the same when transposed.
Now, forη2, we first need to expand the part of the log-partition function that involves themultivariate gamma function:
logΓp(a)=log(πp(p−1)4∏j=1pΓ(a+1−j2))=p(p−1)4logπ+∑j=1plogΓ(a+1−j2){\displaystyle {\begin{aligned}\log \Gamma _{p}(a)&=\log \left(\pi ^{\frac {p(p-1)}{4}}\prod _{j=1}^{p}\Gamma {\left(a+{\frac {1-j}{2}}\right)}\right)\\&={\frac {p(p-1)}{4}}\log \pi +\sum _{j=1}^{p}\log \Gamma {\left(a+{\frac {1-j}{2}}\right)}\end{aligned}}}
We also need thedigamma function:
ψ(x)=ddxlogΓ(x).{\displaystyle \psi (x)={\frac {d}{dx}}\log \Gamma (x).}
Then:
E[log|X|]=∂∂η2A(…,η2)=∂∂η2[(η2+p+12)log(2p|V|)+logΓp(η2+p+12)]=∂∂η2[(η2+p+12)log(2p|V|)]+∂∂η2[p(p−1)4logπ]=+∂∂η2∑j=1plogΓ(η2+p+12+1−j2)=plog2+log|V|+∑j=1pψ(η2+p+12+1−j2)=plog2+log|V|+∑j=1pψ(n−p−12+p+12+1−j2)=plog2+log|V|+∑j=1pψ(n+1−j2){\displaystyle {\begin{aligned}\operatorname {E} [\log |\mathbf {X} |]&={\frac {\partial }{\partial \eta _{2}}}A\left(\ldots ,\eta _{2}\right)\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)+\log \Gamma _{p}{\left(\eta _{2}+{\frac {p+1}{2}}\right)}\right]\\[1ex]&={\frac {\partial }{\partial \eta _{2}}}\left[\left(\eta _{2}+{\frac {p+1}{2}}\right)\log \left(2^{p}\left|\mathbf {V} \right|\right)\right]+{\frac {\partial }{\partial \eta _{2}}}\left[{\frac {p(p-1)}{4}}\log \pi \right]\\&{\hphantom {=}}+{\frac {\partial }{\partial \eta _{2}}}\sum _{j=1}^{p}\log \Gamma {\left(\eta _{2}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left(\eta _{2}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left({\frac {n-p-1}{2}}+{\frac {p+1}{2}}+{\frac {1-j}{2}}\right)}\\[1ex]&=p\log 2+\log |\mathbf {V} |+\sum _{j=1}^{p}\psi {\left({\frac {n+1-j}{2}}\right)}\end{aligned}}}
This latter formula is listed in theWishart distributionarticle. Both of these expectations are needed when deriving thevariational Bayesupdate equations in aBayes networkinvolving a Wishart distribution (which is theconjugate priorof themultivariate normal distribution).
Computing these formulas using integration would be much more difficult. The first one, for example, would require matrix integration.
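Simulation nevertheless gives an independent check of the first expectation. The sketch below (illustrative only; it relies on scipy's Wishart sampler, with arbitrary choices of p, n and V) compares a Monte Carlo estimate of E[log |X|] with the closed form derived above:

# Monte Carlo check of E[log|X|] = p*log(2) + log|V| + sum_j psi((n + 1 - j)/2)
# for the Wishart distribution with n degrees of freedom and scale matrix V.
import numpy as np
from scipy.stats import wishart
from scipy.special import digamma

p, n = 3, 7
V = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])               # an arbitrary positive-definite scale matrix

samples = wishart.rvs(df=n, scale=V, size=100_000, random_state=3)
mc = np.linalg.slogdet(samples)[1].mean()     # average of log-determinants
closed = p * np.log(2) + np.linalg.slogdet(V)[1] + sum(
    digamma((n + 1 - j) / 2) for j in range(1, p + 1))
print(mc, closed)                             # agree to a couple of decimal places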
Therelative entropy(Kullback–Leibler divergence, KL divergence) of two distributions in an exponential family has a simple expression as theBregman divergencebetween the natural parameters with respect to the log-normalizer.[14]The relative entropy is defined in terms of an integral, while the Bregman divergence is defined in terms of a derivative and inner product, and thus is easier to calculate and has aclosed-form expression(assuming the derivative has a closed-form expression). Further, the Bregman divergence in terms of the natural parameters and the log-normalizer equals the Bregman divergence of the dual parameters (expectation parameters), in the opposite order, for theconvex conjugatefunction.[15]
Fixing an exponential family with log-normalizerA{\displaystyle A}(with convex conjugateA∗{\displaystyle A^{*}}), writingPA,θ{\displaystyle P_{A,\theta }}for the distribution in this family corresponding to a fixed value of the natural parameterθ{\displaystyle \theta }(writingθ′{\displaystyle \theta '}for another value, and withη,η′{\displaystyle \eta ,\eta '}for the corresponding dual expectation/moment parameters), writingKLfor the KL divergence, andBA{\displaystyle B_{A}}for the Bregman divergence, the divergences are related as:KL(PA,θ∥PA,θ′)=BA(θ′∥θ)=BA∗(η∥η′).{\displaystyle \operatorname {KL} (P_{A,\theta }\parallel P_{A,\theta '})=B_{A}(\theta '\parallel \theta )=B_{A^{*}}(\eta \parallel \eta ').}
The KL divergence is conventionally written with respect to thefirstparameter, while the Bregman divergence is conventionally written with respect to thesecondparameter, and thus this can be read as "the relative entropy is equal to the Bregman divergence defined by the log-normalizer on the swapped natural parameters", or equivalently as "equal to the Bregman divergence defined by the dual to the log-normalizer on the expectation parameters".
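A small numerical sketch (an assumed example: the normal family with known variance σ = 1, so that the natural parameter is θ = μ and the log-normalizer is A(θ) = θ²/2) illustrates the identity KL(P_{A,θ} ∥ P_{A,θ′}) = B_A(θ′ ∥ θ):

# Compare the KL divergence (computed by quadrature) with the Bregman divergence of
# the log-normalizer A(theta) = theta**2 / 2 for two unit-variance normals.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

theta, theta_p = 0.4, 1.5                    # natural parameters (here equal to the means)
A = lambda t: t**2 / 2
A_grad = lambda t: t

kl, _ = quad(lambda x: norm.pdf(x, theta) *
             (norm.logpdf(x, theta) - norm.logpdf(x, theta_p)), -np.inf, np.inf)
bregman = A(theta_p) - A(theta) - A_grad(theta) * (theta_p - theta)
print(kl, bregman, (theta - theta_p)**2 / 2)   # all approximately 0.605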
Exponential families arise naturally as the answer to the following question: what is themaximum-entropydistribution consistent with given constraints on expected values?
Theinformation entropyof a probability distributiondF(x)can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and bothmeasuresmust be mutuallyabsolutely continuous. Accordingly, we need to pick areference measuredH(x)with the same support asdF(x).
The entropy ofdF(x)relative todH(x)is
S[dF∣dH]=−∫dFdHlogdFdHdH{\displaystyle S[dF\mid dH]=-\int {\frac {dF}{dH}}\log {\frac {dF}{dH}}\,dH}
or
S[dF∣dH]=∫logdHdFdF{\displaystyle S[dF\mid dH]=\int \log {\frac {dH}{dF}}\,dF}
wheredF/dHanddH/dFareRadon–Nikodym derivatives. The ordinary definition of entropy for a discrete distribution supported on a setI, namely
S=−∑i∈Ipilogpi{\displaystyle S=-\sum _{i\in I}p_{i}\log p_{i}}
assumes, though this is seldom pointed out, thatdHis chosen to be thecounting measureonI.
Consider now a collection of observable quantities (random variables)Ti. The probability distributiondFwhose entropy with respect todHis greatest, subject to the conditions that the expected value ofTibe equal toti, is an exponential family withdHas reference measure and(T1, ...,Tn)as sufficient statistic.
The derivation is a simplevariational calculationusingLagrange multipliers. Normalization is imposed by lettingT0= 1be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated toT0.
For examples of such derivations, seeMaximum entropy probability distribution.
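As a concrete, purely illustrative instance of this construction, the sketch below finds the maximum-entropy distribution on {1, …, 6} with mean 4.5 relative to counting measure; the solution is an exponential family with weights proportional to e^{η i}, and only the natural parameter η needs to be solved for:

# Maximum-entropy distribution on {1,...,6} subject to E[X] = 4.5: the answer is an
# exponential tilting p_i proportional to exp(eta * i), with eta chosen to match the mean.
import numpy as np
from scipy.optimize import brentq

support = np.arange(1, 7)
target_mean = 4.5

def mean_given_eta(eta):
    w = np.exp(eta * support)
    return (w / w.sum()) @ support

eta = brentq(lambda e: mean_given_eta(e) - target_mean, -5.0, 5.0)
p = np.exp(eta * support)
p /= p.sum()
print(eta, p, p @ support)                   # maximum-entropy weights; mean ~ 4.5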
According to thePitman–Koopman–Darmoistheorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there asufficient statisticwhose dimension remains bounded as sample size increases.
Less tersely, supposeXk, (wherek= 1, 2, 3, ...n) areindependent, identically distributed random variables. Only if their distribution is one of theexponential familyof distributions is there asufficient statisticT(X1, ...,Xn)whosenumberofscalar componentsdoes not increase as the sample sizenincreases; the statisticTmay be avectoror asingle scalar number, but whatever it is, itssizewill neither grow nor shrink when more data are obtained.
As a counterexample if these conditions are relaxed, the family ofuniform distributions(eitherdiscreteorcontinuous, with either or both bounds unknown) has a sufficient statistic, namely the sample maximum, sample minimum, and sample size, but does not form an exponential family, as the domain varies with the parameters.
Exponential families are also important inBayesian statistics. In Bayesian statistics aprior distributionis multiplied by alikelihood functionand then normalised to produce aposterior distribution. In the case of a likelihood which belongs to an exponential family there exists aconjugate prior, which is often also in an exponential family. A conjugate prior π for the parameterη{\displaystyle {\boldsymbol {\eta }}}of an exponential family
f(x∣η)=h(x)exp[ηTT(x)−A(η)]{\displaystyle f(x\mid {\boldsymbol {\eta }})=h(x)\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}\mathbf {T} (x)-A({\boldsymbol {\eta }})\right]}
is given by
pπ(η∣χ,ν)=f(χ,ν)exp[ηTχ−νA(η)],{\displaystyle p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )=f({\boldsymbol {\chi }},\nu )\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }}-\nu A({\boldsymbol {\eta }})\right],}
or equivalently
pπ(η∣χ,ν)=f(χ,ν)g(η)νexp(ηTχ),χ∈Rs{\displaystyle p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )=f({\boldsymbol {\chi }},\nu )\,g({\boldsymbol {\eta }})^{\nu }\,\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }}\right),\qquad {\boldsymbol {\chi }}\in \mathbb {R} ^{s}}
wheresis the dimension ofη{\displaystyle {\boldsymbol {\eta }}}andν>0{\displaystyle \nu >0}andχ{\displaystyle {\boldsymbol {\chi }}}arehyperparameters(parameters controlling parameters).ν{\displaystyle \nu }corresponds to the effective number of observations that the prior distribution contributes, andχ{\displaystyle {\boldsymbol {\chi }}}corresponds to the total amount that these pseudo-observations contribute to thesufficient statisticover all observations and pseudo-observations.f(χ,ν){\displaystyle f({\boldsymbol {\chi }},\nu )}is anormalization constantthat is automatically determined by the remaining functions and serves to ensure that the given function is aprobability density function(i.e. it isnormalized).A(η){\displaystyle A({\boldsymbol {\eta }})}and equivalentlyg(η){\displaystyle g({\boldsymbol {\eta }})}are the same functions as in the definition of the distribution over which π is the conjugate prior.
A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of aPoisson distributionthe use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. However, if one's belief about the likely value of the theta parameter of a binomial is represented by (say) a bimodal (two-humped) prior distribution, then this cannot be represented by a beta distribution. It can however be represented by using amixture densityas the prior, here a combination of two beta distributions; this is a form ofhyperprior.
An arbitrary likelihood will not belong to an exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods.
To show that the above prior distribution is a conjugate prior, we can derive the posterior.
First, assume that the probability of a single observation follows an exponential family, parameterized using its natural parameter:
pF(x∣η)=h(x)g(η)exp[ηTT(x)]{\displaystyle p_{F}(x\mid {\boldsymbol {\eta }})=h(x)\,g({\boldsymbol {\eta }})\,\exp \left[{\boldsymbol {\eta }}^{\mathsf {T}}\mathbf {T} (x)\right]}
Then, for dataX=(x1,…,xn){\displaystyle \mathbf {X} =(x_{1},\ldots ,x_{n})}, the likelihood is computed as follows:
p(X∣η)=(∏i=1nh(xi))g(η)nexp(ηT∑i=1nT(xi)){\displaystyle p(\mathbf {X} \mid {\boldsymbol {\eta }})=\left(\prod _{i=1}^{n}h(x_{i})\right)g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)}
Then, for the above conjugate prior:
pπ(η∣χ,ν)=f(χ,ν)g(η)νexp(ηTχ)∝g(η)νexp(ηTχ){\displaystyle {\begin{aligned}p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )&=f({\boldsymbol {\chi }},\nu )g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\propto g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\end{aligned}}}
We can then compute the posterior as follows:
p(η∣X,χ,ν)∝p(X∣η)pπ(η∣χ,ν)=(∏i=1nh(xi))g(η)nexp(ηT∑i=1nT(xi))f(χ,ν)g(η)νexp(ηTχ)∝g(η)nexp(ηT∑i=1nT(xi))g(η)νexp(ηTχ)∝g(η)ν+nexp(ηT(χ+∑i=1nT(xi))){\displaystyle {\begin{aligned}p({\boldsymbol {\eta }}\mid \mathbf {X} ,{\boldsymbol {\chi }},\nu )&\propto p(\mathbf {X} \mid {\boldsymbol {\eta }})p_{\pi }({\boldsymbol {\eta }}\mid {\boldsymbol {\chi }},\nu )\\&=\left(\prod _{i=1}^{n}h(x_{i})\right)g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)f({\boldsymbol {\chi }},\nu )g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\\&\propto g({\boldsymbol {\eta }})^{n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)g({\boldsymbol {\eta }})^{\nu }\exp({\boldsymbol {\eta }}^{\mathsf {T}}{\boldsymbol {\chi }})\\&\propto g({\boldsymbol {\eta }})^{\nu +n}\exp \left({\boldsymbol {\eta }}^{\mathsf {T}}\left({\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i})\right)\right)\end{aligned}}}
The last line is thekernelof the posterior distribution, i.e.
p(η∣X,χ,ν)=pπ(η|χ+∑i=1nT(xi),ν+n){\displaystyle p({\boldsymbol {\eta }}\mid \mathbf {X} ,{\boldsymbol {\chi }},\nu )=p_{\pi }\left({\boldsymbol {\eta }}\left|~{\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i}),\nu +n\right.\right)}
This shows that the posterior has the same form as the prior.
The dataXenters into this equationonlyin the expression
T(X)=∑i=1nT(xi),{\displaystyle \mathbf {T} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {T} (x_{i}),}
which is termed thesufficient statisticof the data. That is, the value of the sufficient statistic is sufficient to completely determine the posterior distribution. The actual data points themselves are not needed, and all sets of data points with the same sufficient statistic will have the same distribution. This is important because the dimension of the sufficient statistic does not grow with the data size — it has only as many components as the components ofη{\displaystyle {\boldsymbol {\eta }}}(equivalently, the number of parameters of the distribution of a single data point).
The update equations are as follows:
χ′=χ+T(X)=χ+∑i=1nT(xi)ν′=ν+n{\displaystyle {\begin{aligned}{\boldsymbol {\chi }}'&={\boldsymbol {\chi }}+\mathbf {T} (\mathbf {X} )\\&={\boldsymbol {\chi }}+\sum _{i=1}^{n}\mathbf {T} (x_{i})\\\nu '&=\nu +n\end{aligned}}}
This shows that the update equations can be written simply in terms of the number of data points and thesufficient statisticof the data. This can be seen clearly in the various examples of update equations shown in theconjugate priorpage. Because of the way that the sufficient statistic is computed, it necessarily involves sums of components of the data (in some cases disguised as products or other forms — a product can be written in terms of a sum oflogarithms). The cases where the update equations for particular distributions don't exactly match the above forms are cases where the conjugate prior has been expressed using a differentparameterizationthan the one that produces a conjugate prior of the above form — often specifically because the above form is defined over the natural parameterη{\displaystyle {\boldsymbol {\eta }}}while conjugate priors are usually defined over the actual parameterθ.{\displaystyle {\boldsymbol {\theta }}.}
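As a minimal illustration of these update equations (an assumed example: a Bernoulli likelihood with T(x) = x, whose conjugate prior in the (χ, ν) parameterization corresponds, after a change of variables, to a Beta(χ, ν − χ) prior on p), the sketch below shows that χ′ = χ + Σ T(xᵢ) and ν′ = ν + n reproduce the familiar Beta–Bernoulli update:

# Conjugate-prior update in natural-parameter form for Bernoulli data: the
# hyperparameters (chi, nu) are updated by the summed sufficient statistic and
# the sample size, matching the usual Beta(alpha, beta) posterior update.
import numpy as np

rng = np.random.default_rng(4)
data = rng.binomial(1, 0.7, size=50)         # Bernoulli observations

chi, nu = 2.0, 5.0                           # prior hyperparameters (a Beta(2, 3) prior on p)
chi_post = chi + data.sum()                  # chi' = chi + sum of T(x_i)
nu_post = nu + len(data)                     # nu'  = nu + n

alpha_post, beta_post = chi_post, nu_post - chi_post
print(alpha_post, beta_post)                                   # posterior Beta parameters
print(2.0 + data.sum(), 3.0 + (len(data) - data.sum()))        # same numbers, standard update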
If the likelihoodz|η∼eηzf1(η)f0(z){\displaystyle z|\eta \sim e^{\eta z}f_{1}(\eta )f_{0}(z)}is an exponential family, then the unbiased estimator ofη{\displaystyle \eta }is−ddzlnf0(z){\displaystyle -{\frac {d}{dz}}\ln f_{0}(z)}.[16]
A one-parameter exponential family has a monotone non-decreasing likelihood ratio in thesufficient statisticT(x), provided thatη(θ)is non-decreasing. As a consequence, there exists auniformly most powerful testfortesting the hypothesisH0:θ≥θ0vs.H1:θ<θ0.
Exponential families form the basis for the distribution functions used ingeneralized linear models(GLM), a class of models that encompasses many of the commonly used regression models in statistics. Examples includelogistic regressionusing the binomial family andPoisson regression.
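As a brief illustration of this connection (assuming the third-party statsmodels package; the data here are synthetic), the sketch below fits a Poisson regression, whose canonical log link arises directly from the Poisson exponential family:

# Fit a Poisson GLM with the canonical log link to synthetic count data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=200)
y = rng.poisson(np.exp(0.5 + 0.8 * x))       # counts with a log-linear mean

X = sm.add_constant(x)
model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)                          # approximately [0.5, 0.8]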
|
https://en.wikipedia.org/wiki/Exponential_family
|
In the computer industry,vaporware(orvapourware) is a product, typically computerhardwareorsoftware, that is announced to the general public but is late, never actually manufactured, or officially canceled. Use of the word has broadened to include products such as automobiles.
Vaporware is often announced months or years before its purported release, with few details about its development being released. Developers have been accused of intentionally promoting vaporware to keep customers from switching to competing products that offer more features.[1]Network Worldmagazine called vaporware an "epidemic" in 1989 and blamed the press for not investigating if developers' claims were true. Seven major companies issued a report in 1990 saying that they felt vaporware had hurt the industry's credibility. The United States accused several companies of announcing vaporware early enough to violateantitrust laws, but few have been found guilty.
"Vaporware" was coined by aMicrosoftengineer in 1982 to describe the company'sXenix operating systemand appeared in print at least as early as the May 1983 issue ofSinclair Usermagazine (spelled 'Vapourware' in UK English).[2]It became popular among writers in the industry as a way to describe products they felt took too long to be released.InfoWorldmagazine editor Stewart Alsop helped popularize it by lampooningBill Gateswith aGolden Vaporwareaward for the late release of his company's first version ofWindowsin 1985.
"Vaporware", sometimes synonymous with "vaportalk" in the 1980s,[3]has no single definition. It is generally used to describe a hardware or software product that has been announced, but that the developer is unlikely to release any time soon, if ever.[4][5]
The first reported use of the word was in 1982 by an engineer at the computer software companyMicrosoft.[6]Ann Winblad, president ofOpen Systems Accounting Software, wanted to know if Microsoft planned to stop developing itsXenixoperating systemas some of Open System's products depended on it. She asked two Microsoft software engineers, John Ulett and Mark Ursino, who confirmed that development of Xenix had stopped. "One of them told me, 'Basically, it's vaporware'," she later said. Winblad compared the word to the idea of "selling smoke", implying Microsoft was selling a product it would soon not support.[3]
Winblad described the word to influential computer expertEsther Dyson,[3]who published it for the first time in her monthly newsletterRELease 1.0. In an article titled "Vaporware" in the November 1983 issue ofRELease 1.0, Dyson defined the word as "good ideas incompletely implemented". She described three software products shown atCOMDEXin Las Vegas that year with bombastic advertisements. She stated that demonstrations of the "purported revolutions, breakthroughs and new generations" at the exhibition did not meet those claims.[4][7]
The practice existed before Winblad's account. In a January 1982 review of the newIBM Personal Computer,BYTEfavorably noted that IBM "refused to acknowledge the existence of any product that is not ready to be put on dealers' shelves tomorrow. Although this is frustrating at times, it is a refreshing change from some companies' practice of announcing a product even before its design is finished".[8]When discussingColeco's delay in releasing theAdam,Creative Computingin March 1984 stated that the company "did not invent the common practice of debuting products before they actually exist. In microcomputers, to do so otherwise would be to break with a veritable tradition".[9]Recalling that aLanier Business Productsword processorbecame available immediately after its announcement,Creative Computingwrote that year, "If we were to re-enact that scene today, I wouldn't get my machine for at least six months, maybe a year".[10]
After Dyson's article, the word "vaporware" became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement.[6]InfoWorldmagazine editor Stewart Alsop helped popularize its use by givingBill Gates, then-CEO of Microsoft, aGolden Vaporwareaward for Microsoft's release ofWindowsin 1985, 18 months late. Alsop presented it to Gates at a celebration for the release while the song "The Impossible Dream" played in the background.[11][12]
"Vaporware" took another meaning when it was used to describe a product that did not exist. A new company namedOvation Technologiesannounced itsoffice suiteOvation in 1983.[13]The company invested in an advertising campaign that promoted Ovation as a "great innovation", and showed a demonstration of the program at computer trade shows.[6][14]The demonstration was well received by writers in the press, was featured in a cover story for an industry magazine, and reportedly created anticipation among potential customers.[14]Executives later revealed that Ovation never existed. The company created the fake demonstration in an unsuccessful attempt to raise money to finish their product,[13]and is "widely considered the mother of all vaporware," according to Laurie Flynn ofThe New York Times.[6]
Use of the term spread beyond the computer industry.Newsweekmagazine'sAllan Sloandescribed the manipulation of stocks byYahoo!andAmazon.comas "financial vaporware" in 1997.[15]Popular Sciencemagazine uses a scale ranging from "vaporware" to "bet on it" to describe release dates of new consumer electronics.[16]Car manufacturerGeneral Motors' plans to develop and sell an electric car were called vaporware by an advocacy group in 2008[17]andCar and Drivermagazine retroactively described theVector W8supercar as vaporware in 2017.[18]
The term is like ascarlet letterhung around the neck of software developers. [...] Like any overused and abused word, vaporware has lost its meaning.
A product missing its announced release date, and the labeling of it as vaporware by the press, can be caused by its development taking longer than planned. Most software products are not released on time, according to researchers in 2001 who studied the causes and effects of vaporware;[12]"I hate to say yes, but yes", a Microsoft product manager stated in 1984, adding that "the problem isn't just at Microsoft". The phenomenon is so common thatLotus' release of1-2-3on time in January 1983, three months after announcing it, amazed many.[3]
Software developmentis a complex process, and developers are often uncertain how long it will take to complete any given project.[12][19]Fixing errors in software, for example, can make up a significant portion of its development time, and developers are motivated not to release software with errors because it could damage their reputation with customers. Last-minute design changes are also common.[12]Large organizations seem to have more late projects than smaller ones, and may benefit from hiring individual programmers on contract to write software rather than using in-house development teams. Adding people to a late software project does not help; according toBrooks' Law, doing so increases the delay.[3]
Not all delays in software are the developers' fault. In 1986, theAmerican National Standards InstituteadoptedSQLas the standard database manipulation language. Software companyAshton-Tatewas ready to releasedBase IV, but pushed the release date back to add support for SQL. The company believed that the product would not be competitive without it.[14]As the word became more commonly used by writers in the mid-1980s,InfoWorldmagazine editor James Fawcette wrote that its negative connotations were unfair to developers because of these types of circumstances.[20]
Vaporware also includes announced products that are never released because of financial problems, or because the industry changes during its development.[14]When3D Realmsfirst announcedDuke Nukem Foreverin 1997, the video game was early in its development.[21]The company's previous game released in 1996,Duke Nukem 3D, was a critical and financial success, and customer anticipation for its sequel was high. As personal computer hardware speeds improved at a rapid pace in the late 1990s, it created an "arms race" between companies in the video game industry, according toWired News. 3D Realms repeatedly moved the release date back over the next 12 years to add new, more advanced features. By the time 3D Realms went out of business in 2009 with the game still unreleased,Duke Nukem Foreverhad become synonymous with the word "vaporware" among industry writers.[22][23]The game was revived and released in 2011. However, due to a 13-year period of fan anticipation and design changes in the industry, the game received a mostly negative reception from critics and fans.
A company notorious for vaporware can improve its reputation. In the 1980s, video game makerWestwood Studioswas known for shipping products late. However, by 1993, it had so improved thatComputer Gaming Worldreported "many publishers would assure [us] that a project was going to be completed on timebecauseWestwood was doing it".[24]
Announcing products early, months or years before their release date[25](also called "preannouncing"[26]), has been an effective way for some developers to make their products successful. It can be seen as a legitimate part of their marketing strategy, but is generally not popular with the industry press.[27]The first company to release a product in a given market often gains an advantage. It can set the standard for similar future products, attract a large number of customers, and establish its brand before competitors' products are released.[14]Public relations firm Coakley-Heagerty used an early announcement in 1984 to build interest among potential customers. Its client wasNolan Bushnell, formerly ofAtari Inc., who wanted to promote the newSente Technologies, but his contract with Atari prohibited doing so until a later date. The firm created an advertising campaign (including brochures and a shopping-mall appearance) around a large ambiguous box covered in brown paper to increase curiosity until Sente could be announced.[3]
Early announcements send signals not only to customers and the media, but also to providers of support products,regulatory agencies, financial analysts, investors, and other parties.[27]For example, an early announcement can relay information to vendors, letting them know to prepare marketing and shelf space. It can signal third-party developers to begin work on their own products, and it can be used to persuade a company's investors that they are actively developing new, profitable ideas.[26]Microsoft described this in 1995, duringUnited States v. Microsoft, as "not in fact vaporware, but pre-disclosure" if not done with "a desire to mislead".[6]WhenIBMannounced its Professional Workstation computer in 1986, they noted the lack of third-party programs written for it at the time, signaling those developers to start preparing. Microsoft usually announces information about its operating systems early because third-party developers are dependent on that information to develop their own products.[26]Alsop proposed in 1995 that instead of early public announcements, companies should, usingnondisclosure agreements, privately notify important customers.[6]
A developer can strategically announce a product that is in the early stages of development, or before development begins, to gain competitive advantage over other developers.[28]In addition to the "vaporware" label, this is also called "ambush marketing", and "fear, uncertainty and doubt" (FUD) by the press.[26]If the announcing developer is a large company, this may be done to influence smaller companies to stop development of similar products. The smaller company might decide their product will not be able to compete, and that it is not worth the development costs.[28]It can also be done in response to a competitor's already released product. The goal is to make potential customers believe a second, better product will be released soon. The customer might reconsider buying from the competitor, and wait.[29]In 1994, as customer anticipation increased for Microsoft's new version of Windows (codenamed "Chicago"),Appleannounced a set of upgrades to its ownSystem 7operating system that were not due to be released until nearly two years later.The Wall Street Journalwrote that Apple did this to "blunt Chicago's momentum".[30]
A premature announcement can cause others to respond with their own. WhenVisiCorpannouncedVisi Onin November 1982, it promised to ship the product by spring 1983. The news forcedQuarterdeck Office Systemsto announce in April 1983 that itsDESQwould ship in November 1983. Microsoft responded by announcingWindows 1.0in fall 1983, and Ovation Technologies followed by announcing Ovation in November.InfoWorldnoted in May 1984 that of the four products only Visi On had shipped, albeit more than a year late and with only two supported applications.[3]
my own estimate is that at the time of announcement, 10% of software products don't actually exist [...] Vendors that are unwilling to [prove it exists] shouldn't announce their packages to the press
Industry publications widely accused companies of using early announcements intentionally to gain competitive advantage over others. In his 1989Network Worldarticle,Joe Mohenwrote the practice had become a "vaporware epidemic", and blamed the press for not investigating claims by developers. "If the pharmaceutical industry were this careless, I could announce a cure for cancer today – to a believing press."[31]In 1985 Stewart Alsop began publishing his influential monthlyVaporlist, a list of companies he felt announced their products too early, hoping to dissuade them from the practice;[6]among the entries in January 1988 were aVerbatim Corp.optical drivethat was 30 months late,WordPerfectfor Macintosh (12 months), IBMOS/2 1.1(nine months), and Lotus 1-2-3 for OS/2 and Macintosh (nine and three months late, respectively).[32]WiredMagazine began publishing a similar list in 1997. Seven major software developers—including Ashton-Tate,Hewlett-Packard, andSybase—formed a council in 1990, and issued a report condemning the "vacuous product announcement dubbed vaporware and other misrepresentations of product availability" because they felt it had hurt the industry's credibility.[33]
In the United States, announcing a product that does not exist to gain a competitive advantage is illegal via Section 2 of theSherman Antitrust Actof 1890, but few hardware or software developers have been found guilty of it. The section requires proof that the announcement is both provably false, and has actual or likely market impact.[34]False or misleading announcements designed to influence stock prices are illegal under United Statessecurities fraudlaws.[35]The complex and changing nature of the computer industry, marketing techniques, and lack of precedent for applying these laws to the industry can mean developers are not aware their actions are illegal. TheU.S. Securities and Exchange Commissionissued a statement in 1984 with the goal of reminding companies that securities fraud also applies to "statements that can reasonably be expected to reach investors and the trading markets".[36]
Several companies have been accused in court of using knowingly false announcements to gain market advantage. In 1969, the United States Justice Department accused IBM of doing this in the caseUnited States v. IBM. After IBM's competitor,Control Data Corporation(CDC), released a computer, IBM announced theSystem/360 Model 91. The announcement resulted in a significant reduction in sales of CDC's product. The Justice Department accused IBM of doing this intentionally because the System/360 Model 91 was not released until two years later.[37][38]IBM avoided preannouncing products during the antitrust case, but after the case ended it resumed the practice. The company likely announced itsPCjrin November 1983—four months before general availability in March 1984—to hurt sales of rival home computers during theimportant Christmas sales season.[39][40]In 1985The New York Timeswrote[41]
Because of its position in the industry, an announcement of a future I.B.M. product, or even a rumor of one, is enough to slow competitors' sales. Some critics say that I.B.M. is trying to lock out competitors when it issues statements outlining the general trend of future products. I.B.M. insists the practice is necessary to help customer planning.
The practice was not called "vaporware" at the time, but publications have since used the word to refer specifically to it. Similar cases have been filed againstKodak,AT&T, andXerox.[42]
US District JudgeStanley Sporkinwas a vocal opponent of the practice during his review of the settlement resulting fromUnited States v. Microsoft Corp.in 1994. "Vaporware is a practice that is deceitful on its face and everybody in the business community knows it," said Sporkin.[43]One of the accusations made during the trial was that Microsoft had illegally used early announcements. The review began when three anonymous companies protested the settlement, claiming the government did not thoroughly investigate Microsoft's use of the practice. Specifically, they claimedMicrosoftannounced its Quick Basic 3 program to slow sales of its competitorBorland's recently released Turbo Basic program.[42][6]The review was dismissed for lack of explicit proof.[42]
|
https://en.wikipedia.org/wiki/Vaporware
|
SemEval(SemanticEvaluation) is an ongoing series of evaluations ofcomputational semantic analysissystems; it evolved from theSensevalword senseevaluation series. The evaluations are intended to explore the nature ofmeaningin language. While meaning is intuitive to humans, transferring those intuitions to computational analysis has proved elusive.
This series of evaluations is providing a mechanism to characterize in more precise terms exactly what is necessary to compute in meaning. As such, the evaluations provide an emergent mechanism to identify the problems and solutions for computations with meaning. These exercises have evolved to articulate more of the dimensions that are involved in our use of language. They began with apparently simple attempts to identifyword sensescomputationally. They have evolved to investigate the interrelationships among the elements in a sentence (e.g.,semantic role labeling), relations between sentences (e.g.,coreference), and the nature of what we are saying (semantic relationsandsentiment analysis).
The purpose of the SemEval and Senseval exercises is to evaluate semantic analysis systems. "Semantic analysis" refers to a formal analysis of meaning, and "computational" refers to approaches that in principle support effective implementation.[1]
The first three evaluations, Senseval-1 through Senseval-3, were focused onword sense disambiguation(WSD), each time growing in the number of languages offered in the tasks and in the number of participating teams. Beginning with the fourth workshop, SemEval-2007 (SemEval-1), the nature of the tasks evolved to includesemantic analysistasks outside of word sense disambiguation.[2]
Prompted by the conception of the*SEM conference, the SemEval community decided to hold the evaluation workshops yearly in association with the *SEM conference. It was also decided that not every evaluation task would be run every year; for example, none of the WSD tasks were included in the SemEval-2012 workshop.
From the earliest days, assessing the quality of word sense disambiguation algorithms had been primarily a matter ofintrinsic evaluation, and “almost no attempts had been made to evaluate embedded WSD components”.[3]Only very recently(2006)had extrinsic evaluations begun to provide some evidence for the value of WSD in end-user applications.[4]Until 1990 or so, discussions of the sense disambiguation task focused mainly on illustrative examples rather than comprehensive evaluation. The early 1990s saw the beginnings of more systematic and rigorous intrinsic evaluations, including more formal experimentation on small sets of ambiguous words.[5]
In April 1997, Martha Palmer and Marc Light organized a workshop entitledTagging with Lexical Semantics: Why, What, and How?in conjunction with the Conference on Applied Natural Language Processing.[6]At the time, there was a clear recognition that manually annotatedcorporahad revolutionized other areas of NLP, such aspart-of-speech taggingandparsing, and that corpus-driven approaches had the potential to revolutionize automatic semantic analysis as well.[7]Kilgarriff recalled that there was "a high degree of consensus that the field needed evaluation", and several practical proposals by Resnik and Yarowsky kicked off a discussion that led to the creation of the Senseval evaluation exercises.[8][9][10]
After SemEval-2010, many participants felt that the 3-year cycle was a long wait; many other shared tasks, such as theConference on Natural Language Learning(CoNLL) andRecognizing Textual Entailment(RTE), run annually. For this reason, the SemEval coordinators gave task organizers the opportunity to choose between a 2-year and a 3-year cycle.[11]Although the votes within the SemEval community favored a 3-year cycle, the organizers and coordinators settled on splitting the SemEval tasks into two evaluation workshops, a decision triggered by the introduction of the new*SEM conference. The SemEval organizers thought it would be appropriate to associate the event with the *SEM conference and collocate the SemEval workshop with it. They received very positive responses (from the task coordinators, organizers, and participants) about the association with the yearly *SEM conference, and 8 tasks were willing to switch to 2012. Thus were born SemEval-2012 and SemEval-2013. The current plan is to switch to a yearly SemEval schedule to associate it with the *SEM conference, though not every task needs to run every year.[12]
The framework of the SemEval/Senseval evaluation workshops emulates theMessage Understanding Conferences(MUCs) and other evaluation workshops run by ARPA (Advanced Research Projects Agency, renamed theDefense Advanced Research Projects Agency (DARPA)).
Stages of SemEval/Senseval evaluation workshops[14]
Senseval-1 and Senseval-2 focused on evaluating WSD systems for major languages for which corpora and computerized dictionaries were available. Senseval-3 looked beyond thelexemeand started to evaluate systems that addressed wider areas of semantics, such as Semantic Roles (technically known asTheta rolesin formal semantics) andLogic FormTransformation (in which the semantics of phrases, clauses or sentences are commonly represented infirst-order logic forms), and it explored the performance of semantic analysis onMachine translation.
As the types of different computational semantic systems grew beyond the coverage of WSD, Senseval evolved into SemEval, where more aspects of computational semantic systems were evaluated.
The SemEval exercises provide a mechanism for examining issues in semantic analysis of texts. The topics of interest fall short of the logical rigor that is found in formal computational semantics, attempting to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.
The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses, but rather are characterized by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.
The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of the types of semantic analysis constitutes the most outstanding research issue.
For example, in theword sense inductionanddisambiguationtask, there are three separate phases:
The unsupervised evaluation for WSI considered two types of evaluation: V Measure (Rosenberg and Hirschberg, 2007) and paired F-Score (Artiles et al., 2009). This evaluation follows the supervised evaluation of the SemEval-2007 WSI task (Agirre and Soroa, 2007).
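As a concrete illustration of these two clustering-based measures, the following Python sketch computes the V Measure with scikit-learn's v_measure_score and a simple paired F-Score over instance pairs. The toy gold/induced labelings are invented for illustration and do not come from any SemEval dataset.

from itertools import combinations
from sklearn.metrics import v_measure_score

def paired_f_score(gold, induced):
    # Pairs of instances placed together by the induced clustering,
    # scored against pairs sharing the same gold sense (after Artiles et al., 2009).
    def same_label_pairs(labels):
        return {(i, j) for i, j in combinations(range(len(labels)), 2)
                if labels[i] == labels[j]}
    gold_pairs, induced_pairs = same_label_pairs(gold), same_label_pairs(induced)
    hits = len(gold_pairs & induced_pairs)
    if hits == 0:
        return 0.0
    precision = hits / len(induced_pairs)
    recall = hits / len(gold_pairs)
    return 2 * precision * recall / (precision + recall)

gold    = [1, 1, 2, 2, 2]   # gold-standard senses of five instances
induced = [1, 1, 1, 2, 2]   # clusters produced by a WSI system
print(v_measure_score(gold, induced), paired_f_score(gold, induced))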
The tables below reflect the workshop growth from Senseval to SemEval and give an overview of which areas of computational semantics were evaluated throughout the Senseval/SemEval workshops.
The Multilingual WSD task was introduced for the SemEval-2013 workshop.[17]The task is aimed at evaluating Word Sense Disambiguation systems in a multilingual scenario. Unlike similar tasks such as cross-lingual WSD or the multilingual lexical substitution task, where no fixed sense inventory is specified, Multilingual WSD uses BabelNet as its sense inventory. Prior to the development of BabelNet, a bilingual lexical-sample WSD evaluation task was carried out in SemEval-2007 on Chinese-English bitexts.[18]
The Cross-lingual WSD task was introduced in the SemEval-2007 evaluation workshop and re-proposed in the SemEval-2013 workshop.[19]To facilitate the integration of WSD systems into other Natural Language Processing (NLP) applications, such as Machine Translation and multilingual Information Retrieval, the cross-lingual WSD evaluation task was introduced as a language-independent and knowledge-lean approach to WSD. The task is an unsupervised Word Sense Disambiguation task for English nouns by means of parallel corpora. It follows the lexical-sample variant of the Classic WSD task, restricted to only 20 polysemous nouns.
It is worth noting that SemEval-2014 had only two tasks that were multilingual/cross-lingual, i.e. (i) the L2 Writing Assistant task, a cross-lingual WSD task that includes English, Spanish, German, French and Dutch, and (ii) the Multilingual Semantic Textual Similarity task, which evaluates systems on English and Spanish texts.
The major tasks in semantic evaluation include the following areas ofnatural language processing. This list is expected to grow as the field progresses.[20]
The following table shows the areas of study that were involved in Senseval-1 through SemEval-2014 (S refers to Senseval and SE refers to SemEval, e.g. S1 refers to Senseval-1 and SE07 refers to SemEval-2007):
SemEval tasks have created many types of semantic annotations, each with various schemas. In SemEval-2015, the organizers decided to group tasks together into several tracks. These tracks are defined by the type of semantic annotation that the tasks aim to achieve.[21]The following list gives the types of semantic annotation involved in the SemEval workshops:
The allocation of a task to a track is flexible; a task might develop into its own track, e.g. the taxonomy evaluation task in SemEval-2015 was under the Learning Semantic Relations track, while in SemEval-2016 there was a dedicated Semantic Taxonomy track with a new Semantic Taxonomy Enrichment task.[22][23]
|
https://en.wikipedia.org/wiki/SemEval
|
Instatistics,projection pursuit regression (PPR)is astatistical modeldeveloped byJerome H. FriedmanandWerner Stuetzlethat extendsadditive models. This model adapts the additive models in that it first projects thedata matrixofexplanatory variablesin the optimal direction before applying smoothing functions to these explanatory variables.
The model consists of linear combinations of ridge functions: non-linear transformations of linear combinations of the explanatory variables. The basic model takes the form

y_i = \sum_{j=1}^{r} f_j(\beta_j^{T} x_i)
where x_i is a 1 × p row of the design matrix containing the explanatory variables for example i, y_i is a 1 × 1 prediction, {β_j} is a collection of r vectors (each a unit vector of length p) which contain the unknown parameters, {f_j} is a collection of r initially unknown smooth functions that map from ℝ → ℝ, and r is a hyperparameter. Good values for r can be determined through cross-validation or a forward stage-wise strategy which stops when the model fit cannot be significantly improved. As r approaches infinity and with an appropriate set of functions {f_j}, the PPR model is a universal estimator, as it can approximate any continuous function in ℝ^p.
For a given set of data \{(y_i, x_i)\}_{i=1}^{n}, the goal is to minimize the error function

S = \sum_{i=1}^{n} \left[ y_i - \sum_{j=1}^{r} f_j(\beta_j^{T} x_i) \right]^2
over the functions f_j and vectors β_j. No method exists for solving over all variables at once, but it can be solved via alternating optimization. First, consider each (f_j, β_j) pair individually: let all other parameters be fixed, and find a "residual", the variance of the output not accounted for by those other parameters, given by

r_i = y_i - \sum_{l \neq j} f_l(\beta_l^{T} x_i)
The task of minimizing the error function now reduces to solving

\min_{f_j,\, \beta_j} \; S' = \sum_{i=1}^{n} \left[ r_i - f_j(\beta_j^{T} x_i) \right]^2
for each j in turn. Typically new (f_j, β_j) pairs are added to the model in a forward stage-wise fashion.
Aside: Previously fitted pairs can be readjusted after new fit-pairs are determined by an algorithm known asbackfitting, which entails reconsidering a previous pair, recalculating the residual given how other pairs have changed, refitting to account for that new information, and then cycling through all fit-pairs this way until parameters converge. This process typically results in a model that performs better with fewer fit-pairs, though it takes longer to train, and it is usually possible to achieve the same performance by skipping backfitting and simply adding more fits to the model (increasingr).
Solving the simplified error function to determine an (f_j, β_j) pair can be done with alternating optimization, where first a random β_j is used to project X into 1D space, and then the optimal f_j is found to describe the relationship between that projection and the residuals via your favorite scatter plot regression method. Then, if f_j is held constant and assumed to be once differentiable, the optimal updated weights β_j can be found via the Gauss–Newton method—a quasi-Newton method in which the part of the Hessian involving the second derivative is discarded. To derive this, first Taylor expand

f_j(\beta_j^{T} x_i) \approx f_j(\beta_{j,\mathrm{old}}^{T} x_i) + \dot{f}_j(\beta_{j,\mathrm{old}}^{T} x_i)\,(\beta_j^{T} x_i - \beta_{j,\mathrm{old}}^{T} x_i),

then plug the expansion back into the simplified error function S' and do some algebraic manipulation to put it in the form

S' \approx \sum_{i=1}^{n} \dot{f}_j(\beta_{j,\mathrm{old}}^{T} x_i)^2 \left[ \left( \beta_{j,\mathrm{old}}^{T} x_i + \frac{r_i - f_j(\beta_{j,\mathrm{old}}^{T} x_i)}{\dot{f}_j(\beta_{j,\mathrm{old}}^{T} x_i)} \right) - \beta_j^{T} x_i \right]^2.
This is a weighted least squares problem. If we solve for all weights w and put them in a diagonal matrix W, stack all the new targets b̂ into a vector, and use the full data matrix X instead of a single example x_i, then the optimal β_j is given by the closed form

\beta_j = (X^{T} W X)^{-1} X^{T} W \hat{b}.
Use this updated β_j to find a new projection of X and refit f_j to the new scatter plot. Then use that new f_j to update β_j by re-solving the above, and continue this alternating process until (f_j, β_j) converges.
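A minimal NumPy sketch of this forward stage-wise, alternating procedure is shown below. It is an illustration only: the cubic polynomial used as the scatter-plot smoother for each f_j, the fixed iteration counts, and the small ridge term added for numerical stability are assumptions made for the sketch rather than part of the method's specification.

import numpy as np

def fit_ppr(X, y, r=3, n_alternations=10, rng=None):
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    residual = y.astype(float).copy()
    terms = []                                   # fitted (beta_j, f_j) pairs
    for _ in range(r):                           # forward stage-wise: one ridge term at a time
        beta = rng.standard_normal(p)
        beta /= np.linalg.norm(beta)
        for _ in range(n_alternations):
            z = X @ beta                         # project the data onto beta_j
            f = np.poly1d(np.polyfit(z, residual, deg=3))   # smoother standing in for f_j
            fdot = f.deriv()
            w = fdot(z) ** 2 + 1e-12             # Gauss-Newton weights
            b_hat = z + (residual - f(z)) / (fdot(z) + 1e-12)  # working targets
            # weighted least squares update for beta_j (closed form)
            beta = np.linalg.solve(X.T @ (X * w[:, None]) + 1e-8 * np.eye(p),
                                   X.T @ (w * b_hat))
            beta /= np.linalg.norm(beta)
        z = X @ beta
        f = np.poly1d(np.polyfit(z, residual, deg=3))
        residual -= f(z)                         # pass the unexplained part to the next term
        terms.append((beta, f))
    return terms

def predict_ppr(terms, X):
    return sum(f(X @ beta) for beta, f in terms)

In practice one would use a proper smoother (for example a spline) and choose r by cross-validation, as described above.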
It has been shown that the convergence rate, the bias and the variance are affected by the estimation of β_j and f_j.
The PPR model takes the form of a basic additive model but with the additional β_j component, so each f_j fits a scatter plot of β_j^{T} X^{T} vs the residual (unexplained variance) during training rather than using the raw inputs themselves. This constrains the problem of finding each f_j to low dimension, making it solvable with common least squares or spline fitting methods and sidestepping the curse of dimensionality during training. Because f_j is taken of a projection of X, the result looks like a "ridge" orthogonal to the projection dimension, so {f_j} are often called "ridge functions". The directions β_j are chosen to optimize the fit of their corresponding ridge functions.
Note that because PPR attempts to fit projections of the data, it can be difficult to interpret the fitted model as a whole, because each input variable has been accounted for in a complex and multifaceted way. This can make the model more useful for prediction than for understanding the data, though visualizing individual ridge functions and considering which projections the model is discovering can yield some insight.
Both projection pursuit regression and fully connected neural networks with a single hidden layer project the input vector onto a one-dimensional hyperplane and then apply a nonlinear transformation of the input variables, the results of which are then added in a linear fashion. Thus both follow the same steps to overcome the curse of dimensionality. The main difference is that the functions f_j being fitted in PPR can be different for each combination of input variables and are estimated one at a time and then updated with the weights, whereas in a neural network these are all specified upfront and estimated simultaneously.
Thus, in PPR the transformations of variables are data-driven, whereas in a single-layer neural network these transformations are fixed.
|
https://en.wikipedia.org/wiki/Projection_pursuit_regression
|
Classified information is confidential material that a government deems to be sensitive information which must be protected from unauthorized disclosure and which requires special handling and dissemination controls. Access is restricted by law or regulation to particular groups of individuals with the necessary security clearance and a need to know.
A formal security clearance is required to view or handle classified material. The clearance process requires a satisfactory background investigation. Documents and other information must be properly marked "by the author" with one of several (hierarchical) levels of sensitivity—e.g. Confidential (C), Secret (S), and Top Secret (TS). All classified documents require designation markings, usually located on the cover sheet and in the header and footer of each page. The choice of level is based on an impact assessment; governments have their own criteria, including how to determine the classification of an information asset and rules on how to protect information classified at each level. This process often includes security clearances for personnel handling the information. Mishandling of the material can incur criminal penalties.
Somecorporationsand non-government organizations also assign levels of protection to their private information, either from a desire to protecttrade secrets, or because of laws and regulations governing various matters such aspersonal privacy, sealed legal proceedings and the timing of financial information releases.
With the passage of time much classified information can become less sensitive, and may be declassified and made public. Since the late twentieth century there has beenfreedom of information legislationin some countries, whereby the public is deemed to have the right to all information that is not considered to be damaging if released. Sometimes documents are released with information still considered confidential obscured (redacted), as in the adjacent example.
The question exists among some political science and legal experts whether the definition of classified ought to be information that would cause injury to the cause of justice, human rights, etc., rather than information that would cause injury to the national interest; to distinguish when classifying information is in the collective best interest of a just society, or merely the best interest of a society acting unjustly to protect its people, government, or administrative officials from legitimate recourses consistent with a fair and justsocial contract.
The purpose of classification is to protect information. Higher classifications protect information that might endangernational security. Classification formalises what constitutes a "state secret" and accords different levels of protection based on the expected damage the information might cause in the wrong hands.
However, classified information is frequently "leaked" to reporters by officials for political purposes. Several U.S. presidents have leaked sensitive information to influence public opinion.[2][3]
Former government intelligence officials are usually able to retain their security clearance, but it is a privilege not a right, with the President being the grantor.[4]
Although the classification systems vary from country to country, most have levels corresponding to the following British definitions (from the highest level to lowest).
Top Secret is the highest level of classified information.[5]Information is further compartmented so that specific access using a code word after Top Secret is a legal way to hide collective and important information.[6]Such material would cause "exceptionally grave damage" to national security if made publicly available.[7]Prior to 1942, the United Kingdom and other members of the British Empire used Most Secret, but this was later changed to match the United States' category name of Top Secret in order to simplify Allied interoperability. The unauthorized disclosure of Top Secret (TS) information is expected to cause exceptionally grave damage to national security.
The Washington Postreported in an investigation entitled "Top Secret America" that, as of 2010, "An estimated 854,000 people ... hold top-secret security clearances" in the United States.[8]
It is desired that no document be released which refers toexperiments with humansand might have adverse effect on public opinion or result in legal suits. Documents covering such work field should be classified "secret".
Secretmaterial would cause "serious damage" to national security if it were publicly available.[11]
In the United States, operational "Secret" information can be marked with an additional "LimDis", to limit distribution.
Confidentialmaterial would cause "damage" or be prejudicial to national security if publicly available.[12]
Restricted material would cause "undesirable effects" if publicly available. Some countries do not have such a classification in public sectors, such as commercial industries. Such a level is also known as "Private Information".
Official(equivalent to U.S. DOD classificationControlled Unclassified Informationor CUI) material forms the generality of government business, public service delivery and commercial activity. This includes a diverse range of information, of varying sensitivities, and with differing consequences resulting from compromise or loss. Official information must be secured against athreat modelthat is broadly similar to that faced by a large private company.
The Official Sensitive classification replaced the Restricted classification in April 2014 in the UK; Official indicates the previously used Unclassified marking.[13]
Unclassified is technically not a classification level, though it is a feature of some classification schemes, used for government documents that do not merit a particular classification or which have been declassified. This is because the information is low-impact, and therefore does not require any special protection, such as vetting of personnel.
A plethora of pseudo-classifications exist under this category.[citation needed]
Clearanceis a general classification, that comprises a variety of rules controlling the level of permission required to view some classified information, and how it must be stored, transmitted, and destroyed. Additionally, access is restricted on a "need to know" basis. Simply possessing a clearance does not automatically authorize the individual to view all material classified at that level or below that level. The individual must present a legitimate "need to know" in addition to the proper level of clearance.
In addition to the general risk-based classification levels, additionalcompartmented constraints on accessexist, such as (in the U.S.) Special Intelligence (SI), which protects intelligence sources and methods, No Foreign dissemination (NoForn), which restricts dissemination to U.S. nationals, and Originator Controlled dissemination (OrCon), which ensures that the originator can track possessors of the information. Information in these compartments is usually marked with specific keywords in addition to the classification level.
Government information aboutnuclear weaponsoften has an additional marking to show it contains such information (CNWDI).
When a government agency or group shares information with an agency or group of another country's government, they will generally employ a special classification scheme that both parties have previously agreed to honour.
For example, the marking Atomal, is applied to U.S. Restricted Data or Formerly Restricted Data and United Kingdom Atomic information that has been released to NATO. Atomal information is marked COSMIC Top Secret Atomal (CTSA), NATO Secret Atomal (NSAT), or NATO Confidential Atomal (NCA). BALK and BOHEMIA are also used.
For example, sensitive information shared amongstNATOallies has four levels of security classification; from most to least classified:[14][15]
A special case exists with regard to NATO Unclassified (NU) information. Documents with this marking are NATO property (copyright) and must not be made public without NATO permission.
COSMIC is an acronym for "Control of Secret Material in an International Command".[17]
Most countries employ some sort of classification system for certain government information. For example, inCanada, information that the U.S. would classify SBU (Sensitive but Unclassified) is called "protected" and further subcategorised into levels A, B, and C.
On 19 July 2011, the National Security (NS) classification marking scheme and the Non-National Security (NNS) classification marking scheme in Australia were unified into one structure.
As of 2018, the policy detailing howAustralian governmententities handle classified information is defined in the Protective Security Policy Framework (PSPF). The PSPF is published by theAttorney-General's Departmentand covers security governance,information security, personal security, andphysical security. A security classification can be applied to the information itself or an asset that holds information e.g., aUSBorlaptop.[23]
The Australian Government uses four security classifications: OFFICIAL: Sensitive, PROTECTED, SECRET and TOP SECRET. The relevant security classification is based on the likely damage resulting from compromise of the information's confidentiality.
All other information from business operations and services requires a routine level of protection and is treated as OFFICIAL. Information that does not form part of official duty is treated as UNOFFICIAL.
OFFICIAL and UNOFFICIAL are not security classifications and are not mandatory markings.
Caveats are a warning that the information has special protections in addition to those indicated by the security classification of PROTECTED or higher (or in the case of the NATIONAL CABINET caveat, OFFICIAL: Sensitive or higher). Australia has four caveats:
Codewords are primarily used within the national security community. Each codeword identifies a special need-to-knowcompartment.
Foreign government markings are applied to information created by Australian agencies from foreign source information. Foreign government marking caveats require protection at least equivalent to that required by the foreign government providing the source information.
Special handling instructions are used to indicate particular precautions for information handling. They include:
A releasability caveat restricts information based oncitizenship. The three in use are:
Additionally, the PSPF outlines Information Management Markers (IMM) as a way for entities to identify information that is subject to non-security related restrictions on access and use. These are:
There are three levels ofdocument classificationunder Brazilian Law No. 12.527, theAccess to Information Act:[24]ultrassecreto(top secret),secreto(secret) andreservado(restricted).
A top secret (ultrassecreto) government-issued document may be classified for a period of 25 years, which may be extended up to another 25 years.[25]Thus, no document remains classified for more than 50 years. This is mandated by the 2011 Information Access Law (Lei de Acesso à Informação), a change from the previous rule, under which documents could have their classification time length renewed indefinitely, effectively shuttering state secrets from the public. The 2011 law applies retroactively to existing documents.
The government of Canada employs two main types of sensitive information designation: Classified and Protected. The access and protection of both types of information is governed by theSecurity of Information Act, effective 24 December 2001, replacing theOfficial Secrets Act 1981.[26]To access the information, a person must have the appropriate security clearance and the need to know.
In addition, the caveat "Canadian Eyes Only" is used to restrict access to Classified or Protected information only to Canadian citizens with the appropriate security clearance and need to know.[27]
SOI is not a classification of dataper se. It is defined under theSecurity of Information Act, and unauthorised release of such information constitutes a higher breach of trust, with a penalty of up to life imprisonment if the information is shared with a foreign entity or terrorist group.
SOIs include:
In February 2025, the Department of National Defence announced a new category of Persons Permanently Bound to Security (PPBS). The protection would apply to some units, sections or elements, and select positions (both current and former), with access to sensitive Special Operational Information (SOI) for national defence and intelligence work. If a unit or organization routinely handles SOI, all members of that unit will be automatically bound to secrecy. If an individual has direct access to SOI deemed to be integral to national security, that person may be recommended for PPBS designation. The designation is for life, and breaches are punishable by imprisonment.[28]
Classified information can be designatedTop Secret,SecretorConfidential. These classifications are only used on matters of national interest.
Protected information is not classified. It pertains to any sensitive information that does not relate to national security and cannot be disclosed under the access and privacy legislation because of the potential injury to particular public or private interests.[29][30]
Federal Cabinet (King's Privy Council for Canada) papers are either protected (e.g., overhead slides prepared to make presentations to Cabinet) or classified (e.g., draft legislation, certain memos).[31]
TheCriminal Lawof thePeople's Republic of China(which is not operative in the special administrative regions ofHong KongandMacau) makes it a crime to release a state secret. Regulation and enforcement is carried out by theNational Administration for the Protection of State Secrets.
Under the 1989 "Law on Guarding State Secrets",[32]state secrets are defined as those that concern:
Secrets can be classified into three categories:
In France, classified information is defined by article 413-9 of the Penal Code.[34]The three levels of military classification are
Less sensitive information is "protected". The levels are
A further caveat,spécial France(reserved France) restricts the document to French citizens (in its entirety or by extracts). This is not a classification level.
Declassification of documents can be done by theCommission consultative du secret de la défense nationale(CCSDN), an independent authority. Transfer of classified information is done with double envelopes, the outer layer being plastified and numbered, and the inner in strong paper. Reception of the document involves examination of the physical integrity of the container and registration of the document. In foreign countries, the document must be transferred through specialised military mail ordiplomatic bag. Transport is done by an authorised conveyor or habilitated person for mail under 20 kg. The letter must bear a seal mentioning "Par Valise Accompagnee-Sacoche". Once a year, ministers have an inventory of classified information and supports by competent authorities.
Once their usage period is expired, documents are transferred to archives, where they are either destroyed (by incineration, crushing, or overvoltage), or stored.
In case of unauthorized release of classified information, competent authorities are theMinistry of Interior, the 'Haut fonctionnaire de défense et de sécurité("high civil servant for defence and security") of the relevant ministry, and the General secretary for National Defence. Violation of such secrets is an offence punishable with seven years of imprisonment and a 100,000-euro fine; if the offence is committed by imprudence or negligence, the penalties are three years of imprisonment and a 45,000-euro fine.
TheSecurity Bureauis responsible for developing policies in regards to the protection and handling of confidential government information. In general, the system used in Hong Kong is very similar to the UK system, developed from thecolonial era of Hong Kong.
Four classifications exists in Hong Kong, from highest to lowest in sensitivity:[35]
Restricted documents are not classifiedper se, but only those who have a need to know will have access to such information, in accordance with thePersonal Data (Privacy) Ordinance.[36]
New Zealanduses the Restricted classification, which is lower than Confidential. People may be given access to Restricted information on the strength of an authorisation by their Head of department, without being subjected to the backgroundvettingassociated with Confidential, Secret and Top Secret clearances. New Zealand's security classifications and the national-harm requirements associated with their use are roughly similar to those of the United States.
In addition to national security classifications there are two additional security classifications, In Confidence and Sensitive, which are used to protect information of a policy and privacy nature. There are also a number of information markings used within ministries and departments of the government, to indicate, for example, that information should not be released outside the originating ministry.
Because of strict privacy requirements around personal information, personnel files are controlled in all parts of the public and private sectors. Information relating to the security vetting of an individual is usually classified at the In Confidence level.
InRomania, classified information is referred to as "state secrets" (secrete de stat) and is defined by the Penal Code as "documents and data that manifestly appear to have this status or have been declared or qualified as such by decision of Government".[37]There are three levels of classification: "Secret" (Secret/S), "Top Secret" (Strict Secret/SS), and "Top Secret of Particular Importance" (Strict secret de interes deosebit/SSID).[38]The levels are set by theRomanian Intelligence Serviceand must be aligned with NATO regulations—in case of conflicting regulations, the latter are applied with priority. Dissemination of classified information to foreign agents or powers is punishable by up to life imprisonment, if such dissemination threatens Romania's national security.[39]
In theRussian Federation, a state secret (Государственная тайна) is information protected by the state on its military, foreign policy, economic, intelligence, counterintelligence, operational and investigative and other activities, dissemination of which could harm state security.
The Swedish classification has been updated due to increased NATO/PfP cooperation. All classified defence documents will now have both a Swedish classification (Kvalificerat hemlig,Hemlig,KonfidentiellorBegränsat Hemlig), and an English classification (Top Secret, Secret, Confidential, or Restricted).[citation needed]The termskyddad identitet, "protected identity", is used in the case of protection of a threatened person, basically implying "secret identity", accessible only to certain members of the police force and explicitly authorised officials.
At the federal level, classified information in Switzerland is assigned one of three levels, which are from lowest to highest: Internal, Confidential, Secret.[40]Respectively, these are, in German,Intern,Vertraulich,Geheim; in French,Interne,Confidentiel,Secret; in Italian,Ad Uso Interno,Confidenziale,Segreto. As in other countries, the choice of classification depends on the potential impact that the unauthorised release of the classified document would have on Switzerland, the federal authorities or the authorities of a foreign government.
According to the Ordinance on the Protection of Federal Information, information is classified as Internal if its "disclosure to unauthorised persons may be disadvantageous to national interests."[40]Information classified as Confidential could, if disclosed, compromise "the free formation of opinions and decision-making ofthe Federal Assemblyorthe Federal Council," jeopardise national monetary/economic policy, put the population at risk or adversely affect the operations of theSwiss Armed Forces. Finally, the unauthorised release of Secret information could seriously compromise the ability of either the Federal Assembly or the Federal Council to function or impede the ability of the Federal Government or the Armed Forces to act.
According to the related regulations inTurkey, there are four levels of document classification:[41]çok gizli(top secret),gizli(secret),özel(confidential) andhizmete özel(restricted). The fifth istasnif dışı, which means unclassified.
Until 2013, theUnited Kingdomused five levels of classification—from lowest to highest, they were: Protect, Restricted, Confidential, Secret and Top Secret (formerly Most Secret). TheCabinet Officeprovides guidance on how to protect information, including thesecurity clearancesrequired for personnel. Staff may be required to sign to confirm their understanding and acceptance of theOfficial Secrets Acts 1911 to 1989, although the Act applies regardless of signature. Protect is not in itself a security protective marking level (such as Restricted or greater), but is used to indicate information which should not be disclosed because, for instance, the document contains tax, national insurance, or other personal information.
Government documents without a classification may be marked as Unclassified or Not Protectively Marked.[42]
This system was replaced by theGovernment Security Classifications Policy, which has a simpler model: Top Secret, Secret, and Official from April 2014.[13]Official Sensitive is a security marking which may be followed by one of three authorised descriptors: Commercial, LocSen (location sensitive) or Personal. Secret and Top Secret may include a caveat such as UK Eyes Only.
Scientific discoveries may also be classified via the D-Notice system if they are deemed to have applications relevant to national security. These may later emerge when technology improves; for example, the specialised processors and routing engines used in graphics cards are loosely based on top secret military chips designed for code breaking and image processing.
They may or may not have safeguards built in to generate errors when specific tasks are attempted and this is invariably independent of the card's operating system.[citation needed]
The U.S. classification system is currently established underExecutive Order 13526and has three levels of classification—Confidential, Secret, and Top Secret. The U.S. had a Restricted level duringWorld War IIbut no longer does. U.S. regulations state that information received from other countries at the Restricted level should be handled as Confidential. A variety of markings are used for material that is not classified, but whose distribution is limited administratively or by other laws, e.g.,For Official Use Only(FOUO), orsensitive but unclassified(SBU). The Atomic Energy Act of 1954 provides for the protection of information related to the design of nuclear weapons. The term "Restricted Data" is used to denote certain nuclear technology. Information about the storage, use or handling of nuclear material or weapons is marked "Formerly Restricted Data". These designations are used in addition to level markings (Confidential, Secret and Top Secret). Information protected by the Atomic Energy Act is protected by law and information classified under the Executive Order is protected by Executive privilege.
The U.S. government insists it is "not appropriate" for a court to question whether any document is legally classified.[43]In the1973 trial of Daniel Ellsberg for releasing the Pentagon Papers, the judge did not allow any testimony from Ellsberg, claiming it was "irrelevant", because the assigned classification could not be challenged. The charges against Ellsberg were ultimately dismissed after it was revealed that the government had broken the law in secretly breaking into the office of Ellsberg's psychiatrist and in tapping his telephone without a warrant. Ellsberg insists that the legal situation in the U.S. in 2014 is worse than it was in 1973, andEdward Snowdencould not get a fair trial.[44]TheState Secrets Protection Actof 2008 might have given judges the authority to review such questionsin camera, but the bill was not passed.[43]
When a government agency acquires classified information through covert means, or designates a program as classified, the agency asserts "ownership" of that information and considers any public availability of it to be a violation of their ownership—even if the same information was acquired independently through "parallel reporting" by the press or others. For example, although theCIA drone programhas been widely discussed in public since the early 2000s, and reporters personally observed and reported on drone missile strikes, the CIA still considers the very existence of the program to be classified in its entirety, and any public discussion of it technically constitutes exposure of classified information. "Parallel reporting" was an issue in determining what constitutes "classified" information during theHillary Clinton email controversywhenAssistant Secretary of State for Legislative AffairsJulia Frifieldnoted, "When policy officials obtain information from open sources, 'think tanks,' experts, foreign government officials, or others, the fact that some of the information may also have been available through intelligence channels does not mean that the information is necessarily classified."[45][46][47]
Strictly Secret and Confidential
Secret
Confidential
Reserved
US, French, EU, Japan "Confidential" marking to be handled as SECRET.[49]
Top Secret
Highly Secret
Secret
Internal
Foreign Service:Fortroligt(thin black border)
Top Secret
Secret
Confidential
For Official Use Only
Top Secret
Secret
Confidential
Limited Use
Top Secret
Secret
Confidential
Restricted Distribution
Absolute Secret
Secret
Confidential
Service Document
Class 1 Secret
Class 2 Secret
Class 3 Secret
Confidential
Philippines(Tagalog)
Matinding Lihim
Mahigpit na Lihim
Lihim
Ipinagbabawal
Strict Secret of Special Importance
Secret for Service Use
Of Special Importance (variant: Completely Secret)
Completely Secret (variant: Secret)
Secret (variant: Not To Be Disclosed (Confidential))
For Official Use
State Secret
Strictly Confidential
Confidential
Internal
Most Secret
Very Secret
Secret
Restricted
Top Secret
Secret
Confidential
Restricted
Table source: US Department of Defense (January 1995). "National Industrial Security Program - Operating Manual (DoD 5220.22-M)" (PDF). pp. B1–B3 (PDF pages 121–123). Archived (PDF) from the original on 27 July 2019. Retrieved 27 July 2019.
Privatecorporationsoften require writtenconfidentiality agreementsand conductbackground checkson candidates for sensitive positions.[53]In the U.S., theEmployee Polygraph Protection Actprohibits private employers from requiring lie detector tests, but there are a few exceptions. Policies dictating methods for marking and safeguarding company-sensitive information (e.g. "IBM Confidential") are common and some companies have more than one level. Such information is protected undertrade secretlaws. New product development teams are often sequestered and forbidden to share information about their efforts with un-cleared fellow employees, the originalApple Macintoshproject being a famous example. Other activities, such asmergersandfinancial reportpreparation generally involve similar restrictions. However, corporate security generally lacks the elaborate hierarchical clearance and sensitivity structures and the harsh criminal sanctions that give government classification systems their particular tone.
TheTraffic Light Protocol[54][55]was developed by theGroup of Eightcountries to enable the sharing of sensitive information between government agencies and corporations. This protocol has now been accepted as a model for trusted information exchange by over 30 other countries. The protocol provides for four "information sharing levels" for the handling of sensitive information.
|
https://en.wikipedia.org/wiki/Classified_information#Canada
|
LightGBM, short forLight Gradient-Boosting Machine, is afree and open-sourcedistributedgradient-boostingframework formachine learning, originally developed byMicrosoft.[4][5]It is based ondecision treealgorithms and used forranking,classificationand other machine learning tasks. The development focus is on performance and scalability.
The LightGBM framework supports different algorithms including GBT, GBDT, GBRT, GBM, MART[6][7]and RF.[8]LightGBM has many of XGBoost's advantages, including sparse optimization, parallel training, multiple loss functions, regularization, bagging, and early stopping. A major difference between the two lies in the construction of trees. LightGBM does not grow a tree level-wise — row by row — as most other implementations do.[9]Instead it grows trees leaf-wise: it chooses the leaf with the maximum delta loss to grow.[10]In addition, LightGBM does not use the widely used sorted-based decision tree learning algorithm, which searches the best split point on sorted feature values,[11]as XGBoost and other implementations do. Instead, LightGBM implements a highly optimized histogram-based decision tree learning algorithm, which yields great advantages in both efficiency and memory consumption.[12]The LightGBM algorithm utilizes two novel techniques called Gradient-Based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB) which allow the algorithm to run faster while maintaining a high level of accuracy.[13]
LightGBM works onLinux,Windows, andmacOSand supportsC++,Python,[14]R, andC#.[15]The source code is licensed underMIT Licenseand available onGitHub.[16]
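A minimal usage sketch with the scikit-learn style Python API is shown below. The dataset is synthetic, the hyperparameter values are illustrative only, and the callback-based early stopping assumes a reasonably recent version of the library.

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

model = lgb.LGBMClassifier(
    n_estimators=500,
    learning_rate=0.05,
    num_leaves=31,          # leaf-wise growth is controlled by the number of leaves
)
model.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    callbacks=[lgb.early_stopping(stopping_rounds=20)],  # stop when validation loss stalls
)
print("validation accuracy:", model.score(X_valid, y_valid))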
When usinggradient descent, one thinks about the space of possible configurations of the model as a valley, in which the lowest part of the valley is the model which most closely fits the data. In this metaphor, one walks in different directions to learn how much lower the valley becomes.
Typically, in gradient descent, one uses the whole set of data to calculate the valley's slopes. However, this commonly-used method assumes that every data point is equally informative.
By contrast, Gradient-Based One-Side Sampling (GOSS), a method first developed forgradient-boosted decision trees, does not rely on the assumption that all data are equally informative. Instead, it treats data points with smaller gradients (shallower slopes) as less informative by randomly dropping them. This is intended to filter out data which may have been influenced by noise, allowing the model to more accurately model the underlying relationships in the data.[13]
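The following NumPy sketch illustrates the sampling idea described above; the keep fractions a and b are illustrative, and the re-weighting of the sampled small-gradient examples follows the published description of GOSS rather than LightGBM's internal implementation.

import numpy as np

def goss_subsample(gradients, a=0.2, b=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))       # sort by |gradient|, largest first
    top = order[: int(a * n)]                    # always keep the large-gradient examples
    rest = order[int(a * n):]
    sampled = rng.choice(rest, size=int(b * n), replace=False)  # random subset of the rest
    idx = np.concatenate([top, sampled])
    weights = np.ones(len(idx))
    weights[len(top):] = (1.0 - a) / b           # up-weight to keep gradient sums roughly unbiased
    return idx, weights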
Exclusive feature bundling (EFB) is a near-lossless method to reduce the number of effective features. In a sparse feature space many features are nearly exclusive, implying they rarely take nonzero values simultaneously. One-hot encoded features are a perfect example of exclusive features. EFB bundles these features, reducing dimensionality to improve efficiency while maintaining a high level of accuracy. The result of bundling several exclusive features into a single feature is called an exclusive feature bundle.[13]
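A toy NumPy sketch of the bundling idea is given below: two mutually exclusive one-hot columns are merged into one column by shifting each feature into its own value range, so the original values remain recoverable. The offset scheme is a simplification for illustration, not LightGBM's exact implementation.

import numpy as np

def bundle_exclusive_features(cols):
    n = len(cols[0])
    bundle = np.zeros(n)
    offset = 0.0
    for col in cols:
        nonzero = col != 0
        bundle[nonzero] = col[nonzero] + offset  # each feature gets its own value range
        offset += col.max() + 1
    return bundle

a = np.array([1, 0, 0, 1, 0])    # two one-hot columns that never overlap
b = np.array([0, 1, 0, 0, 1])
print(bundle_exclusive_features([a, b]))         # -> [1. 3. 0. 1. 3.]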
|
https://en.wikipedia.org/wiki/LightGBM
|
Cyc(pronounced/ˈsaɪk/SYKE) is a long-termartificial intelligence(AI) project that aims to assemble a comprehensiveontologyandknowledge basethat spans the basic concepts and rules about how the world works. Hoping to capturecommon sense knowledge, Cyc focuses onimplicit knowledge. The project began in July 1984 atMCCand was developed later by theCycorpcompany.
The name "Cyc" (from "encyclopedia") is a registered trademark owned by Cycorp.CycLhas a publicly released specification, and dozens of HL (Heuristic Level) modules were described in Lenat and Guha's textbook,[1]but the Cyc inference engine code and the full list of HL modules are Cycorp-proprietary.[2]
The project was begun in July 1984 by Douglas Lenat as a project of the Microelectronics and Computer Technology Corporation (MCC), a research consortium started by two United States–based corporations "to counter a then ominous Japanese effort in AI, the so-called 'fifth-generation' project."[3]The US passed the National Cooperative Research Act of 1984, which for the first time allowed US companies to "collude" on long-term research. Since January 1995, the project has been under active development by Cycorp, where Douglas Lenat was the CEO.
The CycL representation language started as an extension of RLL[4][5](the Representation Language Language, developed in 1979–1980 by Lenat and his graduate student Russell Greiner while at Stanford University). By 1989,[6]CycL had expanded in expressive power to higher-order logic (HOL).
Cyc's ontology grew to about 100,000 terms in 1994, and as of 2017, it contained about 1,500,000 terms. The Cyc knowledge base of axioms involving those ontological terms was largely created by hand; it contained about 1 million assertions in 1994 and about 24.5 million as of 2017.
In 2008, Cyc resources were mapped to manyWikipediaarticles.[7]Cyc is presently connected toWikidata.
Theknowledge baseis divided intomicrotheories. Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions. Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant. The concept names in Cyc are CycLtermsorconstants.[6]Constants start with an optional#$and are case-sensitive. There are constants for:
A typical assertion states that for every instance of the collection #$ChordataPhylum (i.e., for every chordate), there exists a female animal (an instance of #$FemaleAnimal) which is its mother (described by the predicate #$biologicalMother).[1]
Aninference engineis a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs generallogical deduction.[8]It also performsinductive reasoning,statistical machine learningandsymbolic machine learning, andabductive reasoning.
The Cyc inference engine separates theepistemologicalproblem from theheuristicproblem. For the latter, Cyc used acommunity-of-agentsarchitecture in which specialized modules, each with its own algorithm, became prioritized if they could make progress on the sub-problem.
The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under theApache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. TheCycLand SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, withoutsource code. It was made available forLinuxandMicrosoft Windows. The open source Texai[9]project released theRDF-compatible content extracted from OpenCyc.[10]The user interface was in Java 6.
Cycorp was a participant of aworking groupfor the Semantic Web,Standard Upper OntologyWorking Group, which was active from 2001 to 2003.[11]
ASemantic Webversion of OpenCyc was available starting in 2008, but ending sometime after 2016.[12]
OpenCyc 4.0 was released in June 2012.[13]OpenCyc 4.0 contained 239,000 concepts and 2,093,000 facts; however, these are mainlytaxonomicassertions.
4.0 was the last released version; around March 2017, OpenCyc was shut down, the stated reason being that such “fragmenting” led to divergence and confusion amongst its users, and that the technical community generally thought that the OpenCyc fragment was Cyc.[14]
In July 2006, Cycorp released theexecutableof ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) In addition to the taxonomic information, ResearchCyc includes more semantic knowledge; it also includes a large lexicon,Englishparsing and generation tools, andJava-based interfaces for knowledge editing and querying. It contains a system forontology-based data integration.
In 2001, GlaxoSmithKline was funding Cyc, though for unknown applications.[15]In 2007, the Cleveland Clinic used Cyc to develop a natural-language query interface for biomedical information on cardiothoracic surgeries.[16]A query is parsed into a set of CycL fragments with open variables.[17]The Terrorism Knowledge Base was an application of Cyc that tried to contain knowledge about "terrorist"-related descriptions. The knowledge is stored as statements in mathematical logic. The project lasted from 2004 to 2008.[18][19]Lycos used Cyc for search term disambiguation, but stopped in 2001.[20]CycSecure, a network vulnerability assessment tool based on Cyc, was produced in 2002,[21]with trials at the US STRATCOM Computer Emergency Response Team.[22]
One Cyc application has the stated aim to help students doing math at a 6th grade level.[23]The application, called MathCraft,[24]was supposed to play the role of a fellow student who is slightly more confused than the user about the subject. As the user gives good advice, Cyc allows the avatar to make fewer mistakes.
The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history".[25]Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project toIBM's Watson.[26]Machine-learning scientistPedro Domingosrefers to the project as a "catastrophic failure" for the unending amount of data required to produce any viable results and the inability for Cyc to evolve on its own.[27]
Gary Marcus, a cognitive scientist and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news."[28]This is consistent with Doug Lenat's position that "Sometimes theveneerof intelligence is not enough".[29]
This is a list of some of the notable people who work or have worked on Cyc either while it was a project at MCC (where Cyc was first started) or Cycorp.
|
https://en.wikipedia.org/wiki/Cyc
|
The Medical Priority Dispatch System (MPDS), sometimes referred to as the Advanced Medical Priority Dispatch System (AMPDS), is a unified system used to dispatch appropriate aid to medical emergencies, including systematized caller interrogation and pre-arrival instructions. Priority Dispatch Corporation is licensed to design and publish MPDS and its various products, with research supported by the International Academy of Emergency Medical Dispatch (IAEMD). Priority Dispatch Corporation, in conjunction with the International Academies of Emergency Dispatch, has also produced similar systems for police (Police Priority Dispatch System, PPDS) and fire (Fire Priority Dispatch System, FPDS).
MPDS was developed by Jeff Clawson from 1976 to 1979 when he worked as anemergency medical technicianand dispatcher prior to medical school. He designed a set of standardized protocols to triage patients via the telephone and thus improve the emergency response system. Protocols were first alphabetized by chief complaint that included key questions to ask the caller, pre-arrival instructions, and dispatch priorities. After many revisions, these simple cards have evolved into MPDS.
MPDS today still starts with the dispatcher asking the caller key questions. These questions allow the dispatchers to categorize the call by chief complaint and set a determinant level, ranging from A (minor) to E (immediately life-threatening), relating to the severity of the patient's condition. The system also uses the determinant Ω (Omega), which may be a referral to another service or another situation that may not actually require an ambulance response. A further sub-category code is used to categorize the patient more precisely.
The system is often used in the form of a software system called ProQA, which is also produced by Priority Dispatch Corp.
Each dispatch determinant is made up of three pieces of information, which builds the determinant in a number-letter-number format. The first component, a number from 1 to 36, indicates a complaint or specific protocol from the MPDS: the selection of this card is based on the initial questions asked by the emergency dispatcher. The second component, a letter A through E (including the Greek character Ω), is the response determinant indicating the potential severity of injury or illness based on information provided by the caller and the recommended type of response. The third component, a number, is the sub-determinant and provides more specific information about the patient's specific condition. For instance, a suspected cardiac or respiratory arrest where the patient is not breathing is given the MPDS code 9-E-1, whereas a superficial animal bite has the code 3-A-3. The MPDS codes allow emergency medical service providers to determine the appropriate response mode (e.g. "routine" or "lights and sirens") and resources to be assigned to the event. Some protocols also utilise a single-letter suffix which may be added to the end of the code to provide additional information, e.g. the code 6-D-1 is a patient with breathing difficulties who is not alert, 6-D-1A is a patient with breathing difficulties who is not alert and also has asthma, and 6-D-1E is a patient with breathing difficulties who is not alert and hasemphysema/COAD/COPD.
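As an illustration of the number-letter-number format, the short Python sketch below splits a determinant code into its components. The helper and its field names are hypothetical and are not part of MPDS or ProQA, and the Greek Ω determinant is written here as the letter "O".

import re

DETERMINANT = re.compile(r"^(\d{1,2})-([A-EO])-(\d{1,2})([A-Z]?)$")

def parse_determinant(code):
    m = DETERMINANT.match(code)
    if not m:
        raise ValueError(f"not a recognisable determinant code: {code}")
    protocol, level, sub, suffix = m.groups()
    return {
        "protocol": int(protocol),      # chief-complaint card, 1-36
        "level": level,                 # A (minor) to E (life-threatening), O for Omega
        "sub_determinant": int(sub),
        "suffix": suffix or None,       # e.g. "E" = emphysema/COAD/COPD on protocol 6
    }

print(parse_determinant("9-E-1"))       # suspected cardiac/respiratory arrest
print(parse_determinant("6-D-1E"))      # breathing difficulty, not alert, emphysema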
[1]
This Protocol was created to handle the influx of emergency calls during the H1N1 pandemic: it directed that Standard EMS Resources be delayed until patients could be assessed by a Flu Response Unit (FRU), a single provider that could attend a patient and determine what additional resources were required for patient care to reduce the risk of pandemic exposure to EMS Personnel. In March 2020 the protocol was revised to assist with mitigating theCOVID-19 pandemic.[2]
[3]
As well as triaging emergency calls, MPDS also provides instructions for the dispatcher to give to the caller whilst assistance is en route. These post-dispatch and pre-arrival instructions are intended both to keep the caller and the patient safe, but also, where necessary, to turn the caller into the "first first responder" by giving them potentially life-saving instructions. They include:
Whilst MPDS uses the determinants to provide a recommendation as to the type of response that may be appropriate, some countries use a different response approach. For example, in the United Kingdom, most, but not all front-line emergency ambulances have advanced life support trained crews, meaning that the ALS/BLS distinction becomes impossible to implement. Instead, each individual response code is assigned to one of several categories, as determined by the Government, with associated response targets for each.
[4]
* This may include an emergency ambulance, a rapid response car, ambulance officers, or specialist crews e.g.HART. Other basic life support responses may also be sent, e.g.Community First Responder.
** If an emergency ambulance is unlikely to reach the patient within the average response time, a rapid response car and/or Community First Responder may also be dispatched.
The exact nature of the response sent may vary slightly betweenAmbulance Trusts. Following a Category 2, 3, or 5 telephone triage, the patient may receive an ambulance response (which could be Category 1-4 depending on the outcome of the triage), may be referred to another service or provider, or treatment may be completed over the phone.
In an independent report into the emergency response to theManchester Arena bombing, an Advanced Paramedic for theNorth West Ambulance Servicestated it was "very much understood" that MPDS "vastly underemphasises the priority of traumatic calls."[5]
|
https://en.wikipedia.org/wiki/Advanced_Medical_Priority_Dispatch_System
|
Solving chessconsists of finding an optimal strategy for the game ofchess; that is, one by which one of the players (White or Black) can always force either a victory or a draw (seesolved game). It is also related to more generally solvingchess-likegames (i.e.combinatorial gamesofperfect information) such asCapablanca chessandinfinite chess. In a weaker sense,solving chessmay refer to proving which one of the three possible outcomes (White wins; Black wins; draw) is the result of two perfect players, without necessarily revealing the optimal strategy itself (seeindirect proof).[1]
No complete solution for chess in either of the two sensesis known, nor is it expected that chess will be solved in the near future (if ever). Progress to date is extremely limited; there aretablebasesof perfect endgame play with a small number of pieces (up to seven), and somechess variantshave been solved at least weakly. Calculated estimates ofgame-tree complexityand state-space complexity of chess exist which provide a bird's eye view of the computational effort that might be required to solve the game.
Endgame tablebases are computerized databases that contain precalculated exhaustive analyses of positions with small numbers of pieces remaining on the board. Tablebases have solved chess to a limited degree, determining perfect play in a number of endgames, including all non-trivial endgames with no more than seven pieces or pawns (including the two kings).[2]
One consequence of developing the seven-piece endgame tablebase is that many interesting theoretical chess endings have been found. The longest seven-piece example is a mate-in-549 position discovered in the Lomonosov tablebase by Guy Haworth, ignoring the 50-move rule.[3][4] Such a position is beyond the ability of any human to solve, and no chess engine plays it correctly either, without access to the tablebase, which initially (in 2014) required 140 TB of storage space and the use of a supercomputer but was later reduced to 18.4 TB through the Syzygy tablebase. As of January 2023, the longest known forced mating sequence for the eight-piece tablebase (also ignoring the 50-move rule) was 584 moves. This was discovered in mid-2022 by Marc Bourzutschky.[5] The eight-piece tablebase is currently incomplete, though, so it is not guaranteed that this is the absolute limit for eight pieces.
A variant first described by Claude Shannon provides an argument about the game-theoretic value of chess: he proposes allowing the move of "pass". In this variant, it is provable with a strategy-stealing argument that the first player has at least a draw, as follows: if the first player has a winning move in the initial position, let him play it; otherwise, pass. The second player now faces the same situation owing to the mirror symmetry of the initial position: if the first player had no winning move in the first instance, the second player has none now. Therefore, the second player can at best draw, and the first player can at least draw, so a perfect game results in the first player winning or drawing.[6]
Some chess variants which are simpler than chess have been solved. A winning strategy for Black in Maharajah and the Sepoys can be easily memorised. The 5×5 Gardner's Minichess variant has been weakly solved as a draw.[7] Although losing chess is played on an 8×8 board, its forced capture rule greatly limits its complexity, and a computational analysis managed to weakly solve this variant as a win for White.[8]
The prospect of solving individual, specific, chess-like games becomes more difficult as the board size is increased, such as in large chess variants and infinite chess.[9]
Information theorist Claude Shannon in 1950 outlined a theoretical procedure for playing a perfect game (i.e. solving chess):
"With chess it is possible, in principle, to play a perfect game or construct a machine to do so as follows: One considers in a given position all possible moves, then all moves for the opponent, etc., to the end of the game (in each variation). The end must occur, by the rules of the games after a finite number of moves (remembering the 50 move drawing rule). Each of these variations ends in win, loss or draw. By working backward from the end one can determine whether there is a forced win, the position is a draw or is lost."
Shannon then went on to estimate that solving chess according to that procedure would require comparing some 10^120 (the Shannon number) possible game variations, or having a "dictionary" denoting an optimal move for each of the approximately 10^43 possible board positions (currently known to be about 5×10^44).[6][10] The number of mathematical operations required to solve chess, however, may be significantly different from the number of operations required to produce the entire game-tree of chess. In particular, if White has a forced win, only a subset of the game-tree would require evaluation to confirm that a forced win exists (i.e. with no refutations from Black). Furthermore, Shannon's calculation for the complexity of chess assumes an average game length of 40 moves, but there is no mathematical basis to say that a forced win by either side would have any relation to this game length. Indeed, some expertly played games (grandmaster-level play) have been as short as 16 moves. For these reasons, mathematicians and game theorists have been reluctant to categorically state that solving chess is an intractable problem.[6][11]
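The procedure Shannon describes is exhaustive backward induction over the game tree. The Python sketch below illustrates the idea for an abstract two-player, perfect-information game; the callbacks moves, apply_move and result are assumptions supplied by the caller (they are not chess code), and the memoisation dictionary plays the role of Shannon's "dictionary" of positions.

```python
def solve(pos, moves, apply_move, result, white_to_move=True, cache=None):
    """Game-theoretic value of `pos` (+1 White win, 0 draw, -1 Black win)
    computed by exhaustive backward induction.

    `moves(pos)` returns the legal moves (empty when the game is over),
    `apply_move(pos, m)` returns the successor position, and
    `result(pos)` gives the outcome of a terminal position.
    Positions must be hashable so values can be memoised.
    """
    if cache is None:
        cache = {}
    key = (pos, white_to_move)
    if key in cache:
        return cache[key]
    legal = moves(pos)
    if not legal:
        value = result(pos)                      # terminal: win, loss or draw is known
    else:
        child_values = [solve(apply_move(pos, m), moves, apply_move, result,
                              not white_to_move, cache) for m in legal]
        # Back up the value optimal for the player to move.
        value = max(child_values) if white_to_move else min(child_values)
    cache[key] = value
    return value
```

Even with memoisation, the number of distinct positions makes this approach hopeless for real chess, which is exactly the point of the estimates that follow.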
In 1950, Shannon calculated that, based on a game-tree complexity of 10^120, a computer that could evaluate a terminal node in 1 microsecond (one megahertz, a big stretch at that time: the UNIVAC I, introduced in 1951, could perform about 2,000 operations per second, or 2 kilohertz) would take 10^90 years to make its first move. Even allowing for technological advances, solving chess within a practical time frame would therefore seem beyond any conceivable technology.
Hans-Joachim Bremermann, a professor of mathematics and biophysics at the University of California at Berkeley, further argued in a 1965 paper that the "speed, memory, and processing capacity of any possible future computer equipment are limited by specific physical barriers: the light barrier, the quantum barrier, and the thermodynamical barrier. These limitations imply, for example, that no computer, however constructed, will ever be able to examine the entire tree of possible move sequences of the game of chess." Nonetheless, Bremermann did not foreclose the possibility that a computer would someday be able to solve chess. He wrote, "In order to have a computer play a perfect or nearly perfect game, it will be necessary either to analyze the game completely ... or to analyze the game in an approximate way and combine this with a limited amount of tree searching. ... A theoretical understanding of such heuristic programming, however, is still very much wanting."[12]
Recent scientific advances have not significantly changed these assessments. The game of checkers was (weakly) solved in 2007,[13] but it has roughly the square root of the number of positions in chess. Jonathan Schaeffer, the scientist who led the effort, said a breakthrough such as quantum computing would be needed before solving chess could even be attempted, but he does not rule out the possibility, saying that the one thing he learned from his 16-year effort of solving checkers "is to never underestimate the advances in technology".[14]
|
https://en.wikipedia.org/wiki/Solving_chess
|
This is a glossary of Sudoku terms and jargon. Sudoku with a 9×9 grid is assumed, unless otherwise noted.
A Sudoku (i.e. the puzzle) is a partially completed grid. A grid has 9 rows, 9 columns and 9 boxes, each having 9 cells (81 total). Boxes can also be called blocks or regions.[1] Three horizontally adjacent blocks are a band, and three vertically adjacent blocks are a stack.[2] The initially defined values are clues or givens. An ordinary Sudoku (i.e. a proper Sudoku) has one solution. Rows, columns and regions can be collectively referred to as groups, of which the grid has 27. The One Rule encapsulates the three prime rules, i.e. each digit (or number) can occur only once in each row, column, and box; and can be compactly stated as: "Each digit appears once in each group."
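The 27-group structure is compact enough to state in code. The Python sketch below is an illustrative checker, not part of any Sudoku software; it assumes the grid is a 9×9 list of lists with 0 marking an empty cell.

```python
def groups(grid):
    """Yield the 27 groups of a 9x9 grid: 9 rows, 9 columns, 9 boxes."""
    for r in range(9):
        yield grid[r]                                   # rows
    for c in range(9):
        yield [grid[r][c] for r in range(9)]            # columns
    for br in range(0, 9, 3):                           # box (block/region) origins
        for bc in range(0, 9, 3):
            yield [grid[br + dr][bc + dc] for dr in range(3) for dc in range(3)]

def obeys_one_rule(grid):
    """True if no digit 1-9 appears more than once in any row, column or box.
    Zeros (empty cells) are ignored, so partial grids can be checked too."""
    for group in groups(grid):
        digits = [v for v in group if v != 0]
        if len(digits) != len(set(digits)):
            return False
    return True
```

A proper Sudoku is then a grid of givens that obeys the One Rule and admits exactly one completed grid that also obeys it.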
The classic 9×9 Sudoku format can be generalized to an N×N grid divided into N regions of N cells each, with each of the N digits appearing exactly once in every row, column, and region.
This accommodates variants by region size and shape, e.g. 6-cell rectangular regions. (N×N Sudoku is square.) For prime N, polyomino-shaped regions can be used, and the requirement to use equal-sized regions, or to have the regions entirely cover the grid, can be relaxed.
Other variations include additional value placement constraints, alternate symbols (e.g. letters), alternate mechanisms for expressing the clues, and compositions with overlapping grids. See Sudoku – Variants for details and additional variants.
Sudoku variants can also have additional constraints on the placement of digits, such as "< >" relations, sums, linked cells, etc.
The meanings of most of these terms can be extended to region shapes other than boxes (square-shaped). To simplify reading, definitions are given only in terms of boxes.
|
https://en.wikipedia.org/wiki/Glossary_of_Sudoku
|
The ACM Conference on Information and Knowledge Management (CIKM, pronounced /ˈsikəm/) is an annual computer science research conference dedicated to information management (IM) and knowledge management (KM). Since the first event in 1992, the conference has evolved into one of the major forums for research on database management, information retrieval, and knowledge management.[1][2] The conference is noted for its interdisciplinarity, as it brings together communities that otherwise often publish at separate venues. Recent editions have attracted well beyond 500 participants.[3] In addition to the main research program, the conference also features a number of workshops, tutorials, and industry presentations.[4]
For many years, the conference was held in the US. Since 2005, venues in other countries have been selected as well.
|
https://en.wikipedia.org/wiki/Conference_on_Information_and_Knowledge_Management
|
In computer programming, a semipredicate problem occurs when a subroutine intended to return a useful value can fail, but the signalling of failure uses an otherwise valid return value.[1] The problem is that the caller of the subroutine cannot tell what the result means in this case.
The division operation yields a real number, but fails when the divisor is zero. If we were to write a function that performs division, we might choose to return 0 on this invalid input. However, if the dividend is 0, the result is 0 too. This means that there is no number we can return to uniquely signal attempted division by zero, since all real numbers are in the range of division.
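As a concrete illustration (safe_divide is a hypothetical helper written for this example, not a standard function), suppose failure is signalled by returning 0; the caller then cannot distinguish a genuine zero quotient from a failed division.

```python
def safe_divide(dividend, divisor):
    """Return dividend / divisor, or 0.0 when the divisor is zero."""
    if divisor == 0:
        return 0.0          # failure signalled with an otherwise valid result
    return dividend / divisor

print(safe_divide(0, 5))    # 0.0 -- a genuine quotient
print(safe_divide(5, 0))    # 0.0 -- a failure, indistinguishable from the above
```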
Early programmers handled potentially exceptional cases such as division using a convention requiring the calling routine to verify the inputs before calling the division function. This had two problems: first, it greatly encumbered all code that performed division (a very common operation); second, it violated the Don't repeat yourself and encapsulation principles, the former of which suggests eliminating duplicated code, and the latter of which suggests that data-associated code be contained in one place (in this division example, the verification of input was done separately). For a computation more complicated than division, it could be difficult for the caller to recognize invalid input; in some cases, determining input validity may be as costly as performing the entire computation. The target function could also be modified and would then expect different preconditions than would the caller; such a modification would require changes in every place where the function was called.
The semipredicate problem is not universal among functions that can fail.
If the range of a function does not cover the entire space corresponding to the data type of the function's return value, a value known to be impossible under normal computation can be used. For example, consider the function index, which takes a string and a substring, and returns the integer index of the substring in the main string. If the search fails, the function may be programmed to return −1 (or any other negative value), since this can never signify a successful result.
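Python's built-in str.find behaves exactly this way: −1 is outside the range of valid indices, so it is free to serve as the failure value.

```python
text = "hello, world"
print(text.find("world"))    # 7  -- index of the substring
print(text.find("xyz"))      # -1 -- impossible as an index, so it means "not found"

# The convention still demands care: -1 is a legal slicing argument,
# so using an unchecked result silently produces a wrong answer.
pos = text.find("xyz")
print(text[pos:])            # "d" -- nonsense, because the failure was never checked
```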
This solution has its problems, though, as it overloads the natural meaning of a function with an arbitrary convention.
Many languages allow, through one mechanism or another, a function to return multiple values. If this is available, the function can be redesigned to return a boolean value signalling success or failure, along with its primary return value. If multiple error modes are possible, the function may instead return an enumeratedreturn code(error code) along with its primary return value.
Various techniques for returning multiple values include returning a composite value such as a tuple or record, and writing secondary results through "out" arguments passed by reference.
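In a language with tuples, such as Python, a minimal sketch of the value-plus-flag style might look like the following (the function name is illustrative, not a standard API):

```python
def divide(dividend, divisor):
    """Return a (quotient, ok) pair instead of a bare value."""
    if divisor == 0:
        return 0.0, False     # the flag, not a magic value, carries the failure
    return dividend / divisor, True

quotient, ok = divide(5, 0)
if not ok:
    print("division failed")
else:
    print("result:", quotient)
```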
Similar to an "out" argument, aglobal variablecan store what error occurred (or simply whether an error occurred).
For instance, if an error occurs and is signalled (generally, as above, by an illegal value like −1), the Unix errno variable is set to indicate which error occurred. Using a global has its usual drawbacks: thread safety becomes a concern (modern operating systems use a thread-safe version of errno), and if only one error global is used, its type must be wide enough to contain all interesting information about all possible errors in the system.
Exceptions are one widely used scheme for solving this problem. An error condition is not considered a return value of the function at all; normal control flow is disrupted, and explicit handling of the error takes place automatically. They are an example of out-of-band signalling.
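Python's division operator already works this way: the failure is reported out of band as a ZeroDivisionError rather than through the return value.

```python
def reciprocal(x):
    return 1 / x              # raises ZeroDivisionError instead of returning a value

try:
    print(reciprocal(0))
except ZeroDivisionError:
    print("no reciprocal: division by zero")   # error handled outside normal flow
```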
In C, a common approach, when possible, is to use a data type deliberately wider than strictly needed by the function. For example, the standard function getchar() is defined with return type int and returns a value in the range [0, 255] (the range of unsigned char) on success, or the value EOF (implementation-defined, but outside the range of unsigned char) on the end of the input or a read error.
In languages with pointers or references, one solution is to return a pointer to a value, rather than the value itself. This return pointer can then be set to null to indicate an error. It is typically suited to functions that return a pointer anyway. This has a performance advantage over the OOP style of exception handling,[4] with the drawback that negligent programmers may not check the return value, resulting in a crash when the invalid pointer is used. Whether a pointer is null or not is another example of the predicate problem; null may be a flag indicating failure or the value of a pointer returned successfully. A common pattern in the UNIX environment is setting a separate variable to indicate the cause of an error. An example of this is the C standard library fopen() function.
In dynamically typed languages, such as PHP and Lisp, the usual approach is to return false, none, or null when the function call fails. This works by returning a type different from the normal return type (thus expanding the type). It is a dynamically typed equivalent to returning a null pointer.
For example, a numeric function normally returns a number (int or float), and while zero might be a valid response, false is not. Similarly, a function that normally returns a string might sometimes return the empty string as a valid response, but return false on failure. This process of type-juggling necessitates care in testing the return value: e.g., in PHP, use===(i.e., equal and of same type) rather than just==(i.e., equal, after automatic type conversion). It works only when the original function is not meant to return a boolean value, and still requires that information about the error be conveyed via other means.
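The same idea in Python is to return None and have the caller test for it with an identity check, so that legitimate falsy results such as 0 or the empty string are not mistaken for failure; the parse helper below is purely illustrative.

```python
def parse_int(text):
    """Return the integer encoded in `text`, or None if it is not a number."""
    try:
        return int(text)
    except ValueError:
        return None

value = parse_int("0")
if value is None:             # identity check, analogous in spirit to PHP's ===
    print("not a number")
else:
    print("parsed:", value)   # prints "parsed: 0"; a bare `if value:` would misfire
```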
In Haskell and other functional programming languages, it is common to use a data type that is just as big as it needs to be to express any possible result. For example, one can write a division function that returns the type Maybe Real, and a getchar function returning Either String Char. The first is an option type, which has only one failure value, Nothing. The second case is a tagged union: a result is either some string with a descriptive error message or a successfully read character. Haskell's type inference system helps ensure that callers deal with possible errors. Since the error conditions become explicit in the function type, looking at its signature immediately tells the programmer how to treat errors. Further, tagged unions and option types form monads when endowed with appropriate functions: this may be used to keep the code tidy by automatically propagating unhandled error conditions.
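The tagged-union style can be sketched in any language with sum-type-like constructs. The Python classes below are hypothetical names loosely mirroring Haskell's Either (and Rust's Result); the sketch assumes Python 3.10+ for the match statement.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: float

@dataclass
class Err:
    message: str

Result = Union[Ok, Err]       # the "tagged union": success or a descriptive error

def divide(dividend: float, divisor: float) -> Result:
    if divisor == 0:
        return Err("division by zero")
    return Ok(dividend / divisor)

match divide(1, 0):           # the caller is pushed to consider both cases
    case Ok(value=v):
        print("quotient:", v)
    case Err(message=m):
        print("error:", m)
```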
Rust has algebraic data types and comes with the built-in Result<T, E> and Option<T> types.
The C++ programming language introduced std::optional<T> in the C++17 update and std::expected<T, E> in the C++23 update.
|
https://en.wikipedia.org/wiki/Semipredicate_problem
|
NXLog[1] is a multi-platform log collection and centralization tool that offers log processing features, including log enrichment (parsing, filtering, and conversion) and log forwarding.[2] In concept NXLog is similar to syslog-ng or Rsyslog, but it is not limited to UNIX and syslog only. It supports all major operating systems such as Windows,[3] macOS,[4] IBM AIX,[5] etc., being compatible with virtually any SIEM, log analytics suites and many other platforms. NXLog can handle different log sources and formats,[6] so it can be used to implement a secure, centralized,[7] scalable logging system. NXLog Community Edition is proprietary and can be downloaded free of charge with no license costs or limitations.[8]
NXLog can be installed on many operating systems and can operate in a heterogeneous environment, collecting event logs from thousands of different sources in many formats. NXLog can accept event logs from TCP, UDP,[9] file, database and various other sources in different formats such as syslog, Windows Event Log, etc.[10] It supports SSL/TLS encryption to ensure data security in transit.
It can perform log rewriting, correlation, alerting, and pattern matching; it can execute scheduled jobs and perform log rotation. It was designed to fully utilize modern multi-core CPU systems. Its multi-threaded architecture enables input, log processing and output tasks to be executed in parallel. Using an I/O layer, it is capable of handling thousands of simultaneous client connections and processing log volumes above the 100,000 EPS range.
NXLog does not drop any log messages unless instructed to. It can process input sources in a prioritized order, meaning that a higher-priority source will always be processed before others. This can further help avoid UDP message loss, for example. In case of network congestion or other log transmission problems, NXLog can buffer messages on disk or in memory. Using loadable modules, it supports different input sources and log formats, not limited to syslog but also including Windows Event Log, audit logs, and custom binary application logs.
With NXLog it is possible to use custom loadable modules similarly to the Apache Web server. In addition to the online log processing mode, it can be used to process logs in batch mode in an offline fashion. NXLog's configuration language, with an Apache style configuration file syntax, enables it to rewrite logs, send alerts or execute any external script based on the specified criteria.
Back in 2009 the developer of NXLog was using a modified version of msyslog to suit his needs, but when he needed to implement a high-performance, scalable, centralized log management solution, no such modern logging solution was available. There were some alternatives to msyslog with some nice features (e.g. Rsyslog, syslog-ng, etc.), but none of them qualified. Most of these were still single-threaded, syslog-oriented, lacked native support for MS Windows, and came with an ambiguous configuration syntax, ugly source code and so on.
He decided to design and write NXLog from scratch instead of hacking something else. Thus, NXLog was born in 2009 and was a closed-source product in the beginning, heavily used in several production deployments. The source code of NXLog Community Edition was released in November 2011, and has been freely available since.
Most log processing solutions are built around the same concept. The input is read from a source, then the log messages are processed; finally, the output is written or sent to a destination (a "sink", in other terminology).
When an event occurs in an application or a device, depending on its configuration, a log message is emitted. This is usually referred to as an "event log" or "log message". These log messages can have different formats and can be transmitted over different protocols depending on the actual implementation.
There is one thing common to all event log messages: all contain important data such as user names, IP addresses, application names, etc. This way an event can be represented as a list of key-value pairs, each called a "field". The name of the field is the key and the field data is the value. In other terminology this metadata is sometimes referred to as an event property or message tag.
A syslog message, for example, can be broken down into such fields.
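As a generic illustration (plain Python, not NXLog's own parser or configuration language), a syslog-style line can be split into key-value fields with a regular expression; both the sample line and the field names below are invented for the example.

```python
import re

LINE = "Feb  1 12:34:56 host1 sshd[4242]: Failed password for invalid user alice"

# Assumed layout: timestamp, hostname, program[pid]: free-text message.
PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) "
    r"(?P<program>[^\[]+)\[(?P<pid>\d+)\]: "
    r"(?P<message>.*)"
)

fields = PATTERN.match(LINE).groupdict()   # the event as key-value pairs
print(fields["hostname"], fields["program"], fields["message"])
```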
NXLog will try to use the Common Event Expression standard for the field names once the standard is stable.
NXLog has a special field, $raw_event. This field is handled by the transport (UDP, TCP, file, etc.) modules to read input into and write output from it. This field is also used later to parse the log message into further fields by various functions, procedures and modules.
By utilizing loadable modules, the plugin architecture of NXLog allows it to read data from any kind of input, parse and convert the format of the messages, and then send it to any kind of output. Different input, processor and output modules can be used at the same time to cover all the requirements of the logging environment. The following figure illustrates the flow of log messages using this architecture.
The core of NXLog is responsible for parsing the configuration file, monitoring files and sockets, and managing internal events. It has an event-based architecture: all modules can dispatch events to the core. The NXLog core will take care of the event and will optionally pass it to a module for processing. NXLog is a multi-threaded application; the main thread is responsible for monitoring files and sockets, which are added to the core by the different input and output modules. There is a dedicated thread handling internal events. It sleeps until the next event is to be processed, then wakes up and dispatches the event to a worker thread. NXLog implements a worker thread-pool model. Worker threads receive an event which must be processed immediately. This way the NXLog core can centrally control all events and the order of their execution, making prioritized processing possible. Modules which handle sockets or files are written to use non-blocking I/O in order to ensure that the worker threads never block. The files and sockets monitored by the main thread also dispatch events, which are then delegated to the workers. Events belonging to the same module are executed in sequential order, not concurrently. This ensures that message order is kept and prevents concurrency issues within modules. Yet the modules (worker threads) run concurrently, so the global log processing flow is greatly parallelized.
When an input module receives data, it creates an internal representation of the log message, which is basically a structure containing the raw event data and any optional fields. This log message is then pushed to the queue of the next module in the route, and an internal event is generated to signal the availability of the data. The next module after the input module in a route can be either a processor module or an output module. An input or output module can also process data through built-in code or using the NXLog language execution framework; the only difference is that processor modules are run in another worker thread, thus parallelizing log processing even more. Considering that processor modules can also be chained, this can efficiently distribute work among multiple CPUs or CPU cores in the system.
NXLog Community Edition is licensed under the NXLOG PUBLIC LICENSE v1.0.[12]
|
https://en.wikipedia.org/wiki/NXLog
|
A versioning file system is any computer file system which allows a computer file to exist in several versions at the same time. Thus it is a form of revision control. Most common versioning file systems keep a number of old copies of the file. Some limit the number of changes per minute or per hour to avoid storing large numbers of trivial changes. Others instead take periodic snapshots whose contents can be accessed using methods similar to those for normal file access.
A versioning file system is similar to a periodic backup, with several key differences.
Versioning file systems provide some of the features of revision control systems. However, unlike most revision control systems, they are transparent to users, not requiring a separate "commit" step to record a new revision.
Versioning file systems should not be confused with journaling file systems. Whereas journaling file systems work by keeping a log of the changes made to a file before committing those changes to that file system (and overwriting the prior version), a versioning file system keeps previous copies of a file when saving new changes. The two features serve different purposes and are not mutually exclusive.
Some object storage implementations offer object versioning, such as Amazon S3.
An early implementation of versioning, possibly the first, was in MIT'sITS. In ITS, a filename consisted of two six-character parts; if the second part was numeric (consisted only of digits), it was treated as a version number. When specifying a file to open for read or write, one could supply a second part of ">"; when reading, this meant to open the highest-numbered version of the file; when writing, it meant to increment the highest existing version number and create the new version for writing.
Another early implementation of versioning was in TENEX, which became TOPS-20.[1]
A powerful example of a file versioning system is built into the RSX-11 and OpenVMS operating systems from Digital Equipment Corporation. In essence, whenever an application opens a file for writing, the file system automatically creates a new instance of the file, with a version number appended to the name. Version numbers start at 1 and count upward as new instances of a file are created. When an application opens a file for reading, it can either specify the exact file name including version number, or just the file name without the version number, in which case the most recent instance of the file is opened.
The "purge"DCL/CCLcommand can be used at any time to manage the number of versions in a specific directory. By default, all but the highest numbered versions of all files in the current directory will be deleted; this behavior can be overridden with the /keep=n switch and/or by specifying directory path(s) and/or filename patterns. VMS systems are often scripted to purge user directories on a regular schedule; this is sometimes misconstrued by end-users as a property of the versioning system.
On February 8, 2004, Kiran-Kumar Muniswamy-Reddy, Charles P. Wright, Andrew Himmer, and Erez Zadok (all from Stony Brook University) proposed a stackable file system, Versionfs, providing a versioning layer on top of any other Linux file system.[3]
The Lisp Machine File System supports versioning. This was provided by implementations from MIT, LMI, Symbolics and Texas Instruments; one such operating system was Symbolics Genera.
Starting with Lion (10.7), macOS has a feature called Versions which allows Time Machine-like saving and browsing of past versions of documents for applications written to use Versions. This functionality, however, takes place at the application layer, not the filesystem layer;[4] Lion and later releases do not incorporate a true versioning file system.
HTFS, adopted as the primary filesystem for SCO OpenServer in 1995, supports file versioning. Versioning is enabled on a per-directory basis by setting the directory's setuid bit, which is inherited when subdirectories are created. If versioning is enabled, a new file version is created when a file or directory is removed, or when an existing file is opened with truncation. Non-current versions remain in the filesystem namespace, under the name of the original file but with a suffix attached consisting of a semicolon and version sequence number. All but the current version are hidden from directory reads (unless the SHOWVERSIONS environment variable is set), but versions are otherwise accessible for all normal operations. The environment variable and general accessibility allow versions to be managed with the usual filesystem utilities, though there is also an "undelete" command that can be used to purge and restore files, enable and disable versioning on directories, etc.
The following are not versioning filesystems, but allow similar functionality.
|
https://en.wikipedia.org/wiki/Versioning_file_system
|
WAP Binary XML (WBXML) is a binary representation of XML. It was developed by the WAP Forum and since 2002 has been maintained by the Open Mobile Alliance as a standard to allow XML documents to be transmitted in a compact manner over mobile networks, and was proposed as an addition to the World Wide Web Consortium's Wireless Application Protocol family of standards. The MIME media type application/vnd.wap.wbxml has been defined for documents that use WBXML.
WBXML is used by a number of mobile phones. Usage includes Exchange ActiveSync for synchronizing device settings, address book, calendar, notes and emails, SyncML for transmitting address book and calendar data, Wireless Markup Language, Wireless Village, OMA DRM for its rights language, and Over-the-air programming for sending network settings to a phone.
|
https://en.wikipedia.org/wiki/WBXML
|
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants, such as Alexa and Siri, touch and multitouch interactions on today's mobile phones and tablets, but also touch interfaces invisibly integrated into the textiles of furniture.[1]
An NUI relies on a user being able to quickly transition from novice to expert. While the interface requires learning, that learning is eased through design which gives the user the feeling that they are instantly and continuously successful. Thus, "natural" refers to a goal in the user experience – that the interaction comes naturally while interacting with the technology – rather than that the interface itself is natural. This is contrasted with the idea of an intuitive interface, referring to one that can be used without previous learning.
Several design strategies have been proposed which have met this goal to varying degrees of success. One strategy is the use of a "reality user interface" ("RUI"),[2] also known as "reality-based interfaces" (RBI) methods. One example of an RUI strategy is to use a wearable computer to render real-world objects "clickable", i.e. so that the wearer can click on any everyday object so as to make it function as a hyperlink, thus merging cyberspace and the real world. Because the term "natural" is evocative of the "natural world", RBI are often confused for NUI, when in fact they are merely one means of achieving it.
One example of a strategy for designing a NUI not based in RBI is the strict limiting of functionality and customization, so that users have very little to learn in the operation of a device. Provided that the default capabilities match the user's goals, the interface is effortless to use. This is an overarching design strategy in Apple's iOS.[citation needed] Because this design is coincident with a direct-touch display, non-designers commonly misattribute the effortlessness of interacting with the device to that multi-touch display, and not to the design of the software where it actually resides.
In the 1990s, Steve Mann developed a number of user-interface strategies using natural interaction with the real world as an alternative to a command-line interface (CLI) or graphical user interface (GUI). Mann referred to this work as "natural user interfaces", "Direct User Interfaces", and "metaphor-free computing".[3] Mann's EyeTap technology typically embodies an example of a natural user interface. Mann's use of the word "Natural" refers to both action that comes naturally to human users, as well as the use of nature itself, i.e. physics (Natural Philosophy), and the natural environment. A good example of an NUI in both these senses is the hydraulophone, especially when it is used as an input device, in which touching a natural element (water) becomes a way of inputting data. More generally, a class of musical instruments called "physiphones", so named from the Greek words "physika", "physikos" (nature) and "phone" (sound), have also been proposed as "Nature-based user interfaces".[4]
In 2006, Christian Moore established an open research community with the goal to expand discussion and development related to NUI technologies.[5] In a 2008 conference presentation "Predicting the Past," August de los Reyes, a Principal User Experience Director of Surface Computing at Microsoft, described the NUI as the next evolutionary phase following the shift from the CLI to the GUI.[6] Of course, this too is an over-simplification, since NUIs necessarily include visual elements – and thus, graphical user interfaces. A more accurate description of this concept would be to describe it as a transition from WIMP to NUI.
In the CLI, users had to learn an artificial means of input, the keyboard, and a series of codified inputs, that had a limited range of responses, where the syntax of those commands was strict.
Then, when the mouse enabled the GUI, users could more easily learn the mouse movements and actions, and were able to explore the interface much more. The GUI relied on metaphors for interacting with on-screen content or objects. The 'desktop' and 'drag' for example, being metaphors for a visual interface that ultimately was translated back into the strict codified language of the computer.
An example of the misunderstanding of the term NUI was demonstrated at the Consumer Electronics Show in 2010. "Now a new wave of products is poised to bring natural user interfaces, as these methods of controlling electronics devices are called, to an even broader audience."[7]
In 2010, Microsoft's Bill Buxton reiterated the importance of the NUI within Microsoft Corporation with a video discussing technologies which could be used in creating a NUI, and its future potential.[8]
In 2010, Daniel Wigdor and Dennis Wixon provided an operationalization of building natural user interfaces in their book.[9]In it, they carefully distinguish between natural user interfaces, the technologies used to achieve them, and reality-based UI.
When Bill Buxton was asked about the iPhone's interface, he responded "Multi-touch technologies have a long history. To put it in perspective, the original work undertaken by my team was done in 1984, the same year that the first Macintosh computer was released, and we were not the first."[10]
Multi-Touch is a technology which could enable a natural user interface. However, most UI toolkits used to construct interfaces executed with such technology are traditional GUIs.
One example is the work done by Jefferson Han on multi-touch interfaces. In a demonstration at TED in 2006, he showed a variety of means of interacting with on-screen content using both direct manipulations and gestures. For example, to shape an on-screen glutinous mass, Jeff literally 'pinches' and prods and pokes it with his fingers. In a GUI interface for a design application, for example, a user would use the metaphor of 'tools' to do this, for example selecting a prod tool, or selecting two parts of the mass that they then wanted to apply a 'pinch' action to. Han showed that user interaction could be much more intuitive by doing away with the interaction devices that we are used to and replacing them with a screen that was capable of detecting a much wider range of human actions and gestures. Of course, this allows only for a very limited set of interactions which map neatly onto physical manipulation (RBI). Extending the capabilities of the software beyond physical actions requires significantly more design work.
Microsoft PixelSense takes similar ideas on how users interact with content, but adds in the ability for the device to optically recognize objects placed on top of it. In this way, users can trigger actions on the computer through the same gestures and motions as Jeff Han's touchscreen allowed, but also objects become a part of the control mechanisms. So for example, when you place a wine glass on the table, the computer recognizes it as such and displays content associated with that wine glass. Placing a wine glass on a table maps well onto actions taken with wine glasses and other tables, and thus maps well onto reality-based interfaces. Thus, it could be seen as an entrée to a NUI experience.
"3D Immersive Touch" is defined as the direct manipulation of 3D virtual environment objects using single or multi-touch surface hardware in multi-user 3D virtual environments. Coined first in 2007 to describe and define the 3D natural user interface learning principles associated with Edusim. Immersive Touch natural user interface now appears to be taking on a broader focus and meaning with the broader adaption of surface and touch driven hardware such as the iPhone, iPod touch, iPad, and a growing list of other hardware. Apple also seems to be taking a keen interest in “Immersive Touch” 3D natural user interfaces over the past few years. This work builds atop the broad academic base which has studied 3D manipulation in virtual reality environments.
Kinect is a motion-sensing input device by Microsoft for the Xbox 360 video game console and Windows PCs that uses spatial gestures for interaction instead of a game controller. According to Microsoft's page, Kinect is designed for "a revolutionary new way to play: no controller required."[11] Again, because Kinect allows the sensing of the physical world, it shows potential for RBI designs, and thus potentially also for NUI.
|
https://en.wikipedia.org/wiki/Natural_user_interface
|
Inmathematics, thedistributive propertyofbinary operationsis a generalization of thedistributive law, which asserts that the equalityx⋅(y+z)=x⋅y+x⋅z{\displaystyle x\cdot (y+z)=x\cdot y+x\cdot z}is always true inelementary algebra.
For example, inelementary arithmetic, one has2⋅(1+3)=(2⋅1)+(2⋅3).{\displaystyle 2\cdot (1+3)=(2\cdot 1)+(2\cdot 3).}Therefore, one would say thatmultiplicationdistributesoveraddition.
This basic property of numbers is part of the definition of mostalgebraic structuresthat have two operations called addition and multiplication, such ascomplex numbers,polynomials,matrices,rings, andfields. It is also encountered inBoolean algebraandmathematical logic, where each of thelogical and(denoted∧{\displaystyle \,\land \,}) and thelogical or(denoted∨{\displaystyle \,\lor \,}) distributes over the other.
Given asetS{\displaystyle S}and twobinary operators∗{\displaystyle \,*\,}and+{\displaystyle \,+\,}onS,{\displaystyle S,}
x∗(y+z)=(x∗y)+(x∗z);{\displaystyle x*(y+z)=(x*y)+(x*z);}
(y+z)∗x=(y∗x)+(z∗x);{\displaystyle (y+z)*x=(y*x)+(z*x);}
When∗{\displaystyle \,*\,}iscommutative, the three conditions above arelogically equivalent.
The operators used for examples in this section are those of the usualaddition+{\displaystyle \,+\,}andmultiplication⋅.{\displaystyle \,\cdot .\,}
If the operation denoted⋅{\displaystyle \cdot }is not commutative, there is a distinction between left-distributivity and right-distributivity:
a⋅(b±c)=a⋅b±a⋅c(left-distributive){\displaystyle a\cdot \left(b\pm c\right)=a\cdot b\pm a\cdot c\qquad {\text{ (left-distributive) }}}(a±b)⋅c=a⋅c±b⋅c(right-distributive).{\displaystyle (a\pm b)\cdot c=a\cdot c\pm b\cdot c\qquad {\text{ (right-distributive) }}.}
In either case, the distributive property can be described in words as:
To multiply asum(ordifference) by a factor, each summand (orminuendandsubtrahend) is multiplied by this factor and the resulting products are added (or subtracted).
If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply ofdistributivity.
One example of an operation that is "only" right-distributive is division, which is not commutative:(a±b)÷c=a÷c±b÷c.{\displaystyle (a\pm b)\div c=a\div c\pm b\div c.}In this case, left-distributivity does not apply:a÷(b±c)≠a÷b±a÷c{\displaystyle a\div (b\pm c)\neq a\div b\pm a\div c}
The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra.
Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products.
In the following examples, the use of the distributive law on the set of real numbersR{\displaystyle \mathbb {R} }is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form afield, which ensures the validity of the distributive law.
The distributive law is valid formatrix multiplication. More precisely,(A+B)⋅C=A⋅C+B⋅C{\displaystyle (A+B)\cdot C=A\cdot C+B\cdot C}for alll×m{\displaystyle l\times m}-matricesA,B{\displaystyle A,B}andm×n{\displaystyle m\times n}-matricesC,{\displaystyle C,}as well asA⋅(B+C)=A⋅B+A⋅C{\displaystyle A\cdot (B+C)=A\cdot B+A\cdot C}for alll×m{\displaystyle l\times m}-matricesA{\displaystyle A}andm×n{\displaystyle m\times n}-matricesB,C.{\displaystyle B,C.}Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
In standard truth-functional propositional logic,distribution[3][4]in logical proofs uses two validrules of replacementto expand individual occurrences of certainlogical connectives, within someformula, into separate applications of those connectives across subformulas of the given formula. The rules are(P∧(Q∨R))⇔((P∧Q)∨(P∧R))and(P∨(Q∧R))⇔((P∨Q)∧(P∨R)){\displaystyle (P\land (Q\lor R))\Leftrightarrow ((P\land Q)\lor (P\land R))\qquad {\text{ and }}\qquad (P\lor (Q\land R))\Leftrightarrow ((P\lor Q)\land (P\lor R))}where "⇔{\displaystyle \Leftrightarrow }", also written≡,{\displaystyle \,\equiv ,\,}is ametalogicalsymbolrepresenting "can be replaced in a proof with" or "islogically equivalentto".
Distributivityis a property of some logical connectives of truth-functionalpropositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functionaltautologies.(P∧(Q∨R))⇔((P∧Q)∨(P∧R))Distribution ofconjunctionoverdisjunction(P∨(Q∧R))⇔((P∨Q)∧(P∨R))Distribution ofdisjunctionoverconjunction(P∧(Q∧R))⇔((P∧Q)∧(P∧R))Distribution ofconjunctionoverconjunction(P∨(Q∨R))⇔((P∨Q)∨(P∨R))Distribution ofdisjunctionoverdisjunction(P→(Q→R))⇔((P→Q)→(P→R))Distribution ofimplication(P→(Q↔R))⇔((P→Q)↔(P→R))Distribution ofimplicationoverequivalence(P→(Q∧R))⇔((P→Q)∧(P→R))Distribution ofimplicationoverconjunction(P∨(Q↔R))⇔((P∨Q)↔(P∨R))Distribution ofdisjunctionoverequivalence{\displaystyle {\begin{alignedat}{13}&(P&&\;\land &&(Q\lor R))&&\;\Leftrightarrow \;&&((P\land Q)&&\;\lor (P\land R))&&\quad {\text{ Distribution of }}&&{\text{ conjunction }}&&{\text{ over }}&&{\text{ disjunction }}\\&(P&&\;\lor &&(Q\land R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\;\land (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\land &&(Q\land R))&&\;\Leftrightarrow \;&&((P\land Q)&&\;\land (P\land R))&&\quad {\text{ Distribution of }}&&{\text{ conjunction }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\lor &&(Q\lor R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\;\lor (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ disjunction }}\\&(P&&\to &&(Q\to R))&&\;\Leftrightarrow \;&&((P\to Q)&&\to (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ }}&&{\text{ }}\\&(P&&\to &&(Q\leftrightarrow R))&&\;\Leftrightarrow \;&&((P\to Q)&&\leftrightarrow (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ over }}&&{\text{ equivalence }}\\&(P&&\to &&(Q\land R))&&\;\Leftrightarrow \;&&((P\to Q)&&\;\land (P\to R))&&\quad {\text{ Distribution of }}&&{\text{ implication }}&&{\text{ over }}&&{\text{ conjunction }}\\&(P&&\;\lor &&(Q\leftrightarrow R))&&\;\Leftrightarrow \;&&((P\lor Q)&&\leftrightarrow (P\lor R))&&\quad {\text{ Distribution of }}&&{\text{ disjunction }}&&{\text{ over }}&&{\text{ equivalence }}\\\end{alignedat}}}
((P∧Q)∨(R∧S))⇔(((P∨R)∧(P∨S))∧((Q∨R)∧(Q∨S)))((P∨Q)∧(R∨S))⇔(((P∧R)∨(P∧S))∨((Q∧R)∨(Q∧S))){\displaystyle {\begin{alignedat}{13}&((P\land Q)&&\;\lor (R\land S))&&\;\Leftrightarrow \;&&(((P\lor R)\land (P\lor S))&&\;\land ((Q\lor R)\land (Q\lor S)))&&\\&((P\lor Q)&&\;\land (R\lor S))&&\;\Leftrightarrow \;&&(((P\land R)\lor (P\land S))&&\;\lor ((Q\land R)\lor (Q\land S)))&&\\\end{alignedat}}}
In approximate arithmetic, such as floating-point arithmetic, the distributive property of multiplication (and division) over addition may fail because of the limitations of arithmetic precision. For example, the identity 1/3+1/3+1/3=(1+1+1)/3{\displaystyle 1/3+1/3+1/3=(1+1+1)/3} fails in decimal arithmetic, regardless of the number of significant digits. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
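The failure in the 1/3 example can be reproduced with Python's decimal module, which performs base-10 arithmetic at a fixed precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28                 # 28 significant decimal digits

third = Decimal(1) / Decimal(3)        # 0.333...3, already rounded
left = third + third + third          # 0.999...9
right = (Decimal(1) + Decimal(1) + Decimal(1)) / Decimal(3)   # exactly 1

print(left == right)                   # False: distributivity fails after rounding
print(left, right)
```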
Distributivity is most commonly found in semirings, notably the particular cases of rings and distributive lattices.
A semiring has two binary operations, commonly denoted+{\displaystyle \,+\,}and∗,{\displaystyle \,*,}and requires that∗{\displaystyle \,*\,}must distribute over+.{\displaystyle \,+.}
A ring is a semiring with additive inverses.
Alatticeis another kind ofalgebraic structurewith two binary operations,∧and∨.{\displaystyle \,\land {\text{ and }}\lor .}If either of these operations distributes over the other (say∧{\displaystyle \,\land \,}distributes over∨{\displaystyle \,\lor }), then the reverse also holds (∨{\displaystyle \,\lor \,}distributes over∧{\displaystyle \,\land \,}), and the lattice is called distributive. See alsoDistributivity (order theory).
ABoolean algebracan be interpreted either as a special kind of ring (aBoolean ring) or a special kind of distributive lattice (aBoolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra.
Similar structures without distributive laws are near-rings and near-fields instead of rings and division rings. The operations are usually defined to be distributive on the right but not on the left.
In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially inorder theoryone finds numerous important variants of distributivity, some of which include infinitary operations, such as theinfinite distributive law; others being defined in the presence of onlyonebinary operation, such as the according definitions and their relations are given in the articledistributivity (order theory). This also includes the notion of acompletely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing={\displaystyle \,=\,}by either≤{\displaystyle \,\leq \,}or≥.{\displaystyle \,\geq .}Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion ofsub-distributivityas explained in the article oninterval arithmetic.
Incategory theory, if(S,μ,ν){\displaystyle (S,\mu ,\nu )}and(S′,μ′,ν′){\displaystyle \left(S^{\prime },\mu ^{\prime },\nu ^{\prime }\right)}aremonadson acategoryC,{\displaystyle C,}adistributive lawS.S′→S′.S{\displaystyle S.S^{\prime }\to S^{\prime }.S}is anatural transformationλ:S.S′→S′.S{\displaystyle \lambda :S.S^{\prime }\to S^{\prime }.S}such that(S′,λ){\displaystyle \left(S^{\prime },\lambda \right)}is alax map of monadsS→S{\displaystyle S\to S}and(S,λ){\displaystyle (S,\lambda )}is acolax map of monadsS′→S′.{\displaystyle S^{\prime }\to S^{\prime }.}This is exactly the data needed to define a monad structure onS′.S{\displaystyle S^{\prime }.S}: the multiplication map isS′μ.μ′S2.S′λS{\displaystyle S^{\prime }\mu .\mu ^{\prime }S^{2}.S^{\prime }\lambda S}and the unit map isη′S.η.{\displaystyle \eta ^{\prime }S.\eta .}See:distributive law between monads.
Ageneralized distributive lawhas also been proposed in the area ofinformation theory.
The ubiquitousidentitythat relates inverses to the binary operation in anygroup, namely(xy)−1=y−1x−1,{\displaystyle (xy)^{-1}=y^{-1}x^{-1},}which is taken as an axiom in the more general context of asemigroup with involution, has sometimes been called anantidistributive property(of inversion as aunary operation).[5]
In the context of anear-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided)distributive elementsbut also ofantidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left-nearring (i.e. one which all elements distribute when multiplied on the left), then an antidistributive elementa{\displaystyle a}reverses the order of addition when multiplied to the right:(x+y)a=ya+xa.{\displaystyle (x+y)a=ya+xa.}[6]
In the study ofpropositional logicandBoolean algebra, the termantidistributive lawis sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:[7](a∨b)⇒c≡(a⇒c)∧(b⇒c){\displaystyle (a\lor b)\Rightarrow c\equiv (a\Rightarrow c)\land (b\Rightarrow c)}(a∧b)⇒c≡(a⇒c)∨(b⇒c).{\displaystyle (a\land b)\Rightarrow c\equiv (a\Rightarrow c)\lor (b\Rightarrow c).}
These twotautologiesare a direct consequence of the duality inDe Morgan's laws.
|
https://en.wikipedia.org/wiki/Distributivity
|
Typographical syntax, also known as orthotypography, is the aspect of typography that defines the meaning and rightful usage of typographic signs, notably punctuation marks, and elements of layout such as flush margins and indentation.[1][2]
Orthotypographic rules vary broadly from language to language, from country to country, and even from publisher to publisher.[citation needed] As such, they are more often described as "conventions".
While some of those conventions have ease of understanding as a justification – for instance, specifying that low punctuation (commas, full stops, and ellipses) must be in the same typeface, weight, and style as the preceding text – many are probably arbitrary.[citation needed]
The rules dealing with quotation marks are a good example of this: which ones to use and how to nest them, how much whitespace to leave on both sides, and when to integrate them with other punctuation marks.
Each major publisher maintains a list of orthotypographic rules that they apply as part of their house style.[3]
|
https://en.wikipedia.org/wiki/Typographical_syntax
|
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then
x1=x0−f(x0)f′(x0){\displaystyle x_{1}=x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}}
is a better approximation of the root than x0. Geometrically, (x1, 0) is the x-intercept of the tangent of the graph of f at (x0, f(x0)): that is, the improved guess, x1, is the unique root of the linear approximation of f at the initial guess, x0. The process is repeated as
xn+1=xn−f(xn)f′(xn){\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}}
until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated.
The best linear approximation to an arbitrary differentiable function f(x){\displaystyle f(x)} near the point x=xn{\displaystyle x=x_{n}} is the tangent line to the curve, with equation
f(x)≈f(xn)+f′(xn)(x−xn).{\displaystyle f(x)\approx f(x_{n})+f'(x_{n})(x-x_{n}).}
The root of this linear function, the place where it intercepts the x{\displaystyle x}-axis, can be taken as a closer approximate root xn+1{\displaystyle x_{n+1}}:
xn+1=xn−f(xn)f′(xn).{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.}
The process can be started with any arbitrary initial guess x0{\displaystyle x_{0}}, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge if f′(x0)≠0{\displaystyle f'(x_{0})\neq 0}. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in § Analysis below.
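A minimal Python sketch of the iteration just described follows; the convergence tolerance and iteration cap are illustrative practical safeguards, not part of the method itself.

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f by Newton's method starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                 # close enough to a root
            return x
        dfx = f_prime(x)
        if dfx == 0:                      # tangent is horizontal; the method breaks down
            raise ZeroDivisionError("zero derivative encountered")
        x = x - fx / dfx                  # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x

# Example: the square root of 2 as the positive root of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))   # ~1.4142135623730951
```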
Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f{\displaystyle f} or its derivatives are computationally expensive to evaluate.
In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's Metrica (1st–2nd century CE), so it is often called Heron's method.[1] Jamshīd al-Kāshī used a method to solve x^P − N = 0 to find roots of N, a method that was algebraically equivalent to Newton's method; a similar method was found in Trigonometria Britannica, published by Henry Briggs in 1633.[2]
The method first appeared roughly in Isaac Newton's work in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson).[3][4] However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case.
Newton may have derived his method from a similar, less precise method by mathematician François Viète; however, the two methods are not the same.[3] The essence of Viète's own method can be found in the work of the mathematician Sharaf al-Din al-Tusi.[5]
The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing.[6]
Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis.[7] In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis.[8] Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
Arthur Cayley in 1879 in The Newton–Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.
Newton's method is a powerful technique—if the derivative of the function at the root is nonzero, then theconvergenceis at least quadratic: as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.
Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like the secant method, whose convergence is slower than that of Newton's method.
It is important to review the proof of quadratic convergence of Newton's method before implementing it. Specifically, one should review the assumptions made in the proof. When the method fails to converge, it is because the assumptions made in this proof are not met.
For example, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same method.
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root finding method.
If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity m of the root is known, the following modified algorithm preserves the quadratic convergence rate:[9]
x_{n+1} = x_n - m\,\frac{f(x_n)}{f'(x_n)}.
This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity m of the root is not known, it is possible to estimate m after carrying out one or two iterations, and then use that value to increase the rate of convergence.
If the multiplicity m of the root is finite then g(x) = f(x)/f′(x) will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of g(x) recovers quadratic convergence in many cases, although it generally involves the second derivative of f(x). In a particularly simple case, if f(x) = x^m then g(x) = x/m and Newton's method finds the root in a single iteration with
x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} = x_n - \frac{x_n/m}{1/m} = 0.
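To make the effect of the m-scaled update concrete, here is a small Python sketch; the test function f(x) = (x − 2)^3, the starting point, and the step count are illustrative assumptions rather than values taken from the text above:

def newton(f, f_prime, x0, m=1, steps=8):
    # Iterate x <- x - m*f(x)/f'(x); m = 1 gives the unmodified method.
    x = x0
    for _ in range(steps):
        fx = f(x)
        if fx == 0.0:        # exact root reached; avoid 0/0 at the multiple root
            break
        x = x - m * fx / f_prime(x)
    return x

f = lambda x: (x - 2.0) ** 3           # root at 2 with multiplicity 3
fp = lambda x: 3.0 * (x - 2.0) ** 2
print(newton(f, fp, 3.0))              # plain Newton: still noticeably off (linear convergence)
print(newton(f, fp, 3.0, m=3))         # multiplicity-aware update: lands on 2 exactly

With m = 1 the error only shrinks by a factor of 2/3 per step for this f, whereas the m = 3 update reaches the root in a single step.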
The function f(x) = x2 has a root at 0.[10] Since f is continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivative f′ is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by
x_{n+1} = x_n - \frac{x_n^2}{2x_n} = \frac{x_n}{2}.
It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved.
The function f(x) = x + x^(4/3) also has a root at 0, where it is continuously differentiable. Although the first derivative f′ is nonzero at the root, the second derivative f′′ is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by
x_{n+1} = x_n - \frac{x_n + x_n^{4/3}}{1 + \tfrac{4}{3}x_n^{1/3}} = \frac{x_n^{4/3}}{3 + 4x_n^{1/3}}.
From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the above f(x) = x + x^(4/3) and the right of which shows Newton's method applied to f(x) = x + x2. The quadratic convergence in the iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...).
The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function f(x) = x^20 − 1 has a root at 1. Since f′(1) ≠ 0 and f is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only a few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root.[11]
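The quoted iterates can be reproduced with a few lines of Python (a sketch; the printed values depend slightly on floating-point rounding):

# Newton's method for f(x) = x**20 - 1 started at 0.5: a long slow approach
# followed by quadratic convergence once the iterates are near the root x = 1.
x = 0.5
for n in range(1, 205):
    x = x - (x**20 - 1.0) / (20.0 * x**19)
    if n <= 4 or n >= 200:
        print(n, x)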
The function f(x) = x(1 + x2)^(−1/2) has a root at 0. The Newton iteration is given by
x_{n+1} = x_n - x_n(1 + x_n^2) = -x_n^3.
From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between ±1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at 1 or −1, the Newton iteration will oscillate endlessly between ±1; if initialized anywhere else, the Newton iteration will diverge.[12] This same trichotomy occurs for f(x) = arctan x.[10]
In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function f(x) = x(x2 − 1)(x − 3)e^(−(x − 1)2/2) has roots at −1, 0, 1, and 3.[13] If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to ∞; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to −∞; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to 1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal.
Consider the problem of finding a root of f(x) = x^(1/3). The Newton iteration is
x_{n+1} = x_n - \frac{x_n^{1/3}}{\tfrac{1}{3}x_n^{-2/3}} = x_n - 3x_n = -2x_n.
Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this casefis not differentiable at its root.
In the above example, failure of convergence is reflected by the failure of f(xn) to get closer to zero as n increases, as well as by the fact that successive iterates are growing further and further apart. However, the function f(x) = x^(1/3)e^(−x2) also has a root at 0. The Newton iteration is given by
x_{n+1} = \frac{6x_n^3 + 2x_n}{6x_n^2 - 1}.
In this example, where again f is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both xn+1 − xn and f(xn) converging to zero.[14] This is seen in the following table showing the iterates with initialization 1:
Although the convergence of xn+1 − xn in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of xn+1 − xn and f(xn) might falsely identify a root.
It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function f to oscillate between 0 and 1, it is only necessary that the tangent line to f at 0 intersects the x-axis at 1 and that the tangent line to f at 1 intersects the x-axis at 0.[14] This is the case, for example, if f(x) = x3 − 2x + 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will asymptotically oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of f approximately equal to −1.76929.
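A short Python sketch makes the oscillation visible (the starting value 0.99 is the one quoted above; the number of steps is arbitrary):

# Newton's method on f(x) = x**3 - 2*x + 2 started near 0 oscillates
# asymptotically between 0 and 1 instead of reaching the real root near -1.76929.
f = lambda x: x**3 - 2.0 * x + 2.0
fp = lambda x: 3.0 * x**2 - 2.0
x = 0.99
for _ in range(10):
    x = x - f(x) / fp(x)
    print(x)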
In some cases, it is not even possible to perform the Newton iteration. For example, if f(x) = x2 − 1, then the Newton iteration is defined by
x_{n+1} = x_n - \frac{x_n^2 - 1}{2x_n}.
So Newton's method cannot be initialized at 0, since this would make x1 undefined. Geometrically, this is because the tangent line to f at 0 is horizontal (i.e. f′(0) = 0), never intersecting the x-axis.
Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued.
If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration.[14] For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. Newton's iteration in this case is given by
x_{n+1} = x_n - \frac{\ln x_n}{1/x_n} = x_n(1 - \ln x_n).
So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued.
Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α.
If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence (xn) will converge to α.[15]
If f is continuously differentiable, its derivative is nonzero at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:
\Delta x_{i+1} = \frac{f''(\alpha)}{2f'(\alpha)}\left(\Delta x_i\right)^2 + O\left(\Delta x_i\right)^3,
where
\Delta x_i \triangleq x_i - \alpha.
If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f′(α) = 0 and f″(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2.[16] Alternatively, if f′(α) = 0 and f′(x) ≠ 0 for x ≠ α, x in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ C^r(U), then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly.
However, even linear convergence is not guaranteed in pathological situations.
In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f′ ≠ 0, f·f″ > 0 in U+, then, for each x0 in U+ the sequence xk is monotonically decreasing to α.
According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is:
f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + R_1 \qquad (1)
where the Lagrange form of the Taylor series expansion remainder is
R_1 = \frac{1}{2!} f''(\xi_n)\left(\alpha - x_n\right)^2,
where ξn is in between xn and α.
Since α is the root, (1) becomes:
0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \tfrac{1}{2} f''(\xi_n)(\alpha - x_n)^2 \qquad (2)
Dividing equation (2) by f′(xn) and rearranging gives
\frac{f(x_n)}{f'(x_n)} + (\alpha - x_n) = \frac{-f''(\xi_n)}{2f'(x_n)}(\alpha - x_n)^2 \qquad (3)
Remembering that xn+1 is defined by
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad (4)
one finds that
\underbrace{\alpha - x_{n+1}}_{\varepsilon_{n+1}} = \frac{-f''(\xi_n)}{2f'(x_n)}\,(\underbrace{\alpha - x_n}_{\varepsilon_n})^2.
That is,
\varepsilon_{n+1} = \frac{-f''(\xi_n)}{2f'(x_n)}\,\varepsilon_n^2. \qquad (5)
Taking the absolute value of both sides gives
|\varepsilon_{n+1}| = \frac{|f''(\xi_n)|}{2|f'(x_n)|}\,\varepsilon_n^2. \qquad (6)
Equation (6) shows that the order of convergence is at least quadratic if the following conditions are satisfied:
1. f′(x) ≠ 0 for all x ∈ I, where I is the interval [α − |ε0|, α + |ε0|];
2. f″(x) is continuous for all x ∈ I;
3. M|ε0| < 1,
whereMis given by
M = \frac{1}{2}\left(\sup_{x\in I}|f''(x)|\right)\left(\sup_{x\in I}\frac{1}{|f'(x)|}\right).
If these conditions hold,
|\varepsilon_{n+1}| \leq M \cdot \varepsilon_n^2.
Suppose that f(x) is a concave function on an interval, which is strictly increasing. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero ζ of f somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration xi starting at the left endpoint is monotonically increasing and convergent, necessarily to ζ.[17]
Joseph Fourier introduced a modification of Newton's method starting at the right endpoint:
y_{i+1} = y_i - \frac{f(y_i)}{f'(x_i)}.
This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of yi must also be the zero ζ.[17]
So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. If f is twice continuously differentiable, it can be proved using Taylor's theorem that
showing that this difference in locations converges quadratically to zero.[17]
All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate.[18] In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of f. However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method.[19]
If we seek the root of a single function f : R^n → R, then the error ε^(n) = x_n − α is a vector whose components obey
\epsilon_k^{(n+1)} = \frac{1}{2}(\epsilon^{(n)})^T Q_k\, \epsilon^{(n)} + O(\|\epsilon^{(n)}\|^3),
where Q_k is a quadratic form,
(Q_k)_{i,j} = \sum_{\ell}\bigl((D^2 f)^{-1}\bigr)_{i,\ell}\,\frac{\partial^3 f}{\partial x_j\,\partial x_k\,\partial x_\ell},
evaluated at the root α (where D^2 f is the second-derivative Hessian matrix).
Newton's method is one of many known methods of computing square roots. Given a positive number a, the problem of finding a number x such that x2 = a is equivalent to finding a root of the function f(x) = x2 − a. The Newton iteration defined by this function is given by
x_{n+1} = x_n - \frac{x_n^2 - a}{2x_n} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right).
This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root xn by the arithmetic mean of xn and a⁄xn. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations.
The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in an "xn" column is obtained by applying the preceding formula to the entry above it.
The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of 1.
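Since the tables themselves are not reproduced here, the computation is easy to repeat; the following Python sketch runs the same iteration for the three initializations mentioned above (the number of steps is an arbitrary choice):

# Babylonian / Newton iteration for the square root of 612.
a = 612.0
for x0 in (1.0, 10.0, -20.0):
    x = x0
    for _ in range(10):
        x = 0.5 * (x + a / x)     # x_{n+1} = (x_n + a/x_n) / 2
    print(x0, "->", x)            # the run started at -20 converges to -sqrt(612)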
When computing any nonzero square root, the first derivative of f is nonzero at the root, and f is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration.
Consider the problem of finding the positive number x with cos x = x3. We can rephrase that as finding the zero of f(x) = cos(x) − x3. We have f′(x) = −sin(x) − 3x2. Since cos(x) ≤ 1 for all x and x3 > 1 for x > 1, we know that our solution lies between 0 and 1.
A starting value of 0 will lead to an undefined result, which illustrates the importance of using a starting point close to the solution. For example, with an initial guess x0 = 0.5, the sequence given by Newton's method is:
\begin{matrix}x_{1}&=&x_{0}-{\dfrac {f(x_{0})}{f'(x_{0})}}&=&0.5-{\dfrac {\cos 0.5-0.5^{3}}{-\sin 0.5-3\times 0.5^{2}}}&=&1.112\,141\,637\,097\dots \\x_{2}&=&x_{1}-{\dfrac {f(x_{1})}{f'(x_{1})}}&=&\vdots &=&{\underline {0.}}909\,672\,693\,736\dots \\x_{3}&=&\vdots &=&\vdots &=&{\underline {0.86}}7\,263\,818\,209\dots \\x_{4}&=&\vdots &=&\vdots &=&{\underline {0.865\,47}}7\,135\,298\dots \\x_{5}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,1}}11\dots \\x_{6}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,102}}\dots \end{matrix}
The correct digits are underlined in the above example. In particular, x6 is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for x3) to 5 and 10, illustrating the quadratic convergence.
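The same sequence can be generated with a few lines of Python (a sketch; the printed digits depend on floating-point rounding):

import math

# Newton's method for f(x) = cos(x) - x**3, starting from x0 = 0.5.
f = lambda x: math.cos(x) - x**3
fp = lambda x: -math.sin(x) - 3.0 * x**2
x = 0.5
for n in range(1, 7):
    x = x - f(x) / fp(x)
    print(n, x)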
One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions f : R^k → R. This is equivalent to finding the zeroes of a single vector-valued function F : R^k → R^k. In the formulation given above, the scalars xn are replaced by vectors xn and instead of dividing the function f(xn) by its derivative f′(xn) one instead has to left multiply the function F(xn) by the inverse of its k × k Jacobian matrix JF(xn).[20][21][22] This results in the expression
\mathbf{x}_{n+1} = \mathbf{x}_n - J_F(\mathbf{x}_n)^{-1} F(\mathbf{x}_n).
or, by solving the system of linear equations
J_F(\mathbf{x}_n)(\mathbf{x}_{n+1} - \mathbf{x}_n) = -F(\mathbf{x}_n)
for the unknown xn+1 − xn.[23]
The k-dimensional variant of Newton's method can be used to solve systems of greater than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J+ = (J^T J)^−1 J^T instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information.
For example, the following set of equations needs to be solved for the vector of points [x1, x2], given the vector of known values [2, 3].[24]
\begin{aligned} 5x_1^2 + x_1 x_2^2 + \sin^2(2x_2) &= 2 \\ e^{2x_1 - x_2} + 4x_2 &= 3 \end{aligned}
the function vector, F(Xk), and Jacobian matrix, J(Xk), for iteration k, and the vector of known values, Y, are defined below.
\begin{aligned} F(X_k) &= \begin{bmatrix} f_1(X_k) \\ f_2(X_k) \end{bmatrix} = \begin{bmatrix} 5x_1^2 + x_1 x_2^2 + \sin^2(2x_2) \\ e^{2x_1 - x_2} + 4x_2 \end{bmatrix}_k \\ J(X_k) &= \begin{bmatrix} \dfrac{\partial f_1(X)}{\partial x_1} & \dfrac{\partial f_1(X)}{\partial x_2} \\ \dfrac{\partial f_2(X)}{\partial x_1} & \dfrac{\partial f_2(X)}{\partial x_2} \end{bmatrix}_k = \begin{bmatrix} 10x_1 + x_2^2 & 2x_1 x_2 + 4\sin(2x_2)\cos(2x_2) \\ 2e^{2x_1 - x_2} & -e^{2x_1 - x_2} + 4 \end{bmatrix}_k \\ Y &= \begin{bmatrix} 2 \\ 3 \end{bmatrix} \end{aligned}
Note that F(Xk) could have been rewritten to absorb Y, and thus eliminate Y from the equations. The equation to solve for each iteration is
\begin{bmatrix} 10x_1 + x_2^2 & 2x_1 x_2 + 4\sin(2x_2)\cos(2x_2) \\ 2e^{2x_1 - x_2} & -e^{2x_1 - x_2} + 4 \end{bmatrix}_k \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{k+1} = \begin{bmatrix} 5x_1^2 + x_1 x_2^2 + \sin^2(2x_2) - 2 \\ e^{2x_1 - x_2} + 4x_2 - 3 \end{bmatrix}_k
and
X_{k+1} = X_k - C_{k+1}
The iterations should be repeated until \left[\sum_{i=1}^{2}\left|f(x_i)_k - (y_i)_k\right|\right] < E, where E is a value acceptably small enough to meet application requirements.
If the vector X0 is initially chosen to be [1, 1], that is, x1 = 1 and x2 = 1, and E is chosen to be 1 × 10^−3, then the example converges after four iterations to a value of X4 = [0.567297, −0.309442].
The following iterations were made during the course of the solution.
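Those iterations can be reproduced with the Python sketch below, which uses NumPy and the analytic Jacobian written above; the starting vector [1, 1] and the tolerance 1 × 10^−3 follow the text, while the iteration cap is an added safeguard:

import numpy as np

def F(x):
    x1, x2 = x
    return np.array([5*x1**2 + x1*x2**2 + np.sin(2*x2)**2,
                     np.exp(2*x1 - x2) + 4*x2])

def J(x):
    x1, x2 = x
    return np.array([[10*x1 + x2**2, 2*x1*x2 + 4*np.sin(2*x2)*np.cos(2*x2)],
                     [2*np.exp(2*x1 - x2), -np.exp(2*x1 - x2) + 4]])

Y = np.array([2.0, 3.0])
x = np.array([1.0, 1.0])
for k in range(20):
    residual = F(x) - Y
    if np.sum(np.abs(residual)) < 1e-3:
        break
    x = x - np.linalg.solve(J(x), residual)   # solve J * step = residual, then update
print(k, x)   # ends near [0.567297, -0.309442]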
When dealing with complex functions, Newton's method can be directly applied to find their zeroes.[25] Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction are fractals.
In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example,[26] if one uses a real initial condition to seek a root of x2 + 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length.
Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3.[27] Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method is guaranteed to converge from at least one of them.[28]
Another generalization is Newton's method to find a root of a functional F defined in a Banach space. In this case the formulation is
X_{n+1} = X_n - \bigl(F'(X_n)\bigr)^{-1} F(X_n),
where F′(Xn) is the Fréchet derivative computed at Xn. One needs the Fréchet derivative to be boundedly invertible at each Xn in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem.[29]
In the 1950s,John Nashdeveloped a version of the Newton's method to apply to the problem of constructingisometric embeddingsof generalRiemannian manifoldsinEuclidean space. Theloss of derivativesproblem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction ofsmoothingoperators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving animplicit function theoremfor isometric embeddings. In the 1960s,Jürgen Mosershowed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly incelestial mechanics. Since then, a number of mathematicians, includingMikhael GromovandRichard Hamilton, have found generalized abstract versions of the Nash–Moser theory.[30][31]In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certainFréchet spaces.
When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used.
Since higher-order Taylor expansions offer more accurate local approximations of a function f, it is reasonable to ask why Newton's method relies only on a first-order Taylor approximation. In the 19th century, the Russian mathematician Pafnuty Chebyshev explored this idea by developing variants of Newton's method that use higher-order local approximations.[32][33][34]
In p-adic analysis, the standard method to show a polynomial equation in one variable has a p-adic root is Hensel's lemma, which uses the recursion from Newton's method on the p-adic numbers. Because of the more stable behavior of addition and multiplication in the p-adic numbers compared to the real numbers (specifically, the unit ball in the p-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.
Newton's method can be generalized with the q-analog of the usual derivative.[35]
A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions of f(x) = 0, then the next root can be found by applying Newton's method to the next equation:[36][37]
F(x) = \frac{f(x)}{\prod_{i=1}^{N}(x - x_i)} = 0.
This method is applied to obtain zeros of the Bessel function of the second kind.[38]
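As a sketch of the deflation idea in Python (the cubic test polynomial and the use of a numerical central-difference derivative are simplifications for brevity, not part of the description above):

def next_root(f, known_roots, x0, h=1e-7, steps=50):
    # Newton's method applied to F(x) = f(x) / prod(x - r) over the roots found so far.
    def F(x):
        val = f(x)
        for r in known_roots:
            val /= (x - r)
        return val
    x = x0
    for _ in range(steps):
        d = (F(x + h) - F(x - h)) / (2.0 * h)   # numerical derivative of the deflated function
        x = x - F(x) / d
    return x

f = lambda x: x**3 - 6*x**2 + 11*x - 6          # roots 1, 2 and 3
roots = []
for _ in range(3):
    roots.append(round(next_root(f, roots, x0=0.0), 6))
print(roots)   # approximately [1.0, 2.0, 3.0]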
Hirano's modified Newton method is a modification that conserves the convergence of Newton's method while avoiding instability.[39] It was developed to solve complex polynomials.
Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may dramatically change the value of the function; see Wilkinson's polynomial).[40][41]
Consider f ∈ C1(X), where X is a real interval, and suppose that we have an interval extension F′ of f′, meaning that F′ takes as input an interval Y ⊆ X and outputs an interval F′(Y) such that:
\begin{aligned} F'([y, y]) &= \{f'(y)\} \\ F'(Y) &\supseteq \{f'(y) \mid y \in Y\}. \end{aligned}
We also assume that 0 ∉ F′(X), so in particular f has at most one root in X.
We then define the interval Newton operator by:
N(Y) = m - \frac{f(m)}{F'(Y)} = \left\{ m - \frac{f(m)}{z} \,\middle|\, z \in F'(Y) \right\}
where m ∈ Y. Note that the hypothesis on F′ implies that N(Y) is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence:
\begin{aligned} X_0 &= X \\ X_{k+1} &= N(X_k) \cap X_k. \end{aligned}
The mean value theorem ensures that if there is a root of f in Xk, then it is also in Xk+1. Moreover, the hypothesis on F′ ensures that Xk+1 is at most half the size of Xk when m is the midpoint of Y, so this sequence converges towards [x*, x*], where x* is the root of f in X.
If F′(X) strictly contains 0, the use of extended interval division produces a union of two intervals for N(X); multiple roots are therefore automatically separated and bounded.
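A minimal Python sketch of the interval iteration follows, with intervals stored as (lo, hi) pairs and applied to f(x) = x^2 − 2 on X = [1, 2]; this is an illustrative choice on which F′(Y) = [2·lo, 2·hi] never contains 0, so ordinary interval division suffices:

def interval_newton(lo, hi, iterations=6):
    for _ in range(iterations):
        m = 0.5 * (lo + hi)                      # midpoint of the current interval
        fm = m * m - 2.0                         # f(m) for f(x) = x**2 - 2
        dlo, dhi = 2.0 * lo, 2.0 * hi            # interval extension of f'(x) = 2x
        q1, q2 = fm / dlo, fm / dhi              # endpoints of the quotient f(m) / F'(Y)
        n_lo, n_hi = m - max(q1, q2), m - min(q1, q2)
        lo, hi = max(lo, n_lo), min(hi, n_hi)    # intersect N(Y) with the current interval
    return lo, hi

print(interval_newton(1.0, 2.0))   # a tight enclosure of sqrt(2) = 1.41421356...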
Newton's method can be used to find a minimum or maximum of a function f(x). The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative.[42] The iteration becomes:
x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}.
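For instance, the following Python sketch applies that iteration to f(x) = x^4 − 3x^2 + 2 (a function chosen purely for illustration); only f′ and f″ are needed:

# Stationary point of f(x) = x**4 - 3*x**2 + 2 via Newton's method on f'.
fp  = lambda x: 4.0 * x**3 - 6.0 * x      # f'(x)
fpp = lambda x: 12.0 * x**2 - 6.0         # f''(x)
x = 1.0
for _ in range(8):
    x = x - fp(x) / fpp(x)
print(x)   # about 1.224745 = sqrt(3/2); f'' > 0 there, so it is a local minimum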
An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number a, using only multiplication and subtraction, that is to say the number x such that 1/x = a. We can rephrase that as finding the zero of f(x) = 1/x − a. We have f′(x) = −1/x2.
Newton's iteration is
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \frac{\frac{1}{x_n} - a}{\frac{1}{x_n^2}} = x_n(2 - ax_n).
Therefore, Newton's iteration needs only two multiplications and one subtraction.
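A small Python sketch of the resulting division-free reciprocal (the value a = 7 and the crude initial guess are illustrative; the guess must lie strictly between 0 and 2/a for the iteration to converge):

# Reciprocal of a using only multiplication and subtraction.
a = 7.0
x = 0.1                      # rough guess for 1/7
for _ in range(6):
    x = x * (2.0 - a * x)    # x_{n+1} = x_n * (2 - a*x_n)
print(x, 1.0 / a)            # both close to 0.142857...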
This method is also very efficient for computing the multiplicative inverse of a power series.
Many transcendental equations can be solved up to an arbitrary precision by using Newton's method. For example, inverting a cumulative distribution function, such as that of the normal distribution, to match a known probability generally involves integral functions that cannot be solved in closed form. However, the derivatives needed to solve such problems numerically with Newton's method are generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse normal cumulative distribution.
A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.[citation needed]
The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime.
The initial guess will be x0 = 1 and the function will be f(x) = x2 − 2 so that f′(x) = 2x.
Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f′(xn) ≈ 0, since otherwise a large amount of error could be introduced.
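A sketch matching that description follows; the helper name newtons_method, the tolerance, and the iteration cap are assumptions made for illustration rather than the exact original listing:

def newtons_method(f, f_prime, x0, tolerance=1e-10, epsilon=1e-14, max_iterations=50):
    """Return an approximate root of f, or None if the iteration fails."""
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)
        if abs(yprime) < epsilon:       # denominator too small: stop to avoid large errors
            return None
        x1 = x0 - y / yprime            # Newton update
        if abs(x1 - x0) <= tolerance:   # successive iterates close enough: accept x1
            return x1
        x0 = x1
    return None                         # no convergence within max_iterations

print(newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, x0=1))   # about 1.4142135623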
|
https://en.wikipedia.org/wiki/Newton%27s_method
|
One-key MAC (OMAC) is a family of message authentication codes constructed from a block cipher much like the CBC-MAC algorithm. It may be used to provide assurance of the authenticity and, hence, the integrity of data. Two versions are defined: OMAC1 (equivalent to CMAC) and OMAC2.
OMAC is free for all uses: it is not covered by any patents.[4]
The core of the CMAC algorithm is a variation of CBC-MAC that Black and Rogaway proposed and analyzed under the name "XCBC"[5] and submitted to NIST.[6] The XCBC algorithm efficiently addresses the security deficiencies of CBC-MAC, but requires three keys.
Iwata and Kurosawa proposed an improvement of XCBC that requires less key material (just one key) and named the resulting algorithm One-Key CBC-MAC (OMAC) in their papers.[1] They later submitted OMAC1 (= CMAC),[2] a refinement of OMAC, and additional security analysis.[7]
To generate an ℓ-bit CMAC tag (t) of a message (m) using a b-bit block cipher (E) and a secret key (k), one first generates two b-bit sub-keys (k1 and k2) using the following algorithm (this is equivalent to multiplication by x and x2 in a finite field GF(2b)). Let ≪ denote the standard left-shift operator and ⊕ denote bit-wise exclusive or:
1. Calculate a temporary value k0 = Ek(0).
2. If the most significant bit of k0 is 0, then k1 = k0 ≪ 1; otherwise k1 = (k0 ≪ 1) ⊕ C, where C is a constant that depends only on the block size b.
3. If the most significant bit of k1 is 0, then k2 = k1 ≪ 1; otherwise k2 = (k1 ≪ 1) ⊕ C.
As a small example, suppose b = 4, C = 0011₂, and k0 = Ek(0) = 0101₂. Then k1 = 1010₂ and k2 = 0100₂ ⊕ 0011₂ = 0111₂.
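The doubling step can be spelled out in a few lines of Python; the toy parameters below are the ones from the example (b = 4, C = 0011, Ek(0) = 0101), and the constant 0x87 mentioned in the comment is the value used when b = 128, as in AES-based CMAC:

def gf_double(value, b, c):
    # Multiply by x in GF(2^b): shift left one bit and, if the bit shifted out
    # was 1, XOR with the block-size constant c.
    shifted = (value << 1) & ((1 << b) - 1)
    if value >> (b - 1):
        shifted ^= c
    return shifted

b, C = 4, 0b0011
k0 = 0b0101                     # stands in for E_k(0); a real implementation calls the cipher
k1 = gf_double(k0, b, C)
k2 = gf_double(k1, b, C)
print(format(k1, "04b"), format(k2, "04b"))   # 1010 0111, matching the example
# For the standard 128-bit block size, the corresponding constant is 0x87.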
The CMAC tag generation process is as follows:
1. Divide the message into b-bit blocks m = m1 ∥ ... ∥ mn−1 ∥ mn′, where m1, ..., mn−1 are complete blocks.
2. If mn′ is a complete block, then mn = k1 ⊕ mn′; otherwise pad it with a single 1 bit followed by zeros up to b bits and set mn = k2 ⊕ (mn′ ∥ 10...0).
3. Let c0 = 00...0.
4. For i = 1, ..., n − 1, calculate ci = Ek(ci−1 ⊕ mi).
5. Calculate cn = Ek(cn−1 ⊕ mn).
6. Output t = msbℓ(cn), the ℓ most significant bits of cn.
The verification process is as follows: using the key k, compute the tag t′ of the received message exactly as above, and accept the message as authentic if and only if t′ equals the received tag t.
CMAC-C1[8] is a variant of CMAC that provides additional commitment and context-discovery security guarantees.
|
https://en.wikipedia.org/wiki/One-key_MAC
|
A car, or an automobile, is a motor vehicle with wheels. Most definitions of cars state that they run primarily on roads, seat one to eight people, have four wheels, and mainly transport people rather than cargo.[1][2] There are around one billion cars in use worldwide.[citation needed]
The French inventorNicolas-Joseph Cugnotbuilt the first steam-powered road vehicle in 1769, while the Swiss inventorFrançois Isaac de Rivazdesigned and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventorCarl Benzpatented hisBenz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901Oldsmobile Curved Dashand the 1908Ford Model T, both American cars, are widely considered the first mass-produced[3][4]and mass-affordable[5][6][7]cars, respectively. Cars were rapidly adopted in the US, where they replacedhorse-drawn carriages.[8]In Europe and other parts of the world, demand for automobiles did not increase untilafter World War II.[9]In the 21st century, car usage is still increasing rapidly, especially in China, India, and othernewly industrialised countries.[10][11]
Cars have controls fordriving,parking,passengercomfort, and a variety oflamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These includerear-reversing cameras,air conditioning,navigation systems, andin-car entertainment. Most cars in use in the early 2020s are propelled by aninternal combustion engine, fueled by thecombustionoffossil fuels.Electric cars, which were invented early in thehistory of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025.[12][13]The transition from fossil fuel-powered cars to electric cars features prominently in mostclimate change mitigation scenarios,[14]such asProject Drawdown's 100 actionable solutions for climate change.[15]
There arecosts and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs andmaintenance, fuel,depreciation, driving time, parking fees, taxes, andinsurance.[16]The costs to society include resources used to produce cars and fuel, maintaining roads,land-use,road congestion,air pollution,noise pollution,public health, anddisposing of the vehicle at the end of its life.Traffic collisionsare the largest cause of injury-related deaths worldwide.[17]Personal benefits include on-demand transportation, mobility, independence, and convenience.[18]Societal benefits include economic benefits, such as job and wealth creation from theautomotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place hasfar-reaching implications for the nature of societies.[19]
TheEnglishwordcaris believed to originate fromLatincarrus/carrum"wheeled vehicle" or (viaOld North French)Middle Englishcarre"two-wheeled cart", both of which in turn derive fromGaulishkarros"chariot".[20][21]It originally referred to any wheeledhorse-drawn vehicle, such as acart,carriage, orwagon.[22]The word also occurs in other Celtic languages.[23]
"Motor car", attested from 1895, is the usual formal term inBritish English.[2]"Autocar", a variant likewise attested from 1895 and literally meaning "self-propelledcar", is now considered archaic.[24]"Horseless carriage" is attested from 1895.[25]
"Automobile", aclassical compoundderived fromAncient Greekautós(αὐτός) "self" and Latinmobilis"movable", entered English fromFrenchand was first adopted by theAutomobile Club of Great Britainin 1897.[26]It fell out of favour in Britain and is now used chiefly inNorth America,[27]where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[28][29]
In 1649,Hans HautschofNurembergbuilt a clockwork-driven carriage.[32][33]The first steam-powered vehicle was designed byFerdinand Verbiest, aFlemishmember of aJesuit mission in Chinaaround 1672. It was a 65-centimetre-long (26 in) scale-model toy for theKangxi Emperorthat was unable to carry a driver or a passenger.[18][34][35]It is not known with certainty if Verbiest's model was successfully built or run.[35]
Nicolas-Joseph Cugnotis widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.[36]He also constructed two steam tractors for the French Army, one of which is preserved in theFrench National Conservatory of Arts and Crafts.[36]His inventions were limited by problems with water supply and maintaining steam pressure.[36]In 1801,Richard Trevithickbuilt and demonstrated hisPuffing Devilroad locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, includingsteam cars,steam buses,phaetons, andsteam rollers. In the United Kingdom, sentiment against them led to theLocomotive Actsof 1865.
In 1807,Nicéphore Niépceand his brother Claude created what was probably the world's firstinternal combustion engine(which they called aPyréolophore), but installed it in a boat on the riverSaonein France.[37]Coincidentally, in 1807, the Swiss inventorFrançois Isaac de Rivazdesigned his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture ofLycopodium powder(dried spores of theLycopodiumplant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture ofhydrogenandoxygen.[37]Neither design was successful, as was the case with others, such asSamuel Brown,Samuel Morey, andEtienne Lenoir,[38]who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.[39]
In November 1881, French inventorGustave Trouvédemonstrated a three-wheeled car powered by electricity at theInternational Exposition of Electricity.[40]Although several other German engineers (includingGottlieb Daimler,Wilhelm Maybach, andSiegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the GermanCarl Benzpatented hisBenz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.[39][41][42]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His firstMotorwagenwas built in 1885 inMannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company,Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered withfour-strokeengines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888,Bertha Benz, the wife and business partner of Carl Benz, undertook the firstroad tripby car, to prove the road-worthiness of her husband's invention.[43]
In 1896, Benz designed and patented the first internal-combustionflat engine, calledboxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became ajoint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed toTatra) in 1897, thePräsidentautomobil.
Daimler and Maybach foundedDaimler Motoren Gesellschaft(DMG) inCannstattin 1890, and sold their first car in 1892 under the brand nameDaimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine namedDaimler-Mercedesthat was placed in a specially ordered model built to specifications set byEmil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to theDaimlerbrand name were sold to other manufacturers.
In 1890,Émile LevassorandArmand Peugeotof France began producing vehicles with Daimler engines, and so laid the foundation of theautomotive industry in France. In 1891,Auguste Doriotand his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler poweredPeugeot Type 3completed 2,100 kilometres (1,300 mi) fromValentigneyto Paris and Brest and back again. They were attached to the firstParis–Brest–Parisbicycle race, but finished six days after the winning cyclist,Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 byGeorge SeldenofRochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for atwo-strokecar engine,which hindered, more than encouraged, development of cars in the United States. His patent was challenged byHenry Fordand others, and overturned in 1911.
In 1893, the first running, petrol-drivenAmerican carwas built and road-tested by theDuryea brothersofSpringfield, Massachusetts. The first public run of theDuryea Motor Wagontook place on 21 September 1893, on Taylor Street inMetro CenterSpringfield.[44][45]Studebaker, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[46]: 66and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.[47]
In Britain, there had been several attempts to build steam cars with varying degrees of success, withThomas Ricketteven attempting a production run in 1860.[48]Santlerfrom Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894,[49]followed byFrederick William Lanchesterin 1895, but these were both one-offs.[49]The first production vehicles in Great Britain came from theDaimler Company, a company founded byHarry J. Lawsonin 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[49]
In 1892, German engineerRudolf Dieselwas granted a patent for a "New Rational Combustion Engine". In 1897, he built the firstdiesel engine.[39]Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although variouspistonless rotary enginedesigns have attempted to compete with the conventionalpistonandcrankshaftdesign, onlyMazda's version of theWankel enginehas had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[50]
Large-scale,production-linemanufacturing of affordable cars was started byRansom Oldsin 1901 at hisOldsmobilefactory inLansing, Michigan, and based upon stationaryassembly linetechniques pioneered byMarc Isambard Brunelat thePortsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US byThomas Blanchardin 1821, at theSpringfield ArmoryinSpringfield, Massachusetts.[51]This concept was greatly expanded byHenry Ford, beginning in 1913 with the world's firstmovingassembly line for cars at theHighland Park Ford Plant.
As a result, Ford's cars came off the line in 15-minute intervals, much faster than previous methods, increasing productivity eightfold, while using less manpower (from 12.5 manhours to 1 hour 33 minutes).[52]It was so successful,paintbecame a bottleneck. OnlyJapan blackwould dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-dryingDucolacquerwas developed in 1926. This is the source of Ford'sapocryphalremark, "any color as long as it's black".[52]In 1914, an assembly line worker could buy a Model T with four months' pay.[52]
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[53]The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominating, and quickly spread worldwide seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark 1923, Ford Germany 1925; in 1921,Citroënwas the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines, or risk going bankrupt; by 1930, 250 companies which did not, had disappeared.[52]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electricignitionand the electric self-starter (both byCharles Kettering, for theCadillacMotor Company in 1910–1911), independentsuspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It wasAlfred P. Sloanwho established the idea of different makes of cars produced by one company, called theGeneral Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s,LaSalles, sold byCadillac, used cheaper mechanical parts made byOldsmobile; in the 1950s,Chevroletshared bonnet, doors, roof, and windows withPontiac; by the 1990s, corporatepowertrainsand sharedplatforms(with interchangeablebrakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such asApperson,Cole,Dorris,Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with theGreat Depression, by 1940, only 17 of those were left.[52]
In Europe, much the same would happen.Morrisset up its production line atCowleyin 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice ofvertical integration, buyingHotchkiss'British subsidiary (engines),Wrigley(gearboxes), and Osberton (radiators), for instance, as well as competitors, such asWolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, fromAbbeytoXtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such asRenault's 10CV andPeugeot's5CV, they produced 550,000 cars in 1925, andMors,Hurtu, and others could not compete.[52]Germany's first mass-manufactured car, theOpel 4PSLaubfrosch(Tree Frog), came off the line atRüsselsheimin 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.[52]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled for commercial uses, likeDaihatsu, or were the result of partnering with European companies, likeIsuzubuilding theWolseley A-9in 1922.Mitsubishiwas also partnered withFiatand built theMitsubishi Model Abased on a Fiat vehicle.Toyota,Nissan,Suzuki,Mazda, andHondabegan as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to takeToyoda Loom Worksinto automobile manufacturing would create what would eventually becomeToyota Motor Corporation, the largest automobile manufacturer in the world.Subaru, meanwhile, was formed from a conglomerate of six companies who banded together asFuji Heavy Industries, as a result of having been broken up underkeiretsulegislation.
Most cars in use in the early 2020s run onpetrolburnt in aninternal combustion engine(ICE). Some cities ban older more polluting petrol-driven cars and some countries plan to ban sales in future. However, some environmental groups say thisphase-out of fossil fuel vehiclesmust be brought forwards to limit climate change. Production of petrol-fuelled cars peaked in 2017.[55][56]
Other hydrocarbon fossil fuels also burnt bydeflagration(rather thandetonation) in ICE cars includediesel,autogas, andCNG. Removal offossil fuel subsidies,[57][58]concerns aboutoil dependence, tighteningenvironmental lawsand restrictions ongreenhouse gas emissionsare propelling work on alternative power systems for cars. This includeshybrid vehicles,plug-in electric vehiclesandhydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 millionelectric carson the world's roads.[59]Despite rapid growth, less than two per cent of cars on the world's roads werefully electricandplug-in hybridcars by the end of 2021.[59]Cars for racing orspeed recordshave sometimes employedjetorrocketengines, but these are impractical for common use.Oil consumptionhas increased rapidly in the 20th and 21st centuries because there are more cars; the1980s oil gluteven fuelled the sales of low-economy vehicles inOECDcountries. TheBRICcountries are adding to this consumption.[citation needed]
In almost all hybrid (evenmild hybrid) and pure electric carsregenerative brakingrecovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot.[60]Although all cars must have friction brakes (frontdisc brakesand either disc ordrum rear brakes[61]) for emergency stops, regenerative braking improves efficiency, particularly in city driving.[62]
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include asteering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, theelectric carand the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch,ignition timing, and a crank instead of an electricstarter. However, new controls have also been added to vehicles, making them more complex. These includeair conditioning,navigation systems, andin-car entertainment. Another trend is the replacement of physical knobs and switches by secondary controls with touchscreen controls such asBMW'siDriveandFord'sMyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically equipped with interior lighting which can be toggled manually or be set to light up automatically with doors open, anentertainment systemwhich originated fromcar radios, sidewayswindowswhich can be lowered or raised electrically (manually on earlier cars), and one or multipleauxiliary power outletsfor supplying portable appliances such asmobile phones, portable fridges,power inverters, and electrical air pumps from the on-board electrical system.[63][64][a]More costly upper-class andluxury carsare equipped with features earlier such as massage seats andcollision avoidance systems.[65][66]
Dedicated automotive fuses and circuit breakersprevent damage fromelectrical overload.
Cars are typically fitted with multiple types of lights. These includeheadlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions,daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[68]modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines"[69]and, as of 2019[update], typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons).[70]Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[69]The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. TheWuling Hongguang Mini EV, a typicalcity car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like theSuburban. Cars have also become wider.[71]
Some places tax heavier cars more:[72]as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycledaluminiuminstead of steel.[73]It has been suggested that one benefit of subsidisingcharging infrastructureis that cars can use lighter batteries.[74]
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear.Full-size carsand largesport utility vehiclescan often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand,sports carsare most often designed with only two seats. Utility vehicles likepickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, thesedan/saloon,hatchback,station wagon/estate,coupe, andminivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[17]Mary Wardbecame one of the first documented car fatalities in 1869 inParsonstown, Ireland,[75]andHenry Blissone of the US's first pedestrian car casualties in 1899 in New York City.[76]There are now standard tests for safety in new cars, such as theEuroandUSNCAP tests,[77]and insurance-industry-backed tests by theInsurance Institute for Highway Safety(IIHS).[78]However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.[79]
The costs of car usage, which may include the cost of: acquiring the vehicle, repairs andauto maintenance, fuel,depreciation, driving time,parking fees, taxes, and insurance,[16]are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience,[18]andemergency power.[81]During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[82]
Similarly, the costs to society of car use may include: maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include: economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation from tax opportunities. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[19]
Car production and use has a large number of environmental impacts: it causes local air pollution and plastic pollution and contributes to greenhouse gas emissions and climate change.[85] Cars and vans caused 10% of energy-related carbon dioxide emissions in 2022.[86] As of 2023[update], electric cars produce about half the emissions over their lifetime as diesel and petrol cars. This is set to improve as countries produce more of their electricity from low-carbon sources.[87] Cars consume almost a quarter of world oil production as of 2019.[55] Cities planned around cars are often less dense, which leads to further emissions, as they are less walkable, for instance.[85] A growing demand for large SUVs is driving up emissions from cars.[88]
Cars are a major cause ofair pollution,[89]which stems fromexhaust gasin diesel and petrol cars and fromdust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly moreparticulate matter.[90]Heavy metalsand microplastics (from tyres) are also released into the environment, during production, use and at the end of life. Mining related to car manufacturing and oil spills both causewater pollution.[85]
Animals and plants are often negatively affected by cars viahabitat destructionandfragmentationfrom the road network and pollution. Animals are also killed every year on roads by cars, referred to asroadkill.[85]More recent road developments are including significant environmental mitigation in their designs, such as green bridges (designed to allowwildlife crossings) and creatingwildlife corridors.
Governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars;[91] vehicle emission standards ban the sale of new highly polluting cars.[92] Many countries plan to stop selling fossil-fuel cars altogether between 2025 and 2050.[93] Various cities have implemented low-emission zones, banning old fossil fuel vehicles, and Amsterdam is planning to ban fossil fuel cars completely.[94][95] Some cities make it easier for people to choose other forms of transport, such as cycling.[94] Many Chinese cities limit the licensing of new fossil fuel cars.[96]
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas.[citation needed]Growth in the popularity of cars andcommutinghas led totraffic congestion.[97]Moscow,Istanbul,Bogotá,Mexico CityandSão Paulowere the world's most congested cities in 2018 according to INRIX, a data analytics company.[98]
In the United States, thetransport divideandcar dependencyresulting from domination ofcar-based transport systemspresents barriers to employment in low-income neighbourhoods,[99]with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.[100]Dependency on automobiles byAfrican Americansmay result in exposure to the hazards ofdriving while blackand other types ofracial discriminationrelated to buying, financing and insuring them.[101]
Air pollution from cars increases the risk oflung cancerandheart disease. It can also harm pregnancies: more children areborn too earlyor with lowerbirth weight.[85]Children are extra vulnerable to air pollution, as their bodies are still developing and air pollution in children is linked to the development ofasthma,childhood cancer, and neurocognitive issues such asautism.[102][85]The growth in popularity of the car allowed cities tosprawl, therefore encouraging more travel by car, resulting in inactivity andobesity, which in turn can lead to increased risk of a variety of diseases.[103]When places are designed around cars, children have fewer opportunities to go places by themselves, and lose opportunities to become more independent.[104][85]
Although intensive development of conventionalbattery electric vehiclesis continuing into the 2020s,[105]other carpropulsiontechnologies that are under development includewireless charging,[106]hydrogen cars,[107][108]and hydrogen/electric hybrids.[109]Research into alternative forms of power includes usingammoniainstead of hydrogen infuel cells.[110]
New materials which may replace steel car bodies include aluminium,[111]fiberglass,carbon fiber,biocomposites, andcarbon nanotubes.[112]Telematicstechnology is allowing more and more people to share cars, on apay-as-you-gobasis, throughcar shareandcarpoolschemes. Communication is also evolving due toconnected carsystems.[113]Open-source carsare not widespread.[114]
Fully autonomous vehicles, also known as driverless cars, already exist asrobotaxis[115][116]but have a long way to go before they are in general use.[117]
Car-share arrangements and carpooling are also increasingly popular in the US and Europe.[118] For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.[119]
The automotive industry designs, develops, manufactures, markets, and sells the world'smotor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide,[120]down from 67 million the previous year.[121]Theautomotive industry in Chinaproduces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India.[122]The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road;[123]they burn over a trillion litres (0.26×10^12US gal; 0.22×10^12imp gal) of petrol and diesel fuel yearly, consuming about 50exajoules(14,000TWh) of energy.[124]The numbers of cars are increasing rapidly in China and India.[125]In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars.[126][127]Thesustainable transportmovement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, theEuropean Commissionintroduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future.[128][129]According to this package, by 2035, all newly sold cars in the European market must beZero-emissions vehicles.[130][131][132]
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, light rail, cycling, and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programmes have been developed in large US cities.[133][134] Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted.[135] A study which examined the costs and benefits of introducing Low Traffic Neighbourhoods in London found that the benefits exceed the costs by approximately 100 times over the first 20 years, with the difference growing over time.[136]
|
https://en.wikipedia.org/wiki/Mass_automobility
|
Afinite differenceis a mathematical expression of the formf(x+b) −f(x+a). Finite differences (or the associateddifference quotients) are often used as approximations of derivatives, such as innumerical differentiation.
Thedifference operator, commonly denotedΔ{\displaystyle \Delta }, is theoperatorthat maps a functionfto the functionΔ[f]{\displaystyle \Delta [f]}defined byΔ[f](x)=f(x+1)−f(x).{\displaystyle \Delta [f](x)=f(x+1)-f(x).}Adifference equationis afunctional equationthat involves the finite difference operator in the same way as adifferential equationinvolvesderivatives. There are many similarities between difference equations and differential equations. Certainrecurrence relationscan be written as difference equations by replacing iteration notation with finite differences.
Innumerical analysis, finite differences are widely used forapproximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives".[1][2][3]
Finite differences were introduced byBrook Taylorin 1715 and have also been studied as abstract self-standing mathematical objects in works byGeorge Boole(1860),L. M. Milne-Thomson(1933), andKároly Jordan[de](1939). Finite differences trace their origins back to one ofJost Bürgi's algorithms (c.1592) and work by others includingIsaac Newton. The formal calculus of finite differences can be viewed as an alternative to thecalculusofinfinitesimals.[4]
Three basic types are commonly considered:forward,backward, andcentralfinite differences.[1][2][3]
Aforward difference, denotedΔh[f],{\displaystyle \Delta _{h}[f],}of afunctionfis a function defined asΔh[f](x)=f(x+h)−f(x).{\displaystyle \Delta _{h}[f](x)=f(x+h)-f(x).}
Depending on the application, the spacinghmay be variable or constant. When omitted,his taken to be 1; that is,Δ[f](x)=Δ1[f](x)=f(x+1)−f(x).{\displaystyle \Delta [f](x)=\Delta _{1}[f](x)=f(x+1)-f(x).}
Abackward differenceuses the function values atxandx−h, instead of the values atx+handx:∇h[f](x)=f(x)−f(x−h)=Δh[f](x−h).{\displaystyle \nabla _{h}[f](x)=f(x)-f(x-h)=\Delta _{h}[f](x-h).}
Finally, thecentral differenceis given byδh[f](x)=f(x+h2)−f(x−h2)=Δh/2[f](x)+∇h/2[f](x).{\displaystyle \delta _{h}[f](x)=f(x+{\tfrac {h}{2}})-f(x-{\tfrac {h}{2}})=\Delta _{h/2}[f](x)+\nabla _{h/2}[f](x).}
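As an illustration (a minimal Python sketch, not part of the original article), the three differences can be evaluated directly; sin is an assumed test function chosen because its exact derivative, cos, is available for comparison, and the step size is an arbitrary choice.

```python
import math

def forward(f, x, h):
    return f(x + h) - f(x)               # forward difference
def backward(f, x, h):
    return f(x) - f(x - h)               # backward difference
def central(f, x, h):
    return f(x + h / 2) - f(x - h / 2)   # central difference

x, h = 1.0, 0.1
exact = math.cos(x)                      # true derivative of sin at x
for name, d in (("forward", forward), ("backward", backward), ("central", central)):
    quotient = d(math.sin, x, h) / h     # difference quotient
    print(f"{name:8s} {quotient:.6f}  error {abs(quotient - exact):.2e}")
```

Dividing each difference by h gives the corresponding difference quotient; the central quotient is noticeably closer to cos(1), which anticipates the error discussion below.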
The approximation ofderivativesby finite differences plays a central role infinite difference methodsfor thenumericalsolution ofdifferential equations, especiallyboundary value problems.
Thederivativeof a functionfat a pointxis defined by thelimitf′(x)=limh→0f(x+h)−f(x)h.{\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.}
Ifhhas a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be writtenf(x+h)−f(x)h=Δh[f](x)h.{\displaystyle {\frac {f(x+h)-f(x)}{h}}={\frac {\Delta _{h}[f](x)}{h}}.}
Hence, the forward difference divided byhapproximates the derivative whenhis small. The error in this approximation can be derived fromTaylor's theorem. Assuming thatfis twice differentiable, we haveΔh[f](x)h−f′(x)=O(h)→0ash→0.{\displaystyle {\frac {\Delta _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
The same formula holds for the backward difference:∇h[f](x)h−f′(x)=O(h)→0ash→0.{\displaystyle {\frac {\nabla _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.}
However, the central (also called centered) difference yields a more accurate approximation. Iffis three times differentiable,δh[f](x)h−f′(x)=O(h2).{\displaystyle {\frac {\delta _{h}[f](x)}{h}}-f'(x)=O\left(h^{2}\right).}
The main problem[citation needed]with the central difference method, however, is that oscillating functions can yield zero derivative. Iff(nh) = 1fornodd, andf(nh) = 2forneven, thenf′(nh) = 0if it is calculated with thecentral difference scheme. This is particularly troublesome if the domain offis discrete. See alsoSymmetric derivative.
Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section).[1][2][3]
In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula forf′(x+h/2)andf′(x−h/2)and applying a central difference formula for the derivative off′atx, we obtain the central difference approximation of the second derivative off:f″(x)≈δh2[f](x)h2=f(x+h)−2f(x)+f(x−h)h2.{\displaystyle f''(x)\approx {\frac {\delta _{h}^{2}[f](x)}{h^{2}}}={\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.}
Similarly we can apply other differencing formulas in a recursive manner.
More generally, then-th order forward, backward, and centraldifferences are given by, respectively,Δhn[f](x)=∑i=0n(−1)n−i(ni)f(x+ih),{\displaystyle \Delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{n-i}{\binom {n}{i}}f(x+ih),}∇hn[f](x)=∑i=0n(−1)i(ni)f(x−ih),{\displaystyle \nabla _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f(x-ih),}δhn[f](x)=∑i=0n(−1)i(ni)f(x+(n2−i)h).{\displaystyle \delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f\left(x+\left({\tfrac {n}{2}}-i\right)h\right).}
These equations use binomial coefficients after the summation sign, written as(ni){\displaystyle {\tbinom {n}{i}}}. Each row ofPascal's triangleprovides the coefficient for each value ofi.
Note that the central difference will, for oddn, havehmultiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by substituting the average ofδn[f](x−h2){\displaystyle \ \delta ^{n}[f](\ x-{\tfrac {\ h\ }{2}}\ )\ }andδn[f](x+h2).{\displaystyle \ \delta ^{n}[f](\ x+{\tfrac {\ h\ }{2}}\ )~.}
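A short sketch (illustrative only, not from the original text) of the n-th forward difference written with the binomial-coefficient sum above, checked against the recursive definition of repeatedly applying the first difference; the test function, point, and step are arbitrary choices.

```python
from math import comb, sin

def nth_forward_difference(f, x, n, h=1.0):
    """n-th forward difference via the sum of (-1)^(n-i) * C(n, i) * f(x + i*h)."""
    return sum((-1) ** (n - i) * comb(n, i) * f(x + i * h) for i in range(n + 1))

def repeated_forward_difference(f, x, n, h=1.0):
    """The same quantity obtained by applying the first forward difference n times."""
    if n == 0:
        return f(x)
    return (repeated_forward_difference(f, x + h, n - 1, h)
            - repeated_forward_difference(f, x, n - 1, h))

print(nth_forward_difference(sin, 0.3, 3, 0.1))
print(repeated_forward_difference(sin, 0.3, 3, 0.1))   # agrees with the line above
```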
Forward differences applied to asequenceare sometimes called thebinomial transformof the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using theNörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated usingasymptotic expansionorsaddle-pointtechniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for largen.
The relationship of these higher-order differences with the respective derivatives is straightforward,dnfdxn(x)=Δhn[f](x)hn+O(h)=∇hn[f](x)hn+O(h)=δhn[f](x)hn+O(h2).{\displaystyle {\frac {d^{n}f}{dx^{n}}}(x)={\frac {\Delta _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\nabla _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\delta _{h}^{n}[f](x)}{h^{n}}}+O\left(h^{2}\right).}
Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of orderh. However, the combinationΔh[f](x)−12Δh2[f](x)h=−f(x+2h)−4f(x+h)+3f(x)2h{\displaystyle {\frac {\Delta _{h}[f](x)-{\frac {1}{2}}\Delta _{h}^{2}[f](x)}{h}}=-{\frac {f(x+2h)-4f(x+h)+3f(x)}{2h}}}approximatesf′(x)up to a term of orderh2. This can be proven by expanding the above expression inTaylor series, or by using the calculus of finite differences, explained below.
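The following sketch (an illustration added here, with exp as an assumed test function) checks this numerically: halving h roughly halves the error of the plain forward quotient but quarters the error of the combined formula.

```python
import math

def forward_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

def second_order_forward(f, x, h):
    # -(f(x+2h) - 4*f(x+h) + 3*f(x)) / (2h), the combination given above
    return -(f(x + 2 * h) - 4 * f(x + h) + 3 * f(x)) / (2 * h)

x = 1.0
exact = math.exp(x)                           # d/dx exp(x) = exp(x)
for h in (0.1, 0.05, 0.025):
    e1 = abs(forward_quotient(math.exp, x, h) - exact)
    e2 = abs(second_order_forward(math.exp, x, h) - exact)
    print(f"h={h:<6} first-order error {e1:.2e}   second-order error {e2:.2e}")
```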
If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.
For a givenpolynomialof degreen≥ 1, expressed in the functionP(x), with real numbersa≠ 0andbandlower order terms(if any) marked asl.o.t.:P(x)=axn+bxn−1+l.o.t.{\displaystyle P(x)=ax^{n}+bx^{n-1}+l.o.t.}
Afternpairwise differences, the following result can be achieved, whereh≠ 0is areal numbermarking the arithmetic difference:[5]Δhn[P](x)=ahnn!{\displaystyle \Delta _{h}^{n}[P](x)=ah^{n}n!}
Only the coefficient of the highest-order term remains. As this result is constant with respect tox, any further pairwise differences will have the value0.
LetQ(x)be a polynomial of degree1:Δh[Q](x)=Q(x+h)−Q(x)=[a(x+h)+b]−[ax+b]=ah=ah11!{\displaystyle \Delta _{h}[Q](x)=Q(x+h)-Q(x)=[a(x+h)+b]-[ax+b]=ah=ah^{1}1!}
This proves it for the base case.
LetR(x)be a polynomial of degreem− 1wherem≥ 2and the coefficient of the highest-order term bea≠ 0. Assuming the following holds true for all polynomials of degreem− 1:Δhm−1[R](x)=ahm−1(m−1)!{\displaystyle \Delta _{h}^{m-1}[R](x)=ah^{m-1}(m-1)!}
LetS(x)be a polynomial of degreem. With one pairwise difference:Δh[S](x)=[a(x+h)m+b(x+h)m−1+l.o.t.]−[axm+bxm−1+l.o.t.]=ahmxm−1+l.o.t.=T(x){\displaystyle \Delta _{h}[S](x)=[a(x+h)^{m}+b(x+h)^{m-1}+{\text{l.o.t.}}]-[ax^{m}+bx^{m-1}+{\text{l.o.t.}}]=ahmx^{m-1}+{\text{l.o.t.}}=T(x)}
Asahm≠ 0, this results in a polynomialT(x)of degreem− 1, withahmas the coefficient of the highest-order term. Given the assumption above andm− 1pairwise differences (resulting in a total ofmpairwise differences forS(x)), it can be found that:Δhm−1[T](x)=ahm⋅hm−1(m−1)!=ahmm!{\displaystyle \Delta _{h}^{m-1}[T](x)=ahm\cdot h^{m-1}(m-1)!=ah^{m}m!}
This completes the proof.
This identity can be used to find the lowest-degree polynomial that intercepts a number of points(x,y)where the difference on thex-axis from one point to the next is a constanth≠ 0. For example, given the following points:
We can use a differences table, where for all cells to the right of the firsty, the following relation to the cells in the column immediately to the left exists for a cell(a+ 1,b+ 1), with the top-leftmost cell being at coordinate(0, 0):(a+1,b+1)=(a,b+1)−(a,b){\displaystyle (a+1,b+1)=(a,b+1)-(a,b)}
To find the first term, the following table can be used:
This arrives at a constant648. The arithmetic difference ish= 3, as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised this is a polynomial of degree3. Thus, using the identity above:648=a⋅33⋅3!=a⋅27⋅6=a⋅162{\displaystyle 648=a\cdot 3^{3}\cdot 3!=a\cdot 27\cdot 6=a\cdot 162}
Solving fora, it can be found to have the value4. Thus, the first term of the polynomial is4x3.
Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again:
Here, the constant is achieved after only two pairwise differences, thus the following result:−306=a⋅32⋅2!=a⋅18{\displaystyle -306=a\cdot 3^{2}\cdot 2!=a\cdot 18}
Solving fora, which is−17, the polynomial's second term is−17x2.
Moving on to the next term, by subtracting out the second term:
Thus the constant is achieved after only one pairwise difference:108=a⋅31⋅1!=a⋅3{\displaystyle 108=a\cdot 3^{1}\cdot 1!=a\cdot 3}
It can be found thata= 36and thus the third term of the polynomial is36x. Subtracting out the third term:
Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant−19. Thus, the lowest-degree polynomial intercepting all the points in the first table is found:4x3−17x2+36x−19{\displaystyle 4x^{3}-17x^{2}+36x-19}
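The table-based procedure above can be mechanised. The sketch below is illustrative only: since the original table of points is not reproduced here, the sample x-values are hypothetical, equally spaced with h = 3 and generated from the polynomial just found, 4x^3 − 17x^2 + 36x − 19; the code recovers the degree and the leading coefficient from the constant difference row.

```python
from math import factorial

def poly(x):
    return 4 * x**3 - 17 * x**2 + 36 * x - 19

h = 3
xs = [1, 4, 7, 10, 13, 16]                 # hypothetical equally spaced abscissas
ys = [poly(x) for x in xs]

def difference_rows(values):
    """Yield successive rows of pairwise differences until a constant row is reached."""
    row = list(values)
    while len(set(row)) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        yield row

rows = list(difference_rows(ys))
degree = len(rows)                         # number of differencings needed
constant = rows[-1][0]                     # 648, as in the worked example above
leading = constant / (h**degree * factorial(degree))   # a, from a * h^n * n!
print(degree, constant, leading)           # 3 648 4.0
```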
Usinglinear algebraone can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that theTaylor expansionof the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid.[6]This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side.[7]Finite difference approximations for non-standard (and even non-integer) stencils given an arbitrary stencil and a desired derivative order may be constructed.[8]
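A compact sketch of that linear-algebra construction (an illustration using NumPy; the stencils shown are example choices): the weights are obtained by matching Taylor expansions at the stencil offsets.

```python
import numpy as np
from math import factorial

def fd_weights(offsets, deriv_order):
    """Weights w with  f^(d)(x) ~ (1/h**d) * sum_i w[i] * f(x + offsets[i]*h).

    Solves sum_i w[i] * offsets[i]**k = d! if k == deriv_order else 0,
    for k = 0, ..., len(offsets)-1 (Taylor-expansion matching).
    """
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    A = np.vander(offsets, n, increasing=True).T    # A[k, i] = offsets[i]**k
    b = np.zeros(n)
    b[deriv_order] = factorial(deriv_order)
    return np.linalg.solve(A, b)

print(fd_weights([-1, 0, 1], 2))     # [ 1. -2.  1.] : central second-derivative stencil
print(fd_weights([0, 1, 2, 3], 1))   # one-sided stencil usable at a grid edge
```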
An important application of finite differences is innumerical analysis, especially innumerical differential equations, which aim at the numerical solution ofordinaryandpartial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are calledfinite difference methods.
Common applications of the finite difference method are in computational science and engineering disciplines, such asthermal engineering,fluid mechanics, etc.
TheNewton seriesconsists of the terms of theNewton forward difference equation, named afterIsaac Newton; in essence, it is theGregory–Newton interpolation formula[9](named afterIsaac NewtonandJames Gregory), first published in hisPrincipia Mathematicain 1687,[10][11]namely the discrete analog of the continuous Taylor expansion,
f(x)=∑k=0∞Δk[f](a)k!(x−a)k=∑k=0∞(x−ak)Δk[f](a),{\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}\,(x-a)_{k}=\sum _{k=0}^{\infty }{\binom {x-a}{k}}\,\Delta ^{k}[f](a),}
which holds for anypolynomialfunctionfand for many (but not all)analytic functions. (It does not hold whenfisexponential typeπ{\displaystyle \pi }. This is easily seen, as the sine function vanishes at integer multiples ofπ{\displaystyle \pi }; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression(xk)=(x)kk!{\displaystyle {\binom {x}{k}}={\frac {(x)_{k}}{k!}}}is thebinomial coefficient, and(x)k=x(x−1)(x−2)⋯(x−k+1){\displaystyle (x)_{k}=x(x-1)(x-2)\cdots (x-k+1)}is the "falling factorial" or "lower factorial", while theempty product(x)0is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values ofx,h= 1of the generalization below.
Note the formal correspondence of this result toTaylor's theorem. Historically, this, as well as theChu–Vandermonde identity,(x+y)n=∑k=0n(nk)(x)n−k(y)k,{\displaystyle (x+y)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(x)_{n-k}\,(y)_{k},}(following from it, and corresponding to thebinomial theorem), are included in the observations that matured to the system ofumbral calculus.
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (seeHolstein–Primakoff transformation),bosonic operator functionsor discrete counting statistics.[12]
To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling theFibonacci sequencef= 2, 2, 4, ...One can find apolynomialthat reproduces these values, by first computing a difference table, and then substituting the differences that correspond tox0(underlined) into the formula as follows,xf=Δ0Δ1Δ212_0_222_234f(x)=Δ0⋅1+Δ1⋅(x−x0)11!+Δ2⋅(x−x0)22!(x0=1)=2⋅1+0⋅x−11+2⋅(x−1)(x−2)2=2+(x−1)(x−2){\displaystyle {\begin{matrix}{\begin{array}{|c||c|c|c|}\hline x&f=\Delta ^{0}&\Delta ^{1}&\Delta ^{2}\\\hline 1&{\underline {2}}&&\\&&{\underline {0}}&\\2&2&&{\underline {2}}\\&&2&\\3&4&&\\\hline \end{array}}&\quad {\begin{aligned}f(x)&=\Delta ^{0}\cdot 1+\Delta ^{1}\cdot {\dfrac {(x-x_{0})_{1}}{1!}}+\Delta ^{2}\cdot {\dfrac {(x-x_{0})_{2}}{2!}}\quad (x_{0}=1)\\\\&=2\cdot 1+0\cdot {\dfrac {x-1}{1}}+2\cdot {\dfrac {(x-1)(x-2)}{2}}\\\\&=2+(x-1)(x-2)\\\end{aligned}}\end{matrix}}}
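The same computation can be scripted. This sketch (illustrative, unit-step case h = 1) builds the forward-difference table and evaluates the Gregory–Newton formula; applied to the f = 2, 2, 4 data above it reproduces 2 + (x − 1)(x − 2).

```python
from math import factorial

def newton_forward(samples, x0=1.0, h=1.0):
    """Gregory-Newton interpolating polynomial for equally spaced samples
    f(x0), f(x0 + h), ..., returned as a callable."""
    diffs = [list(samples)]
    while len(diffs[-1]) > 1:                     # forward-difference table
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    leading = [row[0] for row in diffs]           # k-th differences at x0, k = 0, 1, ...

    def p(x):
        s = (x - x0) / h
        total = 0.0
        for k, d in enumerate(leading):
            term = 1.0
            for j in range(k):                    # falling factorial s(s-1)...(s-k+1)
                term *= s - j
            total += d * term / factorial(k)
        return total
    return p

p = newton_forward([2, 2, 4])
print([p(x) for x in (1, 2, 3, 4)])               # [2.0, 2.0, 4.0, 8.0]
```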
For the case of nonuniform steps in the values ofx, Newton computes thedivided differences,Δj,0=yj,Δj,k=Δj+1,k−1−Δj,k−1xj+k−xj∋{k>0,j≤max(j)−k},Δ0k=Δ0,k{\displaystyle \Delta _{j,0}=y_{j},\qquad \Delta _{j,k}={\frac {\Delta _{j+1,k-1}-\Delta _{j,k-1}}{x_{j+k}-x_{j}}}\quad \ni \quad \left\{k>0,\;j\leq \max \left(j\right)-k\right\},\qquad \Delta 0_{k}=\Delta _{0,k}}the series of products,P0=1,Pk+1=Pk⋅(ξ−xk),{\displaystyle {P_{0}}=1,\quad \quad P_{k+1}=P_{k}\cdot \left(\xi -x_{k}\right),}and the resulting polynomial is thescalar product,[13]f(ξ)=Δ0⋅P(ξ).{\displaystyle f(\xi )=\Delta 0\cdot P\left(\xi \right).}
In analysis withp-adic numbers,Mahler's theoremstates that the assumption thatfis a polynomial function can be weakened all the way to the assumption thatfis merely continuous.
Carlson's theoremprovides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist.
The Newton series, together with theStirling seriesand theSelberg series, is a special case of the generaldifference series, all of which are defined in terms of suitably scaled forward differences.
In a compressed and slightly more general form, for equidistant nodes, the formula readsf(x)=∑k=0(x−ahk)∑j=0k(−1)k−j(kj)f(a+jh).{\displaystyle f(x)=\sum _{k=0}{\binom {\frac {x-a}{h}}{k}}\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}f(a+jh).}
The forward difference can be considered as anoperator, called thedifference operator, which maps the functionftoΔh[f].[14][15]This operator amounts toΔh=Th−I,{\displaystyle \Delta _{h}=\operatorname {T} _{h}-\operatorname {I} \ ,}whereThis theshift operatorwith steph, defined byTh[f](x) =f(x+h),andIis theidentity operator.
Finite differences of higher orders can be defined in a recursive manner asΔhn≡Δh(Δhn−1){\displaystyle \Delta _{h}^{n}\equiv \Delta _{h}\left(\Delta _{h}^{n-1}\right)}. Another equivalent definition isΔhn≡[Th−I]n{\displaystyle \Delta _{h}^{n}\equiv \left[\operatorname {T} _{h}-\operatorname {I} \right]^{n}}.
The difference operatorΔhis alinear operator, as such it satisfiesΔh[α f+β g](x) =αΔh[f](x) +βΔh[g](x).
It also satisfies a specialLeibniz rule:Δh[f⋅g](x)=Δh[f](x)⋅g(x+h)+f(x)⋅Δh[g](x).{\displaystyle \Delta _{h}[f\cdot g](x)=\Delta _{h}[f](x)\cdot g(x+h)+f(x)\cdot \Delta _{h}[g](x).}
Similar Leibniz rules hold for the backward and central differences.
Formally applying theTaylor serieswith respect toh, yields the operator equationΔh=hD+12!h2D2+13!h3D3+⋯=ehD−I,{\displaystyle \operatorname {\Delta } _{h}=h\operatorname {D} +{\frac {1}{2!}}h^{2}\operatorname {D} ^{2}+{\frac {1}{3!}}h^{3}\operatorname {D} ^{3}+\cdots =e^{h\operatorname {D} }-\operatorname {I} \ ,}whereDdenotes the conventional, continuous derivative operator, mappingfto its derivativef′.The expansion is valid when both sides act onanalytic functions, for sufficiently smallh; in the special case that the series of derivatives terminates (when the function operated on is a finitepolynomial) the expression is exact, forallfinite stepsizes,h.ThusTh=ehD,and formally inverting the exponential yieldshD=ln(1+Δh)=Δh−12Δh2+13Δh3−⋯.{\displaystyle h\operatorname {D} =\ln(1+\Delta _{h})=\Delta _{h}-{\tfrac {1}{2}}\,\Delta _{h}^{2}+{\tfrac {1}{3}}\,\Delta _{h}^{3}-\cdots ~.}This formula holds in the sense that both operators give the same result when applied to a polynomial.
Even for analytic functions, the series on the right is not guaranteed to converge; it may be anasymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation tof′(x)mentioned at the end of the section§ Higher-order differences.
The analogous formulas for the backward and central difference operators arehD=−ln(1−∇h)andhD=2arsinh(12δh).{\displaystyle h\operatorname {D} =-\ln(1-\nabla _{h})\quad {\text{ and }}\quad h\operatorname {D} =2\operatorname {arsinh} \left({\tfrac {1}{2}}\,\delta _{h}\right)~.}
The calculus of finite differences is related to theumbral calculusof combinatorics. This remarkably systematic correspondence is due to the identity of thecommutatorsof the umbral quantities to their continuum analogs (h→ 0limits),
[Δhh,xTh−1]=[D,x]=I.{\displaystyle \left[{\frac {\Delta _{h}}{h}},x\,\operatorname {T} _{h}^{-1}\right]=[\operatorname {D} ,x]=I.}
A large number of formal differential relations of standard calculus involving functionsf(x)thus systematically map to umbral finite-difference analogs involvingf(xTh−1){\displaystyle f(x\operatorname {T} _{h}^{-1})}.
For instance, the umbral analog of a monomialxnis a generalization of the above falling factorial (Pochhammer k-symbol),(x)n≡(xTh−1)n=x(x−h)(x−2h)⋯(x−(n−1)h),{\displaystyle \ (x)_{n}\equiv \left(\ x\ \operatorname {T} _{h}^{-1}\right)^{n}=x\left(x-h\right)\left(x-2h\right)\cdots {\bigl (}x-\left(n-1\right)\ h{\bigr )}\ ,}so thatΔhh(x)n=n(x)n−1,{\displaystyle \ {\frac {\Delta _{h}}{h}}(x)_{n}=n\ (x)_{n-1}\ ,}hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary functionf(x)in such symbols), and so on.
For example, the umbral sine issin(xTh−1)=x−(x)33!+(x)55!−(x)77!+⋯{\displaystyle \ \sin \left(x\ \operatorname {T} _{h}^{-1}\right)=x-{\frac {(x)_{3}}{3!}}+{\frac {(x)_{5}}{5!}}-{\frac {(x)_{7}}{7!}}+\cdots \ }
As in thecontinuum limit, theeigenfunctionofΔh/halso happens to be an exponential,Δhh(1+λh)xh=λ(1+λh)xh,{\displaystyle {\frac {\Delta _{h}}{h}}\,(1+\lambda h)^{\frac {x}{h}}=\lambda \,(1+\lambda h)^{\frac {x}{h}}\ ,}
and henceFourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials.[16]This umbral exponential thus amounts to the exponentialgenerating functionof thePochhammer symbols.
Thus, for instance, theDirac delta functionmaps to its umbral correspondent, thecardinal sine functionδ(x)↦sin[π2(1+xh)]π(x+h),{\displaystyle \ \delta (x)\mapsto {\frac {\sin \left[{\frac {\pi }{2}}\left(1+{\frac {x}{h}}\right)\right]}{\pi (x+h)}}\ ,}and so forth.[17]Difference equationscan often be solved with techniques very similar to those for solvingdifferential equations.
The inverse operator of the forward difference operator, and thus the umbral integral, is theindefinite sumor antidifference operator.
Analogous torules for finding the derivative, we have:
All of the above rules apply equally well to any difference operator as toΔ, includingδand∇.
See references.[18][19][20][21]
Finite differences can be considered in more than one variable. They are analogous topartial derivativesin several variables.
Some partial derivative approximations are:fx(x,y)≈f(x+h,y)−f(x−h,y)2hfy(x,y)≈f(x,y+k)−f(x,y−k)2kfxx(x,y)≈f(x+h,y)−2f(x,y)+f(x−h,y)h2fyy(x,y)≈f(x,y+k)−2f(x,y)+f(x,y−k)k2fxy(x,y)≈f(x+h,y+k)−f(x+h,y−k)−f(x−h,y+k)+f(x−h,y−k)4hk.{\displaystyle {\begin{aligned}f_{x}(x,y)&\approx {\frac {f(x+h,y)-f(x-h,y)}{2h}}\\f_{y}(x,y)&\approx {\frac {f(x,y+k)-f(x,y-k)}{2k}}\\f_{xx}(x,y)&\approx {\frac {f(x+h,y)-2f(x,y)+f(x-h,y)}{h^{2}}}\\f_{yy}(x,y)&\approx {\frac {f(x,y+k)-2f(x,y)+f(x,y-k)}{k^{2}}}\\f_{xy}(x,y)&\approx {\frac {f(x+h,y+k)-f(x+h,y-k)-f(x-h,y+k)+f(x-h,y-k)}{4hk}}.\end{aligned}}}
Alternatively, for applications in which the computation offis the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case isfxy(x,y)≈f(x+h,y+k)−f(x+h,y)−f(x,y+k)+2f(x,y)−f(x−h,y)−f(x,y−k)+f(x−h,y−k)2hk,{\displaystyle f_{xy}(x,y)\approx {\frac {f(x+h,y+k)-f(x+h,y)-f(x,y+k)+2f(x,y)-f(x-h,y)-f(x,y-k)+f(x-h,y-k)}{2hk}},}since the only values to compute that are not already needed for the previous four equations aref(x+h,y+k)andf(x−h,y−k).
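Both mixed-derivative formulas can be checked with a short sketch (an illustration; the test function f(x, y) = sin(x)*exp(y) is an assumption made here so that the exact value cos(x)*exp(y) is known).

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)

def fxy_four_point(f, x, y, h, k):
    return (f(x + h, y + k) - f(x + h, y - k)
            - f(x - h, y + k) + f(x - h, y - k)) / (4 * h * k)

def fxy_reusing_values(f, x, y, h, k):
    # Cheaper variant above: reuses values already needed for f_x, f_y, f_xx, f_yy.
    return (f(x + h, y + k) - f(x + h, y) - f(x, y + k) + 2 * f(x, y)
            - f(x - h, y) - f(x, y - k) + f(x - h, y - k)) / (2 * h * k)

x, y, h, k = 0.7, 0.3, 1e-3, 1e-3
exact = math.cos(x) * math.exp(y)
print(exact, fxy_four_point(f, x, y, h, k), fxy_reusing_values(f, x, y, h, k))
```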
|
https://en.wikipedia.org/wiki/Newton_series
|
Incomputer science, alogical shiftis abitwise operationthat shifts all the bits of its operand. The two base variants are thelogical left shiftand thelogical right shift. This is further modulated by the number of bit positions a given value shall be shifted, such asshift left by 1orshift right by n. Unlike anarithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number'sexponentfrom itssignificand(mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled, usually with zeros, and possibly ones (contrast with acircular shift).
A logical shift is often used when its operand is being treated as asequenceof bits instead of as a number.
Logical shifts can be useful as efficient ways to perform multiplication or division of unsignedintegersby powers of two. Shifting left bynbits on a signed or unsigned binary number has the effect of multiplying it by 2n. Shifting right bynbits on anunsignedbinary number has the effect of dividing it by 2n(rounding towards 0).
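For non-negative integers this correspondence is easy to see directly (a trivial Python illustration; the values are arbitrary):

```python
x = 23
print(x << 3, x * 2**3)    # 184 184 : left shift by n multiplies by 2**n
print(x >> 2, x // 2**2)   # 5 5     : right shift by n divides by 2**n, rounding toward 0
```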
Logical right shift differs from arithmetic right shift. Thus, many languages have differentoperatorsfor them. For example, inJavaandJavaScript, the logical right shift operator is>>>, but the arithmetic right shift operator is>>. (Java has only one left shift operator (<<), because logical and arithmetic left shifts have the same effect.)
Theprogramming languagesC,C++, andGo, however, have only one right shift operator,>>. Most C and C++ implementations, and Go, choose which right shift to perform depending on the type of integer being shifted: signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. Separately, C++ overloads the shift operators<<and>>as the stream insertion and extraction operators used with its standard output and input stream objects,coutandcin.
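Python is used below purely as a sketch of the distinction (its integers are arbitrary-precision, and its >> sign-extends negative values like an arithmetic shift), so a logical right shift on an n-bit value is simulated by masking to the width first; the 32-bit width and example value are assumptions made for illustration.

```python
def logical_right_shift(value, amount, width=32):
    mask = (1 << width) - 1          # keep only the low `width` bits
    return (value & mask) >> amount  # vacated high bits are filled with zeros

def arithmetic_right_shift(value, amount):
    return value >> amount           # Python sign-extends negative integers

x = -8                               # 0xFFFFFFF8 as a 32-bit two's-complement value
print(arithmetic_right_shift(x, 1))      # -4          (sign bit copied in)
print(hex(logical_right_shift(x, 1)))    # 0x7ffffffc  (zero shifted in)
```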
All currently relevant C standards (ISO/IEC 9899:1999 to 2011) leave the result undefined when the shift count is greater than or equal to the width in bits of the shifted operand. This allows C compilers to emit efficient code for various platforms by permitting direct use of the native shift instructions, which have differing behaviour. For example, shift-left-word inPowerPCchooses the more-intuitive behaviour where shifting by the bit width or above gives zero,[6]whereas SHL inx86masks the shift amount to the lower bits of the count (to reduce the maximum execution time of the instructions), and as such a shift by the bit width does not change the value.[7]
Some languages, such as the.NET FrameworkandLLVM, also leave shifting by the bit width and aboveunspecified(.NET)[8]orundefined(LLVM).[9]Others choose to specify the behavior of their most common target platforms, such asC#which specifies the x86 behavior.[10]
If the bit sequence 0001 0111 (decimal 23) is logically shifted by one bit position, then: a logical left shift yields 0010 1110 (decimal 46), with the MSB shifted out and a 0 shifted into the vacated LSB position; a logical right shift yields 0000 1011 (decimal 11), with the LSB shifted out and a 0 shifted into the vacated MSB position.
Note: MSB = Most Significant Bit,
LSB = Least Significant Bit
|
https://en.wikipedia.org/wiki/Logical_shift_left
|
Description logics(DL) are a family of formalknowledge representationlanguages. Many DLs are more expressive thanpropositional logicbut less expressive thanfirst-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually)decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance betweenexpressive powerandreasoningcomplexityby supporting different sets of mathematical constructors.[1]
DLs are used inartificial intelligenceto describe and reason about the relevant concepts of an application domain (known asterminological knowledge). It is of particular importance in providing a logical formalism forontologiesand theSemantic Web: theWeb Ontology Language(OWL) and its profiles are based on DLs. The most notable application of DLs and OWL is inbiomedical informaticswhere DL assists in the codification of biomedical knowledge.[citation needed]
A description logic (DL) modelsconcepts,rolesandindividuals, and their relationships.
The fundamental modeling concept of a DL is theaxiom—a logical statement relating roles and/or concepts.[2]This is a key difference from theframesparadigm where aframe specificationdeclares and completely defines a class.[2]
The description logic community uses different terminology than thefirst-order logic(FOL) community for operationally equivalent notions; some examples are given below. TheWeb Ontology Language(OWL) uses again a different terminology, also given in the table below.
There are many varieties of description logics and there is an informal naming convention, roughly describing the operators allowed. Theexpressivityis encoded in the label for a logic starting with one of the following basic logics:
Followed by any of the following extensions:
Some canonical DLs that do not exactly fit this convention are:
As an example,ALC{\displaystyle {\mathcal {ALC}}}is a centrally important description logic from which comparisons with other varieties can be made.ALC{\displaystyle {\mathcal {ALC}}}is simplyAL{\displaystyle {\mathcal {AL}}}with complement of any concept allowed, not just atomic concepts.ALC{\displaystyle {\mathcal {ALC}}}is used instead of the equivalentALUE{\displaystyle {\mathcal {ALUE}}}.
A further example, the description logicSHIQ{\displaystyle {\mathcal {SHIQ}}}is the logicALC{\displaystyle {\mathcal {ALC}}}plus extended cardinality restrictions, and transitive and inverse roles. The naming conventions aren't purely systematic so that the logicALCOIN{\displaystyle {\mathcal {ALCOIN}}}might be referred to asALCNIO{\displaystyle {\mathcal {ALCNIO}}}and other abbreviations are also made where possible.
The Protégé ontology editor supportsSHOIN(D){\displaystyle {\mathcal {SHOIN}}^{\mathcal {(D)}}}. Three major biomedical informatics terminology bases,SNOMED CT, GALEN, and GO, are expressible inEL{\displaystyle {\mathcal {EL}}}(with additional role properties).
OWL 2 provides the expressiveness ofSROIQ(D){\displaystyle {\mathcal {SROIQ}}^{\mathcal {(D)}}}, OWL-DL is based onSHOIN(D){\displaystyle {\mathcal {SHOIN}}^{\mathcal {(D)}}}, and for OWL-Lite it isSHIF(D){\displaystyle {\mathcal {SHIF}}^{\mathcal {(D)}}}.
Description logic was given its current name in the 1980s. Previous to this it was called (chronologically):terminological systems, andconcept languages.
Framesandsemantic networkslack formal (logic-based) semantics.[5]DL was first introduced intoknowledge representation(KR) systems to overcome this deficiency.[5]
The first DL-based KR system wasKL-ONE(byRonald J. Brachmanand Schmolze, 1985). During the '80s other DL-based systems usingstructural subsumption algorithms[5]were developed including KRYPTON (1983),LOOM(1987), BACK (1988), K-REP (1991) and CLASSIC (1991). This approach featured DL with limited expressiveness but relatively efficient (polynomial time) reasoning.[5]
In the early '90s, the introduction of a newtableau based algorithmparadigm allowed efficient reasoning on more expressive DL.[5]DL-based systems using these algorithms — such as KRIS (1991) — show acceptable reasoning performance on typical inference problems even though the worst case complexity is no longer polynomial.[5]
From the mid '90s, reasoners were created with good practical performance on very expressive DL with high worst case complexity.[5]Examples from this period include FaCT,[6]RACER(2001), CEL (2005), andKAON 2(2005).
DL reasoners, such as FaCT, FaCT++,[6]RACER, DLP and Pellet,[7]implement themethod of analytic tableaux. KAON2 is implemented by algorithms which reduce a SHIQ(D) knowledge base to a disjunctivedatalogprogram.
TheDARPA Agent Markup Language(DAML) andOntology Inference Layer(OIL)ontology languagesfor theSemantic Webcan be viewed assyntacticvariants of DL.[8]In particular, the formal semantics and reasoning in OIL use theSHIQ{\displaystyle {\mathcal {SHIQ}}}DL.[9]TheDAML+OILDL was developed as a submission to[10]—and formed the starting point of—theWorld Wide Web Consortium(W3C) Web Ontology Working Group.[11]In 2004, the Web Ontology Working Group completed its work by issuing theOWL[12]recommendation. The design of OWL is based on theSH{\displaystyle {\mathcal {SH}}}family of DL[13]with OWL DL and OWL Lite based onSHOIN(D){\displaystyle {\mathcal {SHOIN}}^{\mathcal {(D)}}}andSHIF(D){\displaystyle {\mathcal {SHIF}}^{\mathcal {(D)}}}respectively.[13]
The W3C OWL Working Group began work in 2007 on a refinement of - and extension to - OWL.[14]In 2009, this was completed by the issuance of theOWL2recommendation.[15]OWL2 is based on the description logicSROIQ(D){\displaystyle {\mathcal {SROIQ}}^{\mathcal {(D)}}}.[16]Practical experience demonstrated that OWL DL lacked several key features necessary to model complex domains.[2]
In DL, a distinction is drawn between the so-calledTBox(terminological box) and theABox(assertional box). In general, the TBox contains sentences describing concept hierarchies (i.e., relations betweenconcepts) while the ABox containsground sentencesstating where in the hierarchy individuals belong (i.e., relations between individuals and concepts). For example, the statement (1) "Every employee is a person" belongs in the TBox, while the statement (2) "Bob is an employee" belongs in the ABox.
Note that the TBox/ABox distinction is not significant, in the same sense that the two "kinds" of sentences are not treated differently in first-order logic (which subsumes most DL). When translated into first-order logic, a subsumptionaxiomlike (1) is simply a conditional restriction tounarypredicates(concepts) with only variables appearing in it. Clearly, a sentence of this form is not privileged or special over sentences in which only constants ("grounded" values) appear like (2).
So why was the distinction introduced? The primary reason is that the separation can be useful when describing and formulating decision-procedures for various DL. For example, a reasoner might process the TBox and ABox separately, in part because certain key inference problems are tied to one but not the other one ('classification' is related to the TBox, 'instance checking' to the ABox). Another example is that the complexity of the TBox can greatly affect the performance of a given decision-procedure for a certain DL, independently of the ABox. Thus, it is useful to have a way to talk about that specific part of theknowledge base.
The secondary reason is that the distinction can make sense from the knowledge base modeler's perspective. It is plausible to distinguish between our conception of terms/concepts in the world (class axioms in the TBox) and particular manifestations of those terms/concepts (instance assertions in the ABox). In the above example: when the hierarchy within a company is the same in every branch but the assignment to employees is different in every department (because there are other people working there), it makes sense to reuse the TBox for different branches that do not use the same ABox.
There are two features of description logic that are not shared by most other data description formalisms: DL does not make theunique name assumption(UNA) or theclosed-world assumption(CWA). Not having UNA means that two concepts with different names may be allowed by some inference to be shown to be equivalent. Not having CWA, or rather having theopen world assumption(OWA) means that lack of knowledge of a fact does not immediately imply knowledge of the negation of a fact.
Likefirst-order logic(FOL), asyntaxdefines which collections of symbols are legal expressions in a description logic, andsemanticsdetermine meaning. Unlike FOL, a DL may have several well known syntactic variants.[8]
The syntax of a member of the description logic family is characterized by its recursive definition, in which the constructors that can be used to form concept terms are stated. Some constructors are related to logical constructors infirst-order logic(FOL) such asintersectionorconjunctionof concepts,unionordisjunctionof concepts,negationorcomplementof concepts,universal restrictionandexistential restriction. Other constructors have no corresponding construction in FOL including restrictions on roles for example, inverse,transitivityand functionality.
Let C and D be concepts, a and b be individuals, and R be a role.
If a is R-related to b, then b is called an R-successor of a.
The prototypical DLAttributive Concept Language with Complements(ALC{\displaystyle {\mathcal {ALC}}}) was introduced by Manfred Schmidt-Schauß and Gert Smolka in 1991, and is the basis of many more expressive DLs.[5]The following definitions follow the treatment in Baader et al.[5]
LetNC{\displaystyle N_{C}},NR{\displaystyle N_{R}}andNO{\displaystyle N_{O}}be (respectively)setsofconcept names(also known asatomic concepts),role namesandindividual names(also known asindividuals,nominalsorobjects). Then the ordered triple (NC{\displaystyle N_{C}},NR{\displaystyle N_{R}},NO{\displaystyle N_{O}}) is thesignature.
The set ofALC{\displaystyle {\mathcal {ALC}}}conceptsis the smallest set such that:
Ageneral concept inclusion(GCI) has the formC⊑D{\displaystyle C\sqsubseteq D}whereC{\displaystyle C}andD{\displaystyle D}areconcepts. WriteC≡D{\displaystyle C\equiv D}whenC⊑D{\displaystyle C\sqsubseteq D}andD⊑C{\displaystyle D\sqsubseteq C}. ATBoxis any finite set of GCIs.
AnABoxis a finite set of assertional axioms.
Aknowledge base(KB) is an ordered pair(T,A){\displaystyle ({\mathcal {T}},{\mathcal {A}})}forTBoxT{\displaystyle {\mathcal {T}}}andABoxA{\displaystyle {\mathcal {A}}}.
Thesemanticsof description logics are defined by interpreting concepts as sets of individuals and roles as sets of ordered pairs of individuals. Those individuals are typically assumed from a given domain. The semantics of non-atomic concepts and roles is then defined in terms of atomic concepts and roles. This is done by using a recursive definition similar to the syntax.
The following definitions follow the treatment in Baader et al.[5]
Aterminological interpretationI=(ΔI,⋅I){\displaystyle {\mathcal {I}}=(\Delta ^{\mathcal {I}},\cdot ^{\mathcal {I}})}over asignature(NC,NR,NO){\displaystyle (N_{C},N_{R},N_{O})}consists of
such that
DefineI⊨{\displaystyle {\mathcal {I}}\models }(readin I holds) as follows
LetK=(T,A){\displaystyle {\mathcal {K}}=({\mathcal {T}},{\mathcal {A}})}be a knowledge base.
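To make the set-theoretic semantics concrete, the following is a minimal sketch (not the API of any standard reasoner; the concept, role, and individual names are hypothetical) that evaluates ALC concept expressions over a small finite interpretation and checks a GCI by inclusion of extensions.

```python
def interpret(concept, domain, concepts, roles):
    """Extension of an ALC concept in a finite interpretation.

    concepts: atomic concept name -> set of individuals
    roles:    role name -> set of (individual, individual) pairs
    Concept syntax: "Top", "Bottom", an atomic name, ("not", C),
    ("and", C, D), ("or", C, D), ("exists", R, C), ("forall", R, C).
    """
    if concept == "Top":
        return set(domain)
    if concept == "Bottom":
        return set()
    if isinstance(concept, str):                       # atomic concept
        return set(concepts.get(concept, set()))
    op = concept[0]
    if op == "not":
        return set(domain) - interpret(concept[1], domain, concepts, roles)
    if op in ("and", "or"):
        left = interpret(concept[1], domain, concepts, roles)
        right = interpret(concept[2], domain, concepts, roles)
        return left & right if op == "and" else left | right
    if op in ("exists", "forall"):                     # existential / universal restriction
        _, role, filler = concept
        ext = interpret(filler, domain, concepts, roles)
        def successors(x):
            return {y for (a, y) in roles.get(role, set()) if a == x}
        if op == "exists":
            return {x for x in domain if successors(x) & ext}
        return {x for x in domain if successors(x) <= ext}
    raise ValueError(f"unknown constructor {op!r}")

# Hypothetical interpretation: Alice manages Bob; only Bob is an Employee.
domain = {"alice", "bob"}
concepts = {"Person": {"alice", "bob"}, "Employee": {"bob"}}
roles = {"manages": {("alice", "bob")}}

print(interpret(("exists", "manages", "Employee"), domain, concepts, roles))  # {'alice'}
# The GCI Employee is subsumed by Person holds in this interpretation:
print(interpret("Employee", domain, concepts, roles)
      <= interpret("Person", domain, concepts, roles))                        # True
```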
In addition to the ability to describe concepts formally, one also would like to employ the description of a set of concepts to ask questions about the concepts and instances described. The most common decision problems are basic database-query-like questions likeinstance checking(is a particular instance (member of an ABox) a member of a given concept) andrelation checking(does a relation/role hold between two instances, in other words doesahave propertyb), and the more global-database-questions likesubsumption(is a concept a subset of another concept), andconcept consistency(is there no contradiction among the definitions or chain of definitions). The more operators one includes in a logic and the more complicated the TBox (having cycles, allowing non-atomic concepts to include each other), usually the higher the computational complexity is for each of these problems (seeDescription Logic Complexity Navigatorfor examples).
Many DLs aredecidablefragmentsoffirst-order logic(FOL)[5]and are usually fragments oftwo-variable logicorguarded logic. In addition, some DLs have features that are not covered in FOL; this includesconcrete domains(such as integer or strings, which can be used as ranges for roles such ashasAgeorhasName) or an operator on roles for thetransitive closureof that role.[5]
Fuzzy description logics combinefuzzy logicwith DLs. Since many concepts that are needed forintelligent systemslack well-defined boundaries, or precisely defined criteria of membership, fuzzy logic is needed to deal with notions of vagueness and imprecision. This offers a motivation for a generalization of description logic towards dealing with imprecise and vague concepts.
Description logic is related to—but developed independently of—modal logic(ML).[5]Many—but not all—DLs are syntactic variants of ML.[5]
In general, an object corresponds to apossible world, a concept corresponds to a modal proposition, and a role-bounded quantifier to a modal operator with that role as its accessibility relation.
Operations on roles (such as composition, inversion, etc.) correspond to the modal operations used indynamic logic.[17]
Temporal description logic represents—and allows reasoning about—time dependent concepts and many different approaches to this problem exist.[18]For example, a description logic might be combined with amodaltemporal logicsuch aslinear temporal logic.
There are somesemantic reasonersthat deal with OWL and DL. These are some of the most popular:
|
https://en.wikipedia.org/wiki/Description_logic
|
Reliability engineeringis a sub-discipline ofsystems engineeringthat emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, OR will operate in a defined environment without failure.[1]Reliability is closely related toavailability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
Thereliability functionis theoretically defined as theprobabilityof success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling.Availability,testability,maintainability, andmaintenanceare often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in thecost-effectivenessof systems.
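As a minimal sketch of such modelling (assuming a constant failure rate, i.e. an exponential lifetime distribution, and using made-up failure times purely for illustration), the reliability function can be estimated from observed data as follows.

```python
import math

failure_times = [1200.0, 950.0, 1730.0, 800.0, 1410.0]   # illustrative data, in hours

# Maximum-likelihood estimate of the failure rate under the exponential model.
lam = len(failure_times) / sum(failure_times)

def reliability(t, lam):
    """R(t): probability of surviving beyond time t under the exponential model."""
    return math.exp(-lam * t)

mission_time = 500.0                                      # hours
print(f"estimated failure rate = {lam:.5f} per hour")
print(f"R({mission_time:.0f} h) = {reliability(mission_time, lam):.3f}")
```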
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineeringuncertaintyandrisksof failure. Althoughstochasticparameters define and affect reliability, reliability is not only achieved by mathematics and statistics.[2][3]"Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods forpredictionand measurement."[4]For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massivelymultivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering,safety engineering, andsystem safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.[5]
The wordreliabilitycan be traced back to 1816 and is first attested to the poetSamuel Taylor Coleridge.[6]Before World War II the term was linked mostly torepeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use ofstatistical process controlwas promoted by Dr.Walter A. ShewhartatBell Labs,[7]around the time thatWaloddi Weibullwas working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. TheIEEEformed the Reliability Society in 1948. In 1950, theUnited States Department of Defenseformed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment.[8]This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period the much-used predecessor to military handbook 217 was published by RCA and was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased, and more pragmatic approaches, as used in the consumer industries, were being adopted. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve[9]—see also reliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking became more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade, and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.[10]
The objectives of reliability engineering, in decreasing order of priority, are:[11]
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
Effective reliability engineering requires understanding of the basics offailure mechanismsfor which experience, broad engineering skills and good knowledge from many different special fields of engineering are required,[12]for example:
Reliability may be defined in the following ways:
Many engineering techniques are used in reliabilityrisk assessments, such as reliability block diagrams,hazard analysis,failure mode and effects analysis(FMEA),[13]fault tree analysis(FTA),Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work(SoW) requirements) that will be performed for that specific system.
Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take[14] are to:
The risk here is the combination of the probability and the severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension, as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can trigger false alarms that impede the availability of the system.
In a de minimis definition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
Measures that increase the complexity of technical systems, such as improvements of design and materials, planned inspections, fool-proof design, and backup redundancy, decrease risk but increase cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.[15]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separate document. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability, maintainability, and the resulting system availability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by other stakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability and maintainability rather than reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may also be increased by the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved in relation to both availability and the total cost of ownership (TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolescence risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/or predictive maintenance), although it can never bring it above the inherent reliability.
The reliability plan should clearly provide a strategy for availability control. Whether availability alone or also cost of ownership matters more depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overall availability needs and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., maximum failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.[16] The provision of only quantitative minimum targets (e.g., Mean Time Between Failure (MTBF) values or failure rates) is not sufficient, for several reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems often cannot be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved in showing compliance with all these probabilistic requirements, and (3) the fact that reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case masses differ only by a few percent, are not a function of time, and the data are non-probabilistic and already available in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change by factors of ten as a result of very minor deviations in design, process, or anything else.[17] The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than for any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure in the first place. Not only would this aid some predictions, it would also keep the engineering effort from being distracted into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (finite-element stress and fatigue analysis, reliability hazard analysis, FTA, FMEA, human factor analysis, functional hazard analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and the test time needed. To derive these requirements in an effective manner, a systems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding of this difference compared to only purely quantitative (logistic) requirement specification (e.g., a failure rate / MTBF target) is paramount in the development of successful (complex) systems.[18]
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting, analysis, and corrective action systems are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type of human error, for example in:
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.[19]
Furthermore, human errors in management, the organization of data and information, or the misuse or abuse of items may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction combines:
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) about the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data; what is available often features inconsistent filtering of failure (feedback) data and ignores statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present to count and compare failures related to different types of root causes (e.g., manufacturing-, maintenance-, transport-, system-induced, or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. Unfortunately, however, these tests may lack validity at a system level due to assumptions made at part-level testing. Several authors have emphasized the importance of initial part- or system-level testing until failure, and of learning from such failures to improve the system or part. The general conclusion is that an accurate and absolute prediction of reliability – by either field-data comparison or testing – is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability.[21] DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and fault tree analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example mean time to repair (MTTR), can also be used as inputs for such models.
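As a minimal sketch of the block-diagram idea (in Python; the component reliabilities below are hypothetical values chosen only for illustration), system reliability is built up from series structures, where every block must work, and parallel (redundant) structures, where at least one block must work:

# Minimal reliability block diagram sketch (hypothetical component values).
def series(reliabilities):
    result = 1.0
    for r in reliabilities:
        result *= r                  # all blocks in series must survive
    return result

def parallel(reliabilities):
    all_fail = 1.0
    for r in reliabilities:
        all_fail *= (1.0 - r)        # probability that every redundant block fails
    return 1.0 - all_fail

# Example: two redundant pumps (0.95 each) in series with a controller (0.99).
pumps = parallel([0.95, 0.95])       # 0.9975
system = series([pumps, 0.99])       # ~0.9875
print(pumps, system)

The same two building blocks, composed recursively, cover most simple block-diagram models; more complex structures (bridges, shared standby, common-cause failures) need dedicated methods.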
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason why this is the ultimate design choice is that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability), without requiring reliability testing for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g., via different suppliers of similar parts) for single independent channels can provide less sensitivity to quality issues (e.g., early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double-checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events/failures. Reliability-centered maintenance (RCM) programs can be used for this.
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e., selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current.
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surroundings as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000)[23] For part/system failures, reliability engineers should concentrate more on the "why and how", rather than on predicting "when". Understanding "why" a failure has occurred (e.g., due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used[4] than quantifying "when" a failure is likely to occur (e.g., via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense, language and proper grammar (part of qualitative analysis) play an important role in reliability engineering, just as they do in safety engineering or in general within systems engineering.
Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g., an instruction such as "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or to replacing a part with one using a more recent and hopefully improved design).
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior, including effects from logistics issues like spare part provisioning, transport and manpower, are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources including testing, prior operational experience, field data, as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as
R(t) = \Pr\{T > t\} = \int_t^{\infty} f(x)\,dx,
where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero).
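For example, under the commonly assumed special case of a constant failure rate \lambda (the exponential distribution), this definition gives

f(x) = \lambda e^{-\lambda x}, \qquad R(t) = \int_t^{\infty} \lambda e^{-\lambda x}\,dx = e^{-\lambda t}, \qquad \mathrm{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda},

so that, under this assumption alone, a single parameter (\lambda, or equivalently its reciprocal, the MTTF) fully determines the reliability function.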
There are a few key elements of this definition:
Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (expressed as a frequency or a conditional probability density function (PDF)) or as the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (i.e., vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated failure modes and mechanisms (the F in MTTF).[17]
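A minimal sketch (with hypothetical numbers) of how a point estimate of MTTF is often obtained from accumulated field data, and of the corresponding mission reliability under a constant-failure-rate assumption:

import math

# Hypothetical field data: total accumulated operating hours and observed failures.
total_operating_hours = 250_000
failures = 5

mttf = total_operating_hours / failures          # point estimate: 50,000 h
failure_rate = 1.0 / mttf                        # lambda, per hour

# Reliability over a 1,000 h mission, assuming a constant failure rate.
mission_time = 1_000
reliability = math.exp(-failure_rate * mission_time)   # ~0.98
print(mttf, reliability)

Note the caveat in the paragraph above: such a single number hides which failure modes and mechanisms are being counted, so it is most meaningful at higher system levels.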
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean time to repair (MTTR), and test interval. This measure may not be unique for a given system as it depends on the kind of demand. In addition to system-level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals.
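As a rough sketch of the idea, the snippet below uses the common first-order approximation for a single, non-redundant, periodically proof-tested safety function in low-demand mode (the numbers are hypothetical; real safety standards use considerably more detailed equations):

# First-order average probability of failure on demand (PFD) for one
# low-demand safety function (hypothetical numbers, independent failures assumed).
dangerous_undetected_rate = 2e-6   # dangerous undetected failures per hour
proof_test_interval = 8760         # hours between proof tests (yearly)

pfd_avg = dangerous_undetected_rate * proof_test_interval / 2   # ~8.8e-3
print(pfd_avg)

The approximation reflects that, on average, an undetected dangerous failure exists for about half a test interval before it is found and repaired.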
The purpose of reliability testing or reliability verification is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered.[10] Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the conditions of actual use, transportation, and storage, and to analyze and study the degree of influence of environmental factors and their mechanisms of action.[24] Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climatic environment, accelerating the product's reaction to its use environment and verifying whether it reaches the expected quality from R&D, design, and manufacturing.[25]
Reliability verification is also called reliability testing, which refers to the use of modeling, statistics, and other methods to evaluate the reliability of the product based on the product's life span and expected performance.[26] Most products on the market require reliability testing, including automobiles, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.[27][28]
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.[29] (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test, both statistical type I and type II errors could be made, depending on sample size, test time, assumptions, and the needed discrimination ratio. There is a risk of incorrectly rejecting a good design (type I error) and a risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather, and unexpected situations create disagreements between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the photonics industry. Examples of reliability tests of lasers are life tests and burn-in. These tests consist of highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.[30]
There are many criteria to test, depending on the product or process being tested, and mainly there are five components that are most common:[31][32]
The product life span can be split into four different periods for analysis. Useful life is the estimated economic life of the product, defined as the time it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function. Design life means that, during the design of the product, the designer takes into consideration the lifetime of competitive products and customer desires and ensures that the product does not result in customer dissatisfaction.[34][35]
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at a 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
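For instance, assuming a constant failure rate, a zero-failure demonstration test for the 1000-hour / 90%-confidence requirement mentioned above could be sized as sketched below (a simplification; real test plans also balance producer and consumer risks and allow a non-zero number of failures):

import math

# Zero-failure demonstration test under a constant-failure-rate assumption.
mtbf_required = 1000.0     # hours
confidence = 0.90

# Total accumulated test time needed if zero failures are allowed:
test_hours = -mtbf_required * math.log(1.0 - confidence)   # ~2303 h
print(test_hours)

# The time can be spread over several units, e.g. 10 units for ~231 h each,
# provided the constant-failure-rate assumption holds (no wear-out in that window).
units = 10
print(test_hours / units)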
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g., cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem, and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations, or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
A systematic approach to reliability testing is to first determine the reliability goal, then perform tests that are linked to performance and determine the reliability of the product.[36] Reliability verification tests in modern industries should clearly determine how they relate to the product's overall reliability performance and how individual tests impact the warranty cost and customer satisfaction.[37]
The purpose of accelerated life testing (ALT) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
An accelerated testing program can be broken down into the following steps:
Common ways to determine a life stress relationship are:
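One of the most widely used life-stress relationships for temperature-driven failure mechanisms is the Arrhenius model. The sketch below (hypothetical activation energy and temperatures; the right values depend on the dominant failure mechanism) shows how an acceleration factor between use and test conditions is commonly computed:

import math

# Arrhenius acceleration factor between use and stress temperatures.
BOLTZMANN_EV = 8.617e-5        # Boltzmann constant, eV/K
activation_energy = 0.7        # eV, assumed for the dominant failure mechanism
temp_use = 273.15 + 40         # K (40 degC field use)
temp_test = 273.15 + 105       # K (105 degC accelerated test)

accel_factor = math.exp(activation_energy / BOLTZMANN_EV *
                        (1.0 / temp_use - 1.0 / temp_test))
print(accel_factor)            # ~86x under these assumed values

Under these assumptions, one hour at the test temperature would correspond to roughly 86 hours at the use temperature, which is how test durations are translated back to field life.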
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present-day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
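A trivial sketch of the metric (the counts are hypothetical), kept as a relative indicator rather than an absolute prediction for the reasons given above:

# Fault density in faults per thousand lines of code (KLOC), hypothetical counts.
faults_found = 42
lines_of_code = 60_000

fault_density = faults_found / (lines_of_code / 1000)   # 0.7 faults/KLOC
print(fault_density)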
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. In all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage.
The Software Engineering Institute's Capability Maturity Model is a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliability, or the reliability of structures, is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures, including concrete and steel structures.[38][39] In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach, the probability of failure of a structure is calculated.
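A minimal sketch of the idea, assuming (hypothetically) normally distributed resistance R and load effect S in the same units: the probability of failure is the probability that the load exceeds the resistance, estimated here both through the analytical reliability index and by Monte Carlo sampling.

import math
import random

# Hypothetical normally distributed resistance R and load effect S (same units).
mu_r, sigma_r = 350.0, 25.0    # e.g. member strength
mu_s, sigma_s = 250.0, 30.0    # e.g. applied load effect

# Reliability index beta for the limit state g = R - S (valid for independent normals):
beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

# Monte Carlo estimate of the probability of failure P(R < S):
trials = 200_000
failures = sum(1 for _ in range(trials)
               if random.gauss(mu_r, sigma_r) < random.gauss(mu_s, sigma_s))
print(beta, failures / trials)   # beta ~2.56, Pf ~5e-3 under these assumptions

Real structural codes use calibrated partial safety factors and more general limit-state methods, but the load-versus-resistance comparison above is the core of the approach.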
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with the overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.[40]
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents, including loss of life, destruction of equipment, or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it has less of a focus on direct costs and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g., the nuclear, aerospace, defense, rail and oil industries).[40]
Safety can be increased using a 2oo2 cross-checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If both redundant elements disagree, the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g., 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation, e.g., electrical/mechanical/hydraulic), as these need to always be operational because there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
The above example of a 2oo3 fault-tolerant system increases both mission reliability and safety. However, the "basic" reliability of the system will in this case still be lower than that of a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure but do result in additional cost due to maintenance repair actions, logistics, spare parts, etc. For example, replacement or repair of one faulty channel in a 2oo3 voting system (the system is still operating, although with one failed channel it has actually become a 2oo2 system) contributes to basic unreliability but not mission unreliability. As another example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
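As a sketch of how the voting architectures compare (the per-channel reliability is a hypothetical value, and independent channel failures are assumed, i.e., common-cause failures are ignored):

from math import comb

def k_out_of_n(k, n, r):
    # Probability that at least k of n independent channels (each reliability r) work.
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.90                       # hypothetical single-channel reliability
print(k_out_of_n(1, 1, r))     # 1oo1: 0.900
print(k_out_of_n(1, 2, r))     # 1oo2: 0.990  (availability-oriented)
print(k_out_of_n(2, 2, r))     # 2oo2: 0.810  (both channels must agree)
print(k_out_of_n(2, 3, r))     # 2oo3: 0.972  (voting improves on 1oo1 and 2oo2)

The numbers illustrate the text above: 1oo2 maximizes availability, 2oo2 favors safety at the expense of availability, and 2oo3 voting improves mission reliability while its "basic" reliability (the chance that no channel at all needs repair, 0.9 cubed, about 0.73) is the lowest of the group.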
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control of manufacturing quality. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.[41]
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications.[42]Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time.[43]Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model.[42]Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
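As an example of a life distribution model, a Weibull distribution (with hypothetical shape and scale parameters) can describe the cumulative fraction of a population that has failed by a given time, i.e., the reliability fallout discussed above:

import math

# Hypothetical Weibull life distribution: shape beta, characteristic life eta (hours).
beta, eta = 1.8, 40_000

def fraction_failed(t):
    # Cumulative fraction of the population failed by time t (Weibull CDF).
    return 1.0 - math.exp(-((t / eta) ** beta))

for years in (1, 3, 5, 10):
    t = years * 8760           # continuous operation assumed
    print(years, round(fraction_failed(t), 3))

A shape parameter above 1, as assumed here, models wear-out behavior; a value below 1 would model infant mortality, the time-zero and early-life defects mentioned above.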
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (seeReliability engineering vs Safety engineeringabove).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (Field failure | e.g. fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
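A minimal sketch of how two of those outputs could be derived from a failure/incident repository (the records and field names below are hypothetical, not a real FRACAS schema):

from collections import Counter

# Hypothetical FRACAS records: (unit serial, operating hours, failure symptom or None).
records = [
    ("SN001", 4200, "overheating"),
    ("SN002", 3900, None),
    ("SN003", 4100, "connector"),
    ("SN004", 4050, "overheating"),
    ("SN005", 3800, None),
]

total_hours = sum(hours for _, hours, _ in records)
failure_symptoms = [symptom for _, _, symptom in records if symptom]

field_mtbf = total_hours / len(failure_symptoms)   # crude point estimate of field MTBF
by_symptom = Counter(failure_symptoms)             # failure/incident distribution by symptom
print(field_mtbf, by_symptom)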
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such cases, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that system reliability, which is often expensive and time-consuming to assure, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by the state or province, but not all reliability professionals are registered engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD),[44] the IEEE Reliability Society, the American Society for Quality (ASQ),[45] and the Society of Reliability Engineers (SRE).[46]
SAE JA1000/1 Reliability Program Standard Implementation Guide: http://standards.sae.org/ja1000/1_199903/
In the UK, there are more up-to-date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
DEF STAN 00-42 Reliability and Maintainability Assurance Guides
DEF STAN 00-43 Reliability and Maintainability Assurance Activity
DEF STAN 00-44 Reliability and Maintainability Data Collection and Classification
DEF STAN 00-45 Issue 1: Reliability Centered Maintenance
DEF STAN 00-49 Issue 1: Reliability and Maintainability MOD Guide to Terminology Definitions
These can be obtained from DSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
|
https://en.wikipedia.org/wiki/Reliability_theory
|
In algebra, the principal factor of a 𝒥-class J of a semigroup S is equal to J if J is the kernel of S, and to J ∪ {0} otherwise.
|
https://en.wikipedia.org/wiki/Principal_factor
|
Small language models (SLMs) are artificial intelligence language models designed for human natural language processing, including language and text generation. Unlike large language models (LLMs), small language models are much smaller in scale and scope.
Typically, an LLM's number of training parameters is in the hundreds of billions, with some models even exceeding a trillion parameters. The size of any LLM is vast because it contains a large amount of information, which allows it to generate better content. However, this requires enormous computational power, making it impossible for an individual to train a large language model using just a single computer and GPU.
Small language models, on the other hand, use far fewer parameters, typically ranging from a few million to a few billion. This makes them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device.[1][2][3][4]
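As a back-of-the-envelope illustration of why this scale difference matters (the parameter counts and byte widths below are hypothetical round numbers, and activation memory and other overheads are ignored):

# Rough memory needed just to hold model weights.
def weight_memory_gb(parameters, bytes_per_parameter=2):   # 2 bytes ~ 16-bit weights
    return parameters * bytes_per_parameter / 1e9

print(weight_memory_gb(3e9))      # ~6 GB: a small model can fit on one consumer GPU
print(weight_memory_gb(500e9))    # ~1000 GB: a large model needs many accelerators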
|
https://en.wikipedia.org/wiki/Small_language_model
|
Black-box testing, sometimes referred to as specification-based testing,[1] is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied to virtually every level of software testing: unit, integration, system and acceptance. Black-box testing is also used as a method in penetration testing, where an ethical hacker simulates an external hacking or cyber warfare attack with no knowledge of the system being attacked.
Specification-based testing aims to test the functionality of software according to the applicable requirements.[2] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case.
Specific knowledge of the application's code, internal structure and programming knowledge in general is not required.[3] The tester is aware of what the software is supposed to do but is not aware of how it does it. For instance, the tester is aware that a particular input returns a certain, invariable output but is not aware of how the software produces the output in the first place.[4]
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements and design parameters. Although the tests used are primarily functional in nature, non-functional tests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of a test oracle or a previous result that is known to be good, without any knowledge of the test object's internal structure.
Typical black-box test design techniques include decision table testing, all-pairs testing, equivalence partitioning, boundary value analysis, cause–effect graph, error guessing, state transition testing, use case testing, user story testing, domain analysis, and syntax testing.[5][6]
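As a minimal sketch of specification-based test design (the bulk_discount function and its 100-unit threshold are hypothetical, invented purely to illustrate equivalence partitioning and boundary value analysis), a black-box test exercises only inputs and expected outputs, never the internals:

```python
import unittest

def bulk_discount(quantity: int) -> float:
    """Hypothetical function under test: 10% off for orders of 100 units or more."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return 0.10 if quantity >= 100 else 0.0

class BulkDiscountSpecTest(unittest.TestCase):
    """Specification-based (black-box) tests: inputs and expected outputs only,
    chosen by equivalence partitioning and boundary value analysis."""

    def test_below_threshold_partition(self):
        self.assertEqual(bulk_discount(50), 0.0)    # representative of the 0..99 partition

    def test_boundary_values(self):
        self.assertEqual(bulk_discount(99), 0.0)    # just below the boundary
        self.assertEqual(bulk_discount(100), 0.10)  # on the boundary

    def test_invalid_partition(self):
        with self.assertRaises(ValueError):
            bulk_discount(-1)                       # invalid-input partition

if __name__ == "__main__":
    unittest.main()
```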
Test coverage refers to the percentage of software requirements that are tested by black-box testing for a system or application.[7] This is in contrast with code coverage, which examines the inner workings of a program and measures the degree to which the source code of a program is executed when a test suite is run.[8] Measuring test coverage makes it possible to quickly detect and eliminate defects, to create a more comprehensive test suite, and to remove tests that are not relevant for the given requirements.[8][9]
Black-box testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[10]An advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[11]Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.
|
https://en.wikipedia.org/wiki/Black_box_testing
|
CIPURSEis an open security standard fortransitfare collection systems. It makes use ofsmart cardtechnologies and additional security measures.
The CIPURSE open security standard[1]was established by theOpen Standard for Public Transportation Alliance[2]to address the needs of local and regional transit authorities for automatic fare collection systems based on smart card technologies and advanced security measures.
Products developed in conformance with the CIPURSE standard[3]are intended to:
The open CIPURSE standard is intended to:
All of these factors are intended to reduce operating costs and increase flexibility for transport system operators.
In the past, public transport systems were often implemented using standalone, proprietary fare collection systems. In such cases, each fare collection system employed unique fare media (such as its own style of ticket printed on card) and data management systems. Because fare collection systems did not interoperate with each other, payment schemes and tokens varied widely between local and regional systems, and new systems were often costly to develop and maintain.
Transport systems are migrating tomicrocontroller-based fare collection systems. These are converging with similar applications and technologies, such as branded credit-debitpayment cards,micropayments, multi-application cards, andNear Field Communication(NFC) mobile phones and devices. These schemes will enable passengers to use transit tokens seamlessly across multiple transit systems. These new applications demand higher levels of security than most existing schemes that they will replace.
The OSPT Alliance defined the CIPURSE standard to provide an open platform for securing both new and legacy transit fare collection[4]applications. Systems using the CIPURSE open security standard address public transport services, collection of transport fares, and transactions related to micropayments.
The transition to an open standard platform creates opportunities to adopt open standards for important parts of the fare collection system, including data management, the media interface and security. An open standard for developing secure transit fare collection solutions could make systems more cost-effective, secure, flexible, scalable and extensible.
In December 2010, the OSPT Alliance introduced the first draft of the CIPURSE standard. It employs existing, proven open standards, including theISO/IEC 7816smart card standard, as well as the 128-bitAdvanced Encryption Standardand theISO/IEC 14443protocol layer. Designed for low-cost silicon implementations,[citation needed]the CIPURSE security concept uses an authentication scheme that is resistant to most of today’s electronic attacks.
Its security mechanisms include a uniquecryptographic protocolfor fast and efficient implementations with robust, inherent protection againstdifferential power analysis(DPA) andDifferential fault analysisattacks. Because the protocol is inherently resistant to these kinds of attacks and does not require dedicated hardware measures, it should be both more secure and less costly. It is intended to guard againstcounterfeiting,cloning,eavesdropping,man-in-the-middle attacksand other security threats.
The CIPURSE standard also:
OSPT Alliance technology providers are allowed to add functionality outside the common core (which is defined in the standard) to differentiate their products, so long as they do not jeopardize interoperability of the core functions.[5]
Introduced in late 2012, Version 2.0 of the CIPURSE Specification is the latest version. Designed as a layered, modular architecture with application-specific profiles, the open and secure CIPURSE V2 standard comprises a single, consistent set of specifications for all security, personalization, administration and life-cycle management functions needed to create a broad range of interoperable transit applications – from inexpensive single-ride or daily paper tickets to rechargeable fixed-count or weekly plastic tickets to longer-term smart card- or smart phone-based commuter tickets that can also support loyalty and other applications.
Three application-specific profiles – subsets of the CIPURSE V2 standard tailored for different use cases – have been defined, with which vendors are required to comply when creating products targeting these applications:
Products based on different profiles can be added to fare collection systems at any time and can be used in parallel to provide transit operators the greatest flexibility in offering riders a range of transit fare options. Because they are derived from the same set of specifications, all the profiles are interoperable, reflect the same design criteria and have the same appearance, enabling developers to create products according to a family concept. With its modular “onion-layered” design, the CIPURSE standard can be easily enhanced in the future with additional functionality and new profiles created to address changes in technology and business. The CIPURSE V2 specification enables technology suppliers to develop and deliver innovative, more secure and interoperable transit fare collection solutions for cards, stickers, fobs, mobile phones and other consumer devices, as well as infrastructure components.
In early 2013, the OSPT introduced the CIPURSE V2 Mobile Guidelines, a comprehensive set of requirements and use cases for developing and deploying CIPURSE-secured transit fare mobile apps for near field communication (NFC)-enabled smartphones, tablets and other smart devices. Providing everything developers need to implement and use the CIPURSE V2 open security standard when embedded in an NFC mobile device, the new guidelines enable transit operators to enhance their systems to support mobile ticketing with these new form factors.
The OSPT Alliance,[6] founded in January 2010 by smart card manufacturers Giesecke & Devrient GmbH (G&D) and Oberthur Technologies and chip suppliers Infineon Technologies AG and INSIDE Secure S.A. (formerly INSIDE Contactless), collectively defined the CIPURSE standard.
The Alliance partners test their products for conformance with CIPURSE to demonstrate interoperability,[7]and have engaged an independent test authority to test compliance with the standard, interoperability, and performance.[8]
The OSPT Alliance[9]is a nonprofit industry organization open to technology vendors, transit operators, government agencies, systems integrators, mobile device manufacturers, trusted service operators, consultants, industry associations and others wishing to participate in the organization’s education, marketing and technology development activities.
As of February 2019, full members of the alliance are:[10]
The alliance is open to companies on the component supply and system integration side, as well as transport agencies and other standards bodies, to contribute their experience and knowledge to the development of the CIPURSE open standard.
|
https://en.wikipedia.org/wiki/CIPURSE
|
Lexical functional grammar(LFG) is aconstraint-basedgrammar frameworkintheoretical linguistics. It posits two separate levels of syntactic structure, aphrase structure grammarrepresentation of word order and constituency, and a representation of grammatical functions such as subject and object, similar todependency grammar. The development of the theory was initiated byJoan BresnanandRonald Kaplanin the 1970s, in reaction to the theory oftransformational grammarwhich was current in the late 1970s. It mainly focuses onsyntax, including its relation withmorphologyandsemantics. There has been little LFG work onphonology(although ideas fromoptimality theoryhave recently been popular in LFG research).
LFG views language as being made up of multiple dimensions of structure. Each of these dimensions is represented as a distinct structure with its own rules, concepts, and form. The primary structures that have figured in LFG research are:
For example, in the sentence The old woman eats the falafel, the c-structure analysis is that this is a sentence which is made up of two pieces, a noun phrase (NP) and a verb phrase (VP). The VP is itself made up of two pieces, a verb (V) and another NP. The NPs are also analyzed into their parts. Finally, the bottom of the structure is composed of the words out of which the sentence is constructed. The f-structure analysis, on the other hand, treats the sentence as being composed of attributes, which include features such as number and tense or functional units such as subject, predicate, or object.
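One informal way to visualize such an f-structure is as a nested attribute–value matrix. The sketch below (in Python; feature names like PRED, SUBJ, OBJ and TENSE follow common LFG practice, but the exact representation is illustrative rather than any fixed LFG file format) encodes the analysis of the example sentence:

```python
# Illustrative f-structure for "The old woman eats the falafel",
# written as a nested attribute-value structure.
f_structure = {
    "PRED": "eat<SUBJ, OBJ>",
    "TENSE": "present",
    "SUBJ": {"PRED": "woman", "DEF": True, "NUM": "sg", "ADJUNCT": [{"PRED": "old"}]},
    "OBJ":  {"PRED": "falafel", "DEF": True, "NUM": "sg"},
}

def grammatical_functions(fs: dict) -> list[str]:
    """List the grammatical-function attributes present at the top level of an f-structure."""
    return [attr for attr in fs if attr in {"SUBJ", "OBJ", "OBJ2", "OBL", "COMP", "XCOMP"}]

print(grammatical_functions(f_structure))  # ['SUBJ', 'OBJ']
```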
There are other structures which are hypothesized in LFG work:
The various structures can be said to bemutually constraining.
The LFG conception of linguistic structure differs fromChomskyantheories, which have always involved separate levels of constituent structure representation mapped onto each other sequentially, via transformations. The LFG approach has had particular success withnonconfigurational languages, languages in which the relation between structure and function is less direct than it is in languages like English; for this reason LFG's adherents consider it a more plausible universal model of language.
Another feature of LFG is that grammatical-function changing operations likepassivizationare relations between word forms rather than sentences. This means that the active-passive relation, for example, is a relation between two types of verb rather than two trees. Active and passive verbs involve alternative mapping of the participants to grammatical functions.
Through the positing of productive processes in the lexicon and the separation of structure and function, LFG is able to account for syntactic patterns without the use of transformations defined over syntactic structure. For example, in a sentence like What did you see?, where what is understood as the object of see, transformational grammar puts what after see (the usual position for objects) in "deep structure", and then moves it. LFG analyzes what as having two functions: question-focus and object. It occupies the position associated in English with the question-focus function, and the constraints of the language allow it to take on the object function as well.
A central goal in LFG research is to create a model of grammar with a depth which appeals to linguists while at the same time being efficientlyparsableand having the rigidity of formalism which computational linguists require. Because of this, computational parsers have been developed and LFG has also been used as the theoretical basis of variousmachine translationtools, such asAppTek's TranSphere, and the Julietta Research Group's Lekta.
|
https://en.wikipedia.org/wiki/Lexical_functional_grammar
|
SPORE, the Security Protocols Open Repository, is an online library of security protocols with comments and links to papers. Each protocol is downloadable in a variety of formats, including rules for use with automatic protocol verification tools. All protocols are described, together with their goals, using BAN logic or the style used by Clark and Jacob. The database includes details on formal proofs or known attacks, with references to comments, analysis and papers. A large number of protocols are listed, including many which have been shown to be insecure.
It is a continuation of the seminal work byJohn ClarkandJeremy Jacob.[1]
They seek contributions for new protocols, links and comments.
|
https://en.wikipedia.org/wiki/Security_Protocols_Open_Repository
|
The English language uses manyGreekandLatinroots,stems, andprefixes. These roots are listed alphabetically on three pages:
Some of those used inmedicineand medical technology are listed in theList of medical roots, suffixes and prefixes.
|
https://en.wikipedia.org/wiki/List_of_Greek_and_Latin_roots_in_English
|
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere.[1] The sum of these spherical wavelets forms a new wavefront. As such, the Huygens–Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction, as well as reflection.
In 1678, Huygens proposed[2]that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to asHuygens' construction.[3]: 132He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known asdiffractioneffects.[4]
In 1818, Fresnel[5]showed that Huygens's principle, together with his own principle ofinterference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including thePoisson spot.
Poissonwas a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However,François Arago, another member of the committee, performed the experiment and showed thatthe prediction was correct.[3]This success was important evidence in favor of the wave theory of light over then predominantcorpuscular theory.
In 1882,Gustav Kirchhoffanalyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem.[3]: 375Very few rigorous solutions to diffraction problems are known however, and most problems in optics are adequately treated using the Huygens-Fresnel principle.[3]: 370
In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave.[6][7]
Inantenna theoryand engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known assurface equivalence principle.[8][9]
Issues in Huygens-Fresnel theory continue to be of interest. In 1991,David A. B. Millersuggested that treating the source as a dipole (not the monopole assumed by Huygens) will cancel waves propagating in the reverse direction, making Huygens' construction quantitatively correct.[10]In 2021, Forrest L. Anderson showed that treating the wavelets asDirac delta functions, summing and differentiating the summation is sufficient to cancel reverse propagating waves.[11]
The apparent change in direction of a light ray as it enters a sheet of glass at angle can be understood by the Huygens construction. Each point on the surface of the glass gives a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air.[12]: 56
In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently the wavefront bends around in the direction of higher index.[12]: 68
The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations done for deriving theKirchhoff's diffraction formulaand the approximations ofnear fielddue to Fresnel. These can be summarized in the fact that the wavelength of light is much smaller than the dimensions of any optical components encountered.[3]
Kirchhoff's diffraction formulaprovides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.[13]
A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.
Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is:
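The display equation following this sentence did not survive extraction; consistent with the description in the next sentence (magnitude falling as 1/r0 and phase advancing as k times the distance), the standard form is

```latex
U(Q) = \frac{U_{0}\, e^{i k r_{0}}}{r_{0}} .
```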
Note that the magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled.
Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to get agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by:[14]
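The corresponding display equation is also missing; a reconstruction consistent with the factors just described (the constant −i/λ, the inclination factor K(χ), and the sum over the sphere of radius r0, with s the distance from Q to P) is

```latex
U(P) = -\frac{i}{\lambda}\,\frac{U_{0}\, e^{i k r_{0}}}{r_{0}}
        \int_{S} \frac{e^{i k s}}{s}\, K(\chi)\, \mathrm{d}S ,
```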
where S describes the surface of the sphere, and s is the distance between Q and P.
Fresnel used a zone construction method to find approximate values ofKfor the different zones,[3]which enabled him to make predictions that were in agreement with experimental results. Theintegral theorem of Kirchhoffincludes the basic idea of Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to the formation of Fresnel's formulation.[3]
For an aperture illumination consisting of a single expanding spherical wave, if the radius of curvature of the wave is sufficiently large, Kirchhoff gave the following expression for K(χ):[3]
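The expression itself is missing here; the standard Kirchhoff obliquity factor, which matches the properties noted in the next sentence (maximum at χ = 0, vanishing at χ = π rather than at χ = π/2), is

```latex
K(\chi) = -\frac{i}{2\lambda}\,\bigl(1 + \cos\chi\bigr) .
```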
K has a maximum value at χ = 0 as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π.
The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations.[14] An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. K(χ) can be generally expressed as:[14]
In this case,Ksatisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).
Many books and references – e.g. (Greiner, 2002)[15]and (Enders, 2009)[16]- refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948).[17]
Feynman defines the generalized principle in the following way:
"Actually Huygens’ principle is not correct in optics. It is replaced by Kirchoff’s [sic] modification which requires that both the amplitude and its derivative must be known on the adjacent surface. This is a consequence of the fact that the wave equation in optics is second order in the time. The wave equation of quantum mechanics is first order in the time; therefore, Huygens’ principle is correct for matter waves, action replacing time."
This clarifies the fact that in this context the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanics equations are first order in time. Only in this case does the superposition principle fully apply: the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum-mechanical sense as probability densities, where the formalism of Green's functions and propagators applies. Notably, this generalized principle applies to "matter waves" and no longer to light waves. The phase factor is now clarified as being given by the action, and there is no longer any confusion as to why the phases of the wavelets differ from that of the original wave and are modified by the additional Fresnel parameters.
As per Greiner,[15] the generalized principle can be expressed for t′ > t in the form:
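The displayed formula is missing; as a schematic reconstruction (the precise normalization of G, in particular any factor of i, depends on the convention of the cited text), the propagation relation has the form

```latex
\psi(\mathbf{x}',\,t') = \int \mathrm{d}^{3}x \; G(\mathbf{x}',\,t';\,\mathbf{x},\,t)\,\psi(\mathbf{x},\,t),
  \qquad t' > t .
```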
where G is the usual Green's function that propagates the wave function ψ in time. This description resembles and generalizes Fresnel's formula from the classical model.
Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but it did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. These questions were not addressed until the early and mid-1900s, with the rise of quantum theory, particularly the early discussions at the 1927 Brussels Solvay Conference, where Louis de Broglie proposed the hypothesis that the photon is guided by a wave function.[18]
The wave function presents a much different explanation of the observed light and dark bands in a double slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory, the paths determined by the surroundings: the photon's originating point (atom), the slit, and the screen and by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons.[19]
Huygens' principle can be seen as a consequence of thehomogeneityof space—space is uniform in all locations.[20]Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. Thesuperpositionof all the waves results in the observed pattern of wave propagation.
Homogeneity of space is fundamental toquantum field theory(QFT) where thewave functionof any object propagates along all available unobstructed paths. Whenintegrated along all possible paths, with aphasefactor proportional to theaction, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
In 1900,Jacques Hadamardobserved that Huygens' principle was broken when the number of spatial dimensions is even.[21][22][23]From this, he developed a set of conjectures that remain an active topic of research.[24][25]In particular, it has been discovered that Huygens' principle holds on a large class ofhomogeneous spacesderived from theCoxeter group(so, for example, theWeyl groupsof simpleLie algebras).[20][26]
The traditional statement of Huygens' principle for theD'Alembertiangives rise to theKdV hierarchy; analogously, theDirac operatorgives rise to theAKNShierarchy.[27][28]
|
https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_principle
|
Adatabase transactionsymbolizes aunit of work, performed within adatabase management system(or similar system) against adatabase, that is treated in a coherent and reliable way independent of other transactions. A transaction generally represents any change in a database. Transactions in a database environment have two main purposes:
In a database management system, a transaction is a single unit of logic or work, sometimes made up of multiple operations. Any logical calculation done in a consistent mode in a database is known as a transaction. One example is a transfer from one bank account to another: the complete transaction requires subtracting the amount to be transferred from one account and adding that same amount to the other.
A database transaction, by definition, must beatomic(it must either be complete in its entirety or have no effect whatsoever),consistent(it must conform to existing constraints in the database),isolated(it must not affect other transactions) anddurable(it must get written to persistent storage).[1]Database practitioners often refer to these properties of database transactions using the acronymACID.
Databasesand other data stores which treat theintegrityof data as paramount often include the ability to handle transactions to maintain the integrity of data. A single transaction consists of one or more independent units of work, each reading and/or writing information to a database or other data store. When this happens it is often important to ensure that all such processing leaves the database or data store in a consistent state.
Examples fromdouble-entry accounting systemsoften illustrate the concept of transactions. In double-entry accounting every debit requires the recording of an associated credit. If one writes a check for $100 to buy groceries, a transactional double-entry accounting system must record the following two entries to cover the single transaction:
A transactional system would make both entries pass or both entries would fail. By treating the recording of multiple entries as an atomic transactional unit of work the system maintains the integrity of the data recorded. In other words, nobody ends up with a situation in which a debit is recorded but no associated credit is recorded, or vice versa.
Atransactional databaseis aDBMSthat provides theACID propertiesfor a bracketed set of database operations (begin-commit). Transactions ensure that the database is always in a consistent state, even in the event of concurrent updates and failures.[2]All the write operations within a transaction have an all-or-nothing effect, that is, either the transaction succeeds and all writes take effect, or otherwise, the database is brought to a state that does not include any of the writes of the transaction. Transactions also ensure that the effect of concurrent transactions satisfies certain guarantees, known asisolation level. The highest isolation level isserializability, which guarantees that the effect of concurrent transactions is equivalent to their serial (i.e. sequential) execution.
Most modern[update]relational database management systemssupport transactions.NoSQLdatabases prioritize scalability along with supporting transactions in order to guarantee data consistency in the event of concurrent updates and accesses.
In a database system, a transaction might consist of one or more data-manipulation statements and queries, each reading and/or writing information in the database. Users of database systems consider consistency and integrity of data as highly important. A simple transaction is usually issued to the database system in a language like SQL wrapped in a transaction, using a pattern similar to the following:
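The pattern itself did not survive in this text, but as a minimal sketch (using Python's built-in sqlite3 module; the accounts table and the transfer function are illustrative, not part of any particular system), the begin–commit–rollback idiom for the bank-transfer example above looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 500), ("bob", 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts atomically: both UPDATEs commit, or neither does."""
    with conn:  # begins a transaction; commits on success, rolls back if an exception escapes
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))

transfer(conn, "alice", "bob", 200)
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 300), ('bob', 300)]
```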
A transaction commit operation persists all the results of data manipulations within the scope of the transaction to the database. A transaction rollback operation does not persist the partial results of data manipulations within the scope of the transaction to the database. In no case can a partial transaction be committed to the database since that would leave the database in an inconsistent state.
Internally, multi-user databases store and process transactions, often by using a transactionIDor XID.
There are multiple varying ways for transactions to be implemented other than the simple way documented above.Nested transactions, for example, are transactions which contain statements within them that start new transactions (i.e. sub-transactions).Multi-level transactionsare a variant of nested transactions where the sub-transactions take place at different levels of a layered system architecture (e.g., with one operation at the database-engine level, one operation at the operating-system level).[3]Another type of transaction is thecompensating transaction.
Transactions are available in most SQL database implementations, though with varying levels of robustness. For example,MySQLbegan supporting transactions from early version 3.23, but theInnoDBstorage engine was not default before version 5.5. The earlier available storage engine,MyISAMdoes not support transactions.
A transaction is typically started using the commandBEGIN(although the SQL standard specifiesSTART TRANSACTION). When the system processes aCOMMITstatement, the transaction ends with successful completion. AROLLBACKstatement can also end the transaction, undoing any work performed sinceBEGIN. Ifautocommitwas disabled with the start of a transaction, autocommit will also be re-enabled with the end of the transaction.
One can set the isolation level for individual transactional operations as well as globally. At stricter levels (READ COMMITTED and above), the result of any operation performed after a transaction has started remains invisible to other database users until the transaction has ended. At the lowest level (READ UNCOMMITTED), which may occasionally be used to ensure high concurrency, such changes are immediately visible.
Relational databases are traditionally composed of tables with fixed-size fields and records. Object databases comprise variable-sizedblobs, possiblyserializableor incorporating amime-type. The fundamental similarities between Relational and Object databases are the start and thecommitorrollback.
After starting a transaction, database records or objects are locked, either read-only or read-write. Reads and writes can then occur. Once the transaction is fully defined, changes are committed or rolled backatomically, such that at the end of the transaction there is noinconsistency.
Database systems implementdistributed transactions[4]as transactions accessing data over multiple nodes. A distributed transaction enforces the ACID properties over multiple nodes, and might include systems such as databases, storage managers, file systems, messaging systems, and other data managers. In a distributed transaction there is typically an entity coordinating all the process to ensure that all parts of the transaction are applied to all relevant systems. Moreover, the integration of Storage as a Service (StaaS) within these environments is crucial, as it offers a virtually infinite pool of storage resources, accommodating a range of cloud-based data store classes with varying availability, scalability, and ACID properties. This integration is essential for achieving higher availability, lower response time, and cost efficiency in data-intensive applications deployed across cloud-based data stores.[5]
The Namesys Reiser4 filesystem for Linux[6] supports transactions, and as of Microsoft Windows Vista, the Microsoft NTFS filesystem[7] supports distributed transactions across networks. There is ongoing research into more data-coherent filesystems, such as the Warp Transactional Filesystem (WTF).[8]
|
https://en.wikipedia.org/wiki/Database_transaction
|
A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine, automaton, or a difference equation).[1] Often, the term "hybrid dynamical system" is used instead of "hybrid system" to distinguish it from other usages of "hybrid system", such as the combination of neural nets and fuzzy logic, or of electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, thestateof a hybrid system is defined by the values of thecontinuous variablesand a discretemode. The state changes either continuously, according to aflowcondition, or discretely according to acontrol graph. Continuous flow is permitted as long as so-calledinvariantshold, while discrete transitions can occur as soon as givenjump conditionsare satisfied. Discrete transitions may be associated withevents.
Hybrid systems have been used to model several cyber-physical systems, includingphysical systemswithimpact, logic-dynamiccontrollers, and evenInternetcongestion.
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let x1 be the height of the ball and x2 be the velocity of the ball. A hybrid system describing the ball is as follows:
When x ∈ C = {x1 > 0}, flow is governed by ẋ1 = x2, ẋ2 = −g,
where g is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.
When x ∈ D = {x1 = 0}, jumps are governed by x1⁺ = x1, x2⁺ = −γx2,
where 0 < γ < 1 is a dissipation factor. This says that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of γ. Effectively, this describes the nature of the inelastic collision.
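A minimal numerical sketch of this hybrid model (explicit Euler integration; g, γ, the step size and the initial state are illustrative choices, not part of the formal model) alternates between the flow set C and the jump set D:

```python
# Bouncing-ball hybrid system: continuous flow plus discrete velocity-reversal jumps.
g, gamma, dt = 9.81, 0.8, 1e-4   # gravity, dissipation factor, step size (illustrative)
x1, x2 = 1.0, 0.0                # height [m] and velocity [m/s]
t, bounce_times = 0.0, []

while t < 3.0:
    if x1 <= 0.0 and x2 < 0.0:       # jump set D: at the ground and moving downward
        x1, x2 = 0.0, -gamma * x2    # discrete transition: reverse and scale the velocity
        bounce_times.append(t)
    else:                            # flow set C: continuous dynamics x1' = x2, x2' = -g
        x1 += dt * x2
        x2 -= dt * g
    t += dt

# Successive bounce intervals shrink geometrically, hinting at the Zeno behavior
# discussed below.
print([round(tb, 3) for tb in bounce_times[:6]])
```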
The bouncing ball is an especially interesting hybrid system, as it exhibitsZenobehavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making aninfinitenumber of jumps in afiniteamount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.
It is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball, and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground is the complementarity relation between the force and the distance (the gap) between the ball and the ground. This is written as 0 ≤ λ ⊥ x1 ≥ 0. Such a contact model does not incorporate magnetic forces or gluing effects. Once the complementarity relations are included, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force λ. One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.
There are approaches to automaticallyprovingproperties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets,abstraction refinement, andbarrier certificates.
Most verification tasks are undecidable,[2]making general verificationalgorithmsimpossible. Instead, the tools are analyzed for their capabilities on benchmark problems. A possible theoretical characterization of this is algorithms that succeed with hybrid systems verification in all robust cases[3]implying that many problems for hybrid systems, while undecidable, are at least quasi-decidable.[4]
Two basic hybrid system modeling approaches can be classified, an implicit and an explicit one. The explicit approach is often represented by ahybrid automaton, ahybrid programor a hybridPetri net. The implicit approach is often represented by guarded equations to result in systems ofdifferential algebraic equations(DAEs) where the active equations may change, for example by means of ahybrid bond graph.
As a unified simulation approach for hybrid system analysis, there is a method based on the DEVS formalism, in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete-event manner, which differs from discrete-time systems. Details of this approach can be found in the references [Kofman2004] [CF2006] [Nutaro2010] and the software tool PowerDEVS.
|
https://en.wikipedia.org/wiki/Hybrid_system
|
No instruction set computing(NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.
NISC is a statically scheduled horizontal nanocoded architecture (SSHNA). The term "statically scheduled" means that operation scheduling and hazard handling are done by a compiler. The term "horizontal nanocoded" means that NISC does not have any predefined instruction set or microcode. The compiler generates nanocodes which directly control functional units, registers and multiplexers of a given datapath. Giving low-level control to the compiler enables better utilization of datapath resources, which ultimately results in better performance. The benefits of NISC technology are:
The instruction set and controller ofprocessorsare the most tedious and time-consuming parts to design. By eliminating these two, design of custom processing elements become significantly easier.
Furthermore, the datapath of NISC processors can even be generated automatically for a given application. Therefore, designer's productivity is improved significantly.
Since NISC datapaths are very efficient and can be generated automatically, NISC technology is comparable tohigh level synthesis(HLS) orC to HDLsynthesis approaches. In fact, one of the benefits of this architecture style is its capability to bridge these two technologies (custom processor design and HLS).
Incomputer science,zero instruction set computer(ZISC) refers to acomputer architecturebased solely onpattern matchingand absence of(micro-)instructionsin the classical[clarification needed]sense. These chips are known for being thought of as comparable to theneural networks, being marketed for the number of "synapses" and "neurons".[1]TheacronymZISC alludes toreduced instruction set computer(RISC).[citation needed]
ZISC is a hardware implementation ofKohonen networks(artificial neural networks) allowing massively parallel processing of very simple data (0 or 1). This hardware implementation was invented by Guy Paillet[2]and Pascal Tannhof (IBM),[3][2]developed in cooperation with the IBM chip factory ofEssonnes, in France, and was commercialized by IBM.
The ZISC architecture alleviates the memory bottleneck[clarification needed] by blending pattern memory with pattern learning and recognition logic.[how?] Its massively parallel computing addresses the "winner takes all problem in action selection"[clarification needed from Winner-takes-all problem in Neural Networks] by allotting each "neuron" its own memory and allowing simultaneous problem-solving, the results of which are then settled by competing with one another.[4]
According toTechCrunch, software emulations of these types of chips are currently used for image recognition by many large tech companies, such asFacebookandGoogle. When applied to other miscellaneous pattern detection tasks, such as with text, results are said to be produced in microseconds even with chips released in 2007.[1]
Junko Yoshida, of theEE Times, compared the NeuroMem chip with "The Machine", a machine capable of being able to predict crimes from scanning people's faces from the television seriesPerson of Interest, describing it as "the heart ofbig data" and "foreshadow[ing] a real-life escalation in the era of massive data collection".[5]
In the past, microprocessor design technology evolved fromcomplex instruction set computer(CISC) toreduced instruction set computer(RISC). In the early days of the computer industry, compiler technology did not exist and programming was done inassembly language. To make programming easier, computer architects created complex instructions which were direct representations of high level functions of high level programming languages. Another force that encouraged instruction complexity was the lack of large memory blocks.
As compiler and memory technologies advanced, RISC architectures were introduced. RISC architectures need more instruction memory and require a compiler to translate high-level languages to RISC assembly code. Further advancement of compiler and memory technologies leads to emergingvery long instruction word(VLIW) processors, where the compiler controls the schedule of instructions and handles data hazards.
NISC is a successor of VLIW processors. In NISC, the compiler has both horizontal and vertical control of the operations in the datapath. Therefore, the hardware is much simpler. However, the control memory size is larger than in previous generations. To address this issue, low-overhead compression techniques can be used.
|
https://en.wikipedia.org/wiki/No_instruction_set_computing
|
In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X equal to the number of failures needed to get r successes in a sequence of independent Bernoulli trials. The probability p of success on each trial stays constant within any given experiment but varies across different experiments following a beta distribution. Thus the distribution is a compound probability distribution.
This distribution has also been called the inverse Markov-Pólya distribution and the generalized Waring distribution,[1] and is sometimes abbreviated as the BNB distribution. A shifted form of the distribution has been called the beta-Pascal distribution.[1]
If the parameters of the beta distribution are α and β, and if
X | p ∼ NB(r, p),
where
p ∼ B(α, β),
then the marginal distribution of X (i.e. the posterior predictive distribution) is a beta negative binomial distribution:
X ∼ BNB(r, α, β).
In the above, NB(r, p) is the negative binomial distribution and B(α, β) is the beta distribution.
Denoting by fX|p(k|q) and fp(q|α, β) the probability mass function of the negative binomial distribution and the density of the beta distribution respectively, the PMF f(k|α, β, r) of the BNB distribution is obtained by marginalizing over p. The integral that arises is a beta function, which yields a closed form; using the properties of the beta function, the PMF can be written in terms of gamma and beta functions and, when r is an integer, in terms of binomial coefficients or the Pochhammer symbol. A reconstruction of the marginalization and the resulting PMF is sketched below.
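As a reconstruction consistent with the definitions above (to be checked against the cited sources), the marginalization reads

```latex
f(k\mid\alpha,\beta,r)
  = \int_0^1 f_{X\mid p}(k\mid q)\, f_p(q\mid\alpha,\beta)\,\mathrm{d}q
  = \binom{k+r-1}{k}\frac{1}{\mathrm{B}(\alpha,\beta)}
    \int_0^1 q^{\alpha+r-1}(1-q)^{\beta+k-1}\,\mathrm{d}q ,
```

and since the remaining integral is the beta function B(α + r, β + k),

```latex
f(k\mid\alpha,\beta,r)
  = \frac{\Gamma(r+k)}{k!\,\Gamma(r)}\,
    \frac{\mathrm{B}(\alpha+r,\,\beta+k)}{\mathrm{B}(\alpha,\beta)},
  \qquad k = 0, 1, 2, \dots
```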
The k-th factorial moment of a beta negative binomial random variable X is defined for k < α and in this case is equal to:
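The expression itself is missing here; combining the negative binomial factorial moment Γ(r + k)/Γ(r) · ((1 − p)/p)^k with the beta expectation of ((1 − p)/p)^k gives, as a reconstruction valid for k < α,

```latex
\operatorname{E}\bigl[X(X-1)\cdots(X-k+1)\bigr]
  = \frac{\Gamma(r+k)}{\Gamma(r)}\,
    \frac{\Gamma(\beta+k)}{\Gamma(\beta)}\,
    \frac{\Gamma(\alpha-k)}{\Gamma(\alpha)},
  \qquad k < \alpha .
```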
The beta negative binomial is non-identifiable, which can be seen easily by simply swapping r and β in the above density or characteristic function and noting that it is unchanged. Thus estimation demands that a constraint be placed on r, β or both.
The beta negative binomial distribution contains the beta geometric distribution as a special case when either r = 1 or β = 1. It can therefore approximate the geometric distribution arbitrarily well. It also approximates the negative binomial distribution arbitrarily well for large α. It can therefore approximate the Poisson distribution arbitrarily well for large α, β and r.
By Stirling's approximation to the beta function, it can be easily shown that for large k
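The asymptotic expression is missing; applying Stirling's approximation to the PMF reconstructed above gives, as a sketch,

```latex
f(k\mid\alpha,\beta,r) \;\sim\;
  \frac{\Gamma(\alpha+r)}{\Gamma(r)\,\mathrm{B}(\alpha,\beta)}\;
  k^{-(\alpha+1)}
  \qquad \text{as } k \to \infty ,
```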
which implies that the beta negative binomial distribution is heavy tailed and that moments of order greater than or equal to α do not exist.
The beta geometric distribution is an important special case of the beta negative binomial distribution occurring for r = 1. In this case the pmf simplifies to
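The simplified expression is missing in this text; substituting r = 1 into the PMF reconstructed above gives

```latex
f(k\mid\alpha,\beta)
  = \frac{\mathrm{B}(\alpha+1,\,\beta+k)}{\mathrm{B}(\alpha,\beta)},
  \qquad k = 0, 1, 2, \dots
```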
This distribution is used in someBuy Till you Die(BTYD) models.
Further, when β = 1 the beta geometric reduces to the Yule–Simon distribution. However, it is more common to define the Yule–Simon distribution in terms of a shifted version of the beta geometric. In particular, if X ∼ BG(α, 1) then X + 1 ∼ YS(α).
In the case when the three parameters r, α and β are positive integers, the beta negative binomial can also be motivated by an urn model, or more specifically a basic Pólya urn model. Consider an urn initially containing α red balls (the stopping color) and β blue balls. At each step of the model, a ball is drawn at random from the urn and replaced, along with one additional ball of the same color. The process is repeated over and over, until r red balls have been drawn. The random variable X counting the observed draws of blue balls is distributed according to a BNB(r, α, β). Note that at the end of the experiment the urn always contains the fixed number r + α of red balls, while containing the random number X + β of blue balls.
By the non-identifiability property, X can be equivalently generated with the urn initially containing α red balls (the stopping color) and r blue balls, stopping when β red balls are observed.
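A minimal simulation sketch of this urn scheme (the function name is an illustrative addition, as is the comparison against the mean rβ/(α − 1), which holds for α > 1):

```python
import random

def sample_bnb_urn(r: int, alpha: int, beta: int, rng=random) -> int:
    """Draw one BNB(r, alpha, beta) sample via the Polya urn scheme described above:
    start with `alpha` red (stopping color) and `beta` blue balls; after each draw,
    return the ball plus one extra of the same color; stop after the r-th red draw
    and return the number of blue draws observed."""
    red, blue = alpha, beta
    red_draws = blue_draws = 0
    while red_draws < r:
        if rng.random() < red / (red + blue):
            red_draws += 1
            red += 1
        else:
            blue_draws += 1
            blue += 1
    return blue_draws

# Quick sanity check of the sample mean against r*beta/(alpha - 1), valid for alpha > 1.
samples = [sample_bnb_urn(r=3, alpha=5, beta=2) for _ in range(100_000)]
print(sum(samples) / len(samples), "vs", 3 * 2 / (5 - 1))
```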
|
https://en.wikipedia.org/wiki/Beta_negative_binomial_distribution
|
Robert Anson Heinlein(/ˈhaɪnlaɪn/HYNE-lyne;[2][3][4]July 7, 1907 – May 8, 1988) was an Americanscience fictionauthor,aeronautical engineer, andnaval officer. Sometimes called the "dean of science fiction writers",[5]he was among the first to emphasize scientific accuracy in his fiction, and was thus a pioneer of the subgenre ofhard science fiction. His published works, both fiction and non-fiction, express admiration for competence and emphasize the value ofcritical thinking.[6]His plots often posed provocative situations which challenged conventionalsocial mores.[7]His work continues to have an influence on the science-fiction genre, and on modern culture more generally.
Heinlein became one of the first American science-fiction writers to break into mainstream magazines such asThe Saturday Evening Postin the late 1940s. He was one of the best-selling science-fiction novelists for many decades, and he,Isaac Asimov, andArthur C. Clarkeare often considered the "Big Three" ofEnglish-languagescience fiction authors.[8][9][10]Notable Heinlein works includeStranger in a Strange Land,[11]Starship Troopers(which helped mold thespace marineandmechaarchetypes) andThe Moon Is a Harsh Mistress.[12]His work sometimes had controversial aspects, such asplural marriageinThe Moon Is a Harsh Mistress,militarisminStarship Troopersand technologically competent women characters who were formidable,[13]yet often stereotypically feminine—such asFriday.
Heinlein used his science fiction as a way to explore provocative social and political ideas and to speculate how progress in science and engineering might shape the future of politics, race, religion, and sex.
Within the framework of his science-fiction stories, Heinlein repeatedly addressed certain social themes: the importance of individuallibertyandself-reliance, the nature of sexual relationships, the obligation individuals owe to their societies, the influence oforganized religionon culture and government, and the tendency of society to repressnonconformistthought. He also speculated on the influence of space travel on human cultural practices.
Heinlein was heavily influenced by the visionary writers and philosophers of his day. William H. Patterson Jr., writing inRobert A. Heinlein: In Dialogue with His Century, states that by 1930, Heinlein was a progressive liberal who had spent some time in the open sexuality climate ofNew York'sJazz AgeGreenwich Village. Heinlein believed that some level of socialism was inevitable and was already occurring in America. He was absorbing the social concepts of writers such asH. G. WellsandUpton Sinclair. He adopted many of the progressive social beliefs of his day and projected them forward.[14]In later years, he began to espouseconservativeviews and to believe that a strongworld governmentwas the only way to avoidmutual nuclear annihilation.[15]
Heinlein was named the firstScience Fiction Writers Grand Masterin 1974.[16]Four of his novels wonHugo Awards. In addition, fifty years after publication, seven of his works were awarded "Retro Hugos"—awards given retrospectively for works that were published before the Hugo Awards came into existence.[17]In his fiction, Heinlein coined terms that have become part of the English language, includinggrok,waldoandspeculative fiction, as well as popularizing existing terms like "TANSTAAFL", "pay it forward", and "space marine". He also anticipated mechanicalcomputer-aided designwith "Drafting Dan" in his novelThe Door into Summerand described a modern version of awaterbedin his novelStranger in a Strange Land.
Heinlein, born on July 7, 1907, to Rex Ivar Heinlein (an accountant) and Bam Lyle Heinlein, inButler, Missouri, was the third of seven children. He was a sixth-generationGerman-American; a family tradition had it that Heinleins fought in every American war, starting with theWar of Independence.[18]
He spent his childhood inKansas City, Missouri.[19]The outlook and values of this time and place (in his own words, "TheBible Belt") had an influence on his fiction, especially in his later works, as he drew heavily upon his childhood in establishing the setting and cultural atmosphere in works likeTime Enough for LoveandTo Sail Beyond the Sunset.[citation needed]The 1910 appearance ofHalley's Cometinspired the young child's life-long interest in astronomy.[20]
In January 1924, the sixteen-year-old Heinlein lied about his age to enlist in Company C, 110th Engineer Regiment, of the Missouri National Guard, in Kansas City. His family could not afford to send Heinlein to college, so he sought an appointment to a military academy.[21] When Heinlein graduated from Kansas City Central High School in 1924, he was initially prevented from attending the United States Naval Academy at Annapolis because his older brother Rex was a student there, and at the time, regulations discouraged multiple family members from attending the academy simultaneously.[citation needed] He instead matriculated at Kansas City Community College and began vigorously petitioning Missouri Senator James A. Reed for an appointment to the Naval Academy. In part due to the influence of the Pendergast machine, the Naval Academy admitted him in June 1925.[12] Heinlein received his discharge from the Missouri National Guard as a staff sergeant. Reed later told Heinlein that he had received 100 letters of recommendation for nomination to the Naval Academy, 50 for other candidates and 50 for Heinlein.[21]
Heinlein's experience in theU.S. Navyexerted a strong influence on his character and writing. In 1929, he graduated from the Naval Academy with the equivalent of abachelor of artsin engineering.[22](At that time, the Academy did not confer degrees.) He ranked fifth in his class academically but with a class standing of 20th of 243 due to disciplinary demerits. The U.S. Navy commissioned him as an ensign shortly after his graduation. He advanced to lieutenant junior grade in 1931 while serving aboard the newaircraft carrierUSSLexington, where he worked inradio communications—a technology then still in its earlier stages. Thecaptainof this carrier,Ernest J. King, later served as theChief of Naval OperationsandCommander-in-Chief, U.S. FleetduringWorld War II. Military historians frequently[quantify]interviewed Heinlein during his later years and asked him about Captain King and his service as the commander of the U.S. Navy's first modern aircraft carrier. Heinlein also served as gunnery officer aboard thedestroyerUSSRoperin 1933 and 1934, reaching the rank of lieutenant.[23]His brother, Lawrence Heinlein, served in the U.S. Army, the U.S. Air Force, and theMissouri National Guard, reaching the rank ofmajor generalin the National Guard.[24]
In 1929, Heinlein married Elinor Curry of Kansas City.[25]However, their marriage lasted only about one year.[3]His second marriage, to Leslyn MacDonald (1904–1981) in 1932, lasted 15 years. MacDonald was, according to the testimony of Heinlein's Navy friend,Rear AdmiralCal Laning, "astonishingly intelligent, widely read, and extremely liberal, though a registeredRepublican",[26]while Isaac Asimov later recalled that Heinlein was, at the time, "a flamingliberal".[27](See section:Politics of Robert Heinlein.)
At thePhiladelphia Naval Shipyard, Heinlein met and befriended achemical engineernamedVirginia "Ginny" Gerstenfeld. After the war, her engagement having fallen through, she attendedUCLAfor doctoral studies inchemistry, and while there reconnected with Heinlein. As his second wife'salcoholismgradually spun out of control,[28]Heinlein moved out and the couple filed for divorce. Heinlein's friendship with Virginia turned into a relationship and on October 21, 1948—shortly after thedecree nisicame through—they married in the town ofRaton, New Mexico. Soon thereafter, they set up housekeeping in the Broadmoor district ofColorado Springs, Colorado, in a house that Heinlein and his wife designed. As the area was newly developed, they were allowed to choose their own house number, 1776 Mesa Avenue.[29]The design of the house was featured inPopular Mechanics.[30]They remained married until Heinlein's death. In 1965, after various chronic health problems of Virginia's were traced back toaltitude sickness, they moved toSanta Cruz, California, which is atsea level. Robert and Virginia designed and built a new residence, circular in shape, in the adjacent village ofBonny Doon.[31][32]
Ginny undoubtedly served as a model for many of his intelligent, fiercely independent female characters.[33][34] She was a chemist and rocket test engineer, and held a higher rank in the Navy than Heinlein himself. She was also an accomplished college athlete, earning four varsity letters.[1] In 1953–1954, the Heinleins voyaged around the world (mostly via ocean liners and cargo liners, as Ginny detested flying), which Heinlein described in Tramp Royale. The trip provided background material for science fiction novels set aboard spaceships on long voyages, such as Podkayne of Mars, Friday and Job: A Comedy of Justice, the latter initially being set on a cruise much as detailed in Tramp Royale. Ginny acted as the first reader of his manuscripts. Isaac Asimov believed that Heinlein made a swing to the right politically at the same time he married Ginny.
In 1934, Heinlein was discharged from the Navy, owing to pulmonary tuberculosis. During a lengthy hospitalization, and inspired by his own experience while bed-ridden, he developed a design for a waterbed.[35]
After his discharge, Heinlein attended a few weeks of graduate classes in mathematics and physics at the University of California, Los Angeles (UCLA), but he soon quit, either because of his ill health or because of a desire to enter politics.[36]
Heinlein supported himself at several occupations, including real estate sales and silver mining, but for some years found money in short supply. Heinlein was active in Upton Sinclair's socialist End Poverty in California movement (EPIC) in the early 1930s. He was deputy publisher of the EPIC News, which Heinlein noted "recalled a mayor, kicked out a district attorney, replaced the governor with one of our choice."[37] When Sinclair gained the Democratic nomination for Governor of California in 1934, Heinlein worked actively in the campaign. Heinlein himself ran for the California State Assembly in 1938, but was unsuccessful. Heinlein was running as a left-wing Democrat in a conservative district, and he never made it past the Democratic primary.[38]
While not destitute after the campaign—he had a small disability pension from the Navy—Heinlein turned to writing to pay off his mortgage. His first published story, "Life-Line", was printed in the August 1939 issue of Astounding Science Fiction.[39] Originally written for a contest, it sold to Astounding for significantly more than the contest's first-prize payoff. Another Future History story, "Misfit", followed in November.[39] Some recognized Heinlein's talent and saw stardom ahead from his first story,[40] and he was quickly acknowledged as a leader of the new movement toward "social" science fiction. In California he hosted the Mañana Literary Society, a 1940–41 series of informal gatherings of new authors.[41] He was the guest of honor at Denvention, the 1941 Worldcon, held in Denver. During World War II, Heinlein was employed by the Navy as a civilian aeronautical engineer at the Navy Aircraft Materials Center at the Philadelphia Naval Shipyard in Pennsylvania.[42] Heinlein recruited Isaac Asimov and L. Sprague de Camp to also work there.[35] While at the Philadelphia Naval Shipyard, Asimov, Heinlein, and de Camp brainstormed unconventional approaches to countering kamikaze attacks, such as using sound to detect approaching planes.[43]
As the war wound down in 1945, Heinlein began to re-evaluate his career. The atomic bombings of Hiroshima and Nagasaki, along with the outbreak of the Cold War, galvanized him to write nonfiction on political topics. In addition, he wanted to break into better-paying markets. He published four influential short stories for The Saturday Evening Post magazine, leading off, in February 1947, with "The Green Hills of Earth". That made him the first science fiction writer to break out of the "pulp ghetto". In 1950, the movie Destination Moon—the documentary-like film for which he had written the story and scenario, co-written the script, and invented many of the effects—won an Academy Award for special effects.
Heinlein created SF stories with social commentary about relationships. In The Puppet Masters, a 1951 alien invasion novel, the point-of-view character Sam persuades fellow operative Mary to marry him. When they go to the county clerk, they are offered a variety of marriage possibilities: "Term, renewable or lifetime", as short as six months or as long as forever.[44]
In addition, he embarked on a series of juvenile novels for the Charles Scribner's Sons publishing company that ran from 1947 through 1959, at the rate of one book each autumn, in time for Christmas presents to teenagers. He also wrote for Boys' Life in 1952.
Heinlein used topical materials throughout his juvenile series beginning in 1947, but in 1958 he interrupted work on The Heretic (the working title of Stranger in a Strange Land) to write and publish a book exploring ideas of civic virtue, initially serialized as Starship Soldiers. In 1959, his novel (now entitled Starship Troopers) was considered by the editors and owners of Scribner's to be too controversial for one of its prestige lines, and it was rejected.[45] Heinlein found another publisher (Putnam), feeling himself released from the constraints of writing novels for children. He had told an interviewer that he did not want to do stories that merely added to categories defined by other works. Rather he wanted to do his own work, stating that: "I want to do my own stuff, my own way".[46] He would go on to write a series of challenging books that redrew the boundaries of science fiction, including Stranger in a Strange Land (1961) and The Moon Is a Harsh Mistress (1966).
Beginning in 1970, Heinlein had a series of health crises, broken by strenuous periods of activity in his hobby of stonemasonry: in a private correspondence, he referred to that as his "usual and favorite occupation between books".[47] The decade began with a life-threatening attack of peritonitis, recovery from which required more than two years, and treatment of which required multiple transfusions of Heinlein's rare blood type, A2 negative.[citation needed] As soon as he was well enough to write again, he began work on Time Enough for Love (1973), which introduced many of the themes found in his later fiction.
In the mid-1970s, Heinlein wrote two articles for the Britannica Compton Yearbook.[48] He and Ginny crisscrossed the country helping to reorganize blood donation in the United States in an effort to assist the system which had saved his life.[citation needed] At science fiction conventions, fans seeking his autograph would be asked to co-sign with Heinlein a beautifully embellished pledge form he supplied, stating that the recipient agreed to donate blood. He was the guest of honor at the Worldcon in 1976 for the third time at MidAmeriCon in Kansas City, Missouri. At that Worldcon, Heinlein hosted a blood drive and donors' reception to thank all those who had helped save lives.
Beginning in 1977, and including an episode while vacationing in Tahiti in early 1978, he had episodes of reversible neurologic dysfunction due to transient ischemic attacks.[49] Over the next few months, he became more and more exhausted, and his health again began to decline. The problem was determined to be a blocked carotid artery, and he had one of the earliest known carotid bypass operations to correct it.
In 1980, Robert Heinlein was a member of the Citizen's Advisory Council on National Space Policy, chaired by Jerry Pournelle, which met at the home of SF writer Larry Niven to write space policy papers for the incoming Reagan administration. Members included such aerospace industry leaders as former astronaut Buzz Aldrin, General Daniel O. Graham, aerospace engineer Max Hunter and North American Rockwell VP for Space Shuttle development George Merrick. Policy recommendations from the Council included ballistic missile defense concepts which were later transformed into what was called the Strategic Defense Initiative. Heinlein assisted with the Council's contribution to Reagan's spring 1983 SDI speech. Asked to appear before a Joint Committee of the United States Congress that year, he testified on his belief that spin-offs from space technology were benefiting the infirm and the elderly.
Heinlein's surgical treatment re-energized him, and he wrote five novels from 1980 until he died in his sleep from emphysema and heart failure on May 8, 1988.
Spider Robinson later wrote the novel Variable Star based on an outline and notes created by Heinlein in 1955.[50] Heinlein's posthumously published nonfiction includes a selection of correspondence and notes edited into a somewhat autobiographical examination of his career, published in 1989 under the title Grumbles from the Grave by his wife, Virginia; his book on practical politics written in 1946 and published as Take Back Your Government in 1992; and a travelogue of their first around-the-world tour in 1954, Tramp Royale. The novel Podkayne of Mars, which had been edited against Heinlein's wishes in its original release, was reissued with the original ending. Stranger in a Strange Land was originally published in a shorter form, but both the long and short versions are now simultaneously available in print.
Heinlein's archive is housed by the Special Collections department of McHenry Library at the University of California at Santa Cruz. The collection includes manuscript drafts, correspondence, photographs and artifacts. A substantial portion of the archive has been digitized and is available online through the Robert A. and Virginia Heinlein Archives.[51]
Heinlein published 32 novels, 59 short stories, and 16 collections during his life. Nine films, two television series, several episodes of a radio series, and a board game have been derived more or less directly from his work. He wrote a screenplay for one of the films. Heinlein edited an anthology of other writers' SF short stories.
Three nonfiction books and two poems have been published posthumously. For Us, the Living: A Comedy of Customs was published posthumously in 2003;[52] Variable Star, written by Spider Robinson based on an extensive outline by Heinlein, was published in September 2006. Four collections have been published posthumously.[39]
Heinlein began his career as a writer of stories forAstounding Science Fictionmagazine, which was edited by John Campbell. The science fiction writerFrederik Pohlhas described Heinlein as "that greatest of Campbell-era sf writers".[53]Isaac Asimov said that, from the time of his first story, the science fiction world accepted that Heinlein was the best science fiction writer in existence, adding that he would hold this title through his lifetime.[54]
Alexei and Cory Panshin noted that Heinlein's impact was immediately felt. In 1940, the year after selling "Life-Line" to Campbell, he wrote three short novels, four novelettes, and seven short stories. They went on to say that "No one ever dominated the science fiction field as Bob did in the first few years of his career."[55] Alexei expresses awe at Heinlein's ability to show readers a world so drastically different from the one we live in now, yet with so many similarities. He says that "We find ourselves not only in a world other than our own, but identifying with a living, breathing individual who is operating within its context, and thinking and acting according to its terms."[56]
The first novel that Heinlein wrote, For Us, the Living: A Comedy of Customs (1939), did not see print during his lifetime, but Robert James tracked down the manuscript and it was published in 2003. Though some regard it as a failure as a novel,[19] considering it little more than a disguised lecture on Heinlein's social theories, some readers took a very different view. In a review of it, John Clute wrote:
I'm not about to suggest that if Heinlein had been able to publish [such works] openly in the pages ofAstoundingin 1939, SF would have gotten the future right; I would suggest, however, that if Heinlein, and his colleagues, had been able to publish adult SF inAstoundingand its fellow journals, then SF might not have done such a grotesquely poor job of prefiguring something of the flavor of actually living here at the onset of 2004.[57]
For Us, the Living was intriguing as a window into the development of Heinlein's radical ideas about man as a social animal, including his interest in free love. The root of many themes found in his later stories can be found in this book. It also contained a large amount of material that could be considered background for his other novels. This included a detailed description of the protagonist's treatment to avoid being banished to Coventry (a lawless land in the Heinlein mythos where unrepentant law-breakers are exiled).[58]
It appears that Heinlein at least attempted to live in a manner consistent with these ideals, even in the 1930s, and had an open relationship in his marriage to his second wife, Leslyn. He was also a nudist;[3] nudism and body taboos are frequently discussed in his work. At the height of the Cold War, he built a bomb shelter under his house, like the one featured in Farnham's Freehold.[3]
After For Us, the Living, Heinlein began selling (to magazines) first short stories, then novels, set in a Future History, complete with a time line of significant political, cultural, and technological changes. A chart of the future history was published in the May 1941 issue of Astounding. Over time, Heinlein wrote many novels and short stories that deviated freely from the Future History on some points, while maintaining consistency in some other areas. The Future History was eventually overtaken by actual events. These discrepancies were explained, after a fashion, in his later World as Myth stories.
Heinlein's first novel published as a book, Rocket Ship Galileo, was initially rejected because going to the Moon was considered too far-fetched, but he soon found a publisher, Scribner's, that began publishing a Heinlein juvenile once a year for the Christmas season.[59] Eight of these books were illustrated by Clifford Geary in a distinctive white-on-black scratchboard style.[60] Some representative novels of this type are Have Space Suit—Will Travel, Farmer in the Sky, and Starman Jones. Many of these were first published in serial form under other titles, e.g., Farmer in the Sky was published as Satellite Scout in the Boy Scout magazine Boys' Life. There has been speculation that Heinlein's intense obsession with his privacy was due at least in part to the apparent contradiction between his unconventional private life[clarification needed] and his career as an author of books for children. However, For Us, the Living explicitly discusses the political importance Heinlein attached to privacy as a matter of principle.[63]
The novels that Heinlein wrote for a young audience are commonly called "the Heinlein juveniles", and they feature a mixture of adolescent and adult themes. Many of the issues that he takes on in these books have to do with the kinds of problems that adolescents experience. His protagonists are usually intelligent teenagers who have to make their way in the adult society they see around them. On the surface, they are simple tales of adventure, achievement, and dealing with stupid teachers and jealous peers. Heinlein was a vocal proponent of the notion that juvenile readers were far more sophisticated and able to handle more complex or difficult themes than most people realized. His juvenile stories often had a maturity to them that made them readable for adults. Red Planet, for example, portrays some subversive themes, including a revolution in which young students are involved; his editor demanded substantial changes in this book's discussion of topics such as the use of weapons by children and the misidentified sex of the Martian character. Heinlein was always aware of the editorial limitations put in place by the editors of his novels and stories, and while he observed those restrictions on the surface, he was often successful in introducing ideas not often seen in other authors' juvenile SF.
In 1957,James Blishwrote that one reason for Heinlein's success "has been the high grade of machinery which goes, today as always, into his story-telling. Heinlein seems to have known from the beginning, as if instinctively, technical lessons about fiction which other writers must learn the hard way (or often enough, never learn). He does not always operate the machinery to the best advantage, but he always seems to be aware of it."[64]
Heinlein decisively ended his juvenile novels with Starship Troopers (1959), a controversial work and his personal riposte to leftists calling for President Dwight D. Eisenhower to stop nuclear testing in 1958. "The 'Patrick Henry' ad shocked 'em", he wrote many years later of the campaign. "Starship Troopers outraged 'em."[65] Starship Troopers is a coming-of-age story about duty, citizenship, and the role of the military in society.[66] The book portrays a society in which suffrage is earned by demonstrated willingness to place society's interests before one's own, at least for a short time and often under onerous circumstances, in government service; in the case of the protagonist, this was military service.
Later, inExpanded Universe, Heinlein said that it was his intention in the novel that service could include positions outside strictly military functions such as teachers, police officers, and other government positions. This is presented in the novel as an outgrowth of the failure of unearned suffrage government and as a very successful arrangement. In addition, the franchise was only awarded after leaving the assigned service; thus those serving their terms—in the military, or any other service—were excluded from exercising any franchise. Career military were completely disenfranchised until retirement.
From about 1961 (Stranger in a Strange Land) to 1973 (Time Enough for Love), Heinlein explored some of his most important themes, such as individualism, libertarianism, and free expression of physical and emotional love. Three novels from this period, Stranger in a Strange Land, The Moon Is a Harsh Mistress, and Time Enough for Love, won the Libertarian Futurist Society's Prometheus Hall of Fame Award, designed to honor classic libertarian fiction.[67] Jeff Riggenbach described The Moon Is a Harsh Mistress as "unquestionably one of the three or four most influential libertarian novels of the last century".[68]
Heinlein did not publish Stranger in a Strange Land until some time after it was written, and the themes of free love and radical individualism are prominently featured in his long-unpublished first novel, For Us, the Living: A Comedy of Customs.
The Moon Is a Harsh Mistresstells of a war of independence waged by the Lunar penal colonies, with significant comments from a major character, Professor La Paz, regarding the threat posed by government to individual freedom.
Although Heinlein had previously written a few short stories in the fantasy genre, during this period he wrote his first fantasy novel, Glory Road. In Stranger in a Strange Land and I Will Fear No Evil, he began to mix hard science with fantasy, mysticism, and satire of organized religion. Critics William H. Patterson, Jr., and Andrew Thornton believe that this is simply an expression of Heinlein's longstanding philosophical opposition to positivism.[69] Heinlein stated that he was influenced by James Branch Cabell in taking this new literary direction. The penultimate novel of this period, I Will Fear No Evil, is according to critic James Gifford "almost universally regarded as a literary failure"[70] and he attributes its shortcomings to Heinlein's near-death from peritonitis.
After a seven-year hiatus brought on by poor health, Heinlein produced five new novels in the period from 1980 (The Number of the Beast) to 1987 (To Sail Beyond the Sunset). These books have a thread of common characters and time and place. They most explicitly communicated Heinlein's philosophies and beliefs, and many long, didactic passages of dialog and exposition deal with government, sex, and religion. These novels are controversial among his readers and one critic, David Langford, has written about them very negatively.[71] Heinlein's four Hugo Awards were all for books written before this period.
Most of the novels from this period are recognized by critics as forming an offshoot from the Future History series and are referred to by the termWorld as Myth.[72]
The tendency toward authorial self-reference begun inStranger in a Strange LandandTime Enough for Lovebecomes even more evident in novels such asThe Cat Who Walks Through Walls, whose first-person protagonist is a disabled military veteran who becomes a writer, and finds love with a female character.[73]
The 1982 novel Friday, a more conventional adventure story (borrowing a character and backstory from the earlier short story Gulf, and also containing suggestions of a connection to The Puppet Masters), continued a Heinlein theme: the expectation of what he saw as the continuing disintegration of Earth's society, to the point where the title character is strongly encouraged to seek a new life off-planet. It concludes on a traditional Heinlein note, as in The Moon Is a Harsh Mistress or Time Enough for Love, that freedom is to be found on the frontiers.
The 1984 novelJob: A Comedy of Justiceis a sharp satire of organized religion. Heinlein himself was agnostic.[74][75]
Several Heinlein works have been published since his death, including the aforementioned For Us, the Living as well as 1989's Grumbles from the Grave, a collection of letters between Heinlein and his editors and agent; 1992's Tramp Royale, a travelogue of a southern hemisphere tour the Heinleins took in the 1950s; Take Back Your Government, a how-to book about participatory democracy written in 1946 and reflecting his experience as an organizer with the EPIC campaign of 1934 and the movement's aftermath as an important factor in California politics before the Second World War; and a tribute volume called Requiem: Collected Works and Tributes to the Grand Master, containing some additional short works previously unpublished in book form. Off the Main Sequence, published in 2005, includes three short stories never before collected in any Heinlein book (Heinlein called them "stinkeroos").
Spider Robinson, a colleague, friend, and admirer of Heinlein,[76] wrote Variable Star, based on an outline and notes for a novel that Heinlein prepared in 1955. The novel was published as a collaboration, with Heinlein's name above Robinson's on the cover, in 2006.
A complete collection of Heinlein's published work has been published[77]by the Heinlein Prize Trust as the "Virginia Edition", after his wife. See the Complete Works section ofRobert A. Heinlein bibliographyfor details.
On February 1, 2019, Phoenix Pick announced that, through a collaboration with the Heinlein Prize Trust, a reconstruction of the full text of an unpublished Heinlein novel had been produced. The reconstructed novel, entitled The Pursuit of the Pankera: A Parallel Novel about Parallel Universes,[78] is an alternative version of The Number of the Beast: its first third is mostly the same as the first third of The Number of the Beast, but the remainder deviates entirely, with a completely different story-line. The newly reconstructed novel pays homage to Edgar Rice Burroughs and E. E. "Doc" Smith. It was edited by Patrick Lobrutto. Some reviewers describe the reconstructed novel as more in line with the style of a traditional Heinlein novel than was The Number of the Beast,[79] and some considered it superior to the original version.[80] Both The Pursuit of the Pankera and a new edition of The Number of the Beast[81] were published in March 2020. The new edition of the latter shares the subtitle of The Pursuit of the Pankera and is hence entitled The Number of the Beast: A Parallel Novel about Parallel Universes.[82][83]
Heinlein contributed to the final draft of the script forDestination Moon(1950) and served as a technical adviser for the film.[84]Heinlein also shared screenwriting credit forProject Moonbase(1953).
The primary influence on Heinlein's writing style may have been Rudyard Kipling. Kipling is the first known modern example of "indirect exposition", a writing technique for which Heinlein later became famous.[85] In his essay "On the Writing of Speculative Fiction", Heinlein quotes Kipling:
There are nine-and-sixty ways
Of constructing tribal lays
And every single one of them is right
Stranger in a Strange Landoriginated as a modernized version of Kipling'sThe Jungle Book. His wife suggested that the child be raised by Martians instead of wolves. Likewise,Citizen of the Galaxycan be seen as a reboot of Kipling's novelKim.[86]
TheStarship Troopersidea of needing to serve in the military in order to vote can be found in Kipling's "The Army of a Dream":
But as a little detail we never mention, if we don't volunteer in some corps or other—as combatants if we're fit, as non-combatants if we ain't—till we're thirty-five—we don't vote, and we don't get poor-relief, and the women don't love us.
Poul Anderson once said of Kipling's science fiction story "As Easy as A.B.C.", "a wonderful science fiction yarn, showing the same eye for detail that would later distinguish the work of Robert Heinlein".
Heinlein described himself as also being influenced byGeorge Bernard Shaw, having read most of his plays.[87]Shaw is an example of an earlier author who used thecompetent man, a favorite Heinlein archetype.[88]He denied, though, any direct influence ofBack to MethuselahonMethuselah's Children.
Heinlein's books probe a range of ideas about topics such as sexuality, race, politics, and the military. Many were seen as radical or as ahead of their time in their social criticism. His books have inspired considerable debate about the specifics, and the evolution, of Heinlein's own opinions, and have earned him both lavish praise and a degree of criticism. He has also been accused of contradicting himself on various philosophical questions.[89]
Brian Dohertycites William Patterson, saying that the best way to gain an understanding of Heinlein is as a "full-service iconoclast, the unique individual who decides that things do not have to be, and won't continue, as they are". He says this vision is "at the heart of Heinlein, science fiction, libertarianism, and America. Heinlein imagined how everything about the human world, from our sexual mores to our religion to our automobiles to our government to our plans for cultural survival, might be flawed, even fatally so."[90]
The criticElizabeth Anne Hull, for her part, has praised Heinlein for his interest in exploring fundamental life questions, especially questions about "political power—our responsibilities to one another" and about "personal freedom, particularly sexual freedom".[91]
Edward R. Murrow hosted a series on CBS Radio called This I Believe, which solicited an entry from Heinlein in 1952. In his entry, titled "Our Noble, Essential Decency", Heinlein broke with the normal trends, stating that he believed in his neighbors (some of whom he named and described), in his community, and in towns across America that shared the same sense of good will and intentions as his own, going on to apply the same philosophy to the US and to humanity in general.
I believe in my fellow citizens. Our headlines are splashed with crime. Yet for every criminal, there are ten thousand honest, decent, kindly men. If it were not so, no child would live to grow up. Business could not go on from day to day. Decency is not news. It is buried in the obituaries, but it is a force stronger than crime.
Heinlein's political positions shifted throughout his life. Heinlein's early political leanings were liberal.[92] In 1934, he worked actively for the Democratic campaign of Upton Sinclair for Governor of California. After Sinclair lost, Heinlein became an anti-communist Democratic activist. He made an unsuccessful bid for a California State Assembly seat in 1938.[92] Heinlein's first novel, For Us, the Living (written 1939), consists largely of speeches advocating the Social Credit philosophy, and the early story "Misfit" (1939) deals with an organization—"The Cosmic Construction Corps"—that seems to be Franklin D. Roosevelt's Civilian Conservation Corps translated into outer space.[93]
Of this time in his life, Heinlein later said:
At the time I wroteMethuselah's ChildrenI was still politically quite naïve and still had hopes that various libertarian notions could be put over by political processes... It [now] seems to me that every time we manage to establish one freedom, they take another one away. Maybe two. And that seems to me characteristic of a society as it gets older, and more crowded, and higher taxes, and more laws.[87]
Heinlein's fiction of the 1940s and 1950s, however, began to espouse conservative views. After 1945, he came to believe that a strong world government was the only way to avoid mutual nuclear annihilation.[94] His 1948 novel Space Cadet describes a future scenario where a military-controlled global government enforces world peace. Heinlein ceased considering himself a Democrat in 1954.[92]
The Heinleins formed thePatrick Henry Leaguein 1958, and they worked in the 1964Barry Goldwaterpresidential campaign.[27]
When Robert A. Heinlein opened hisColorado Springsnewspaper on April 5, 1958, he read a full-page ad demanding that the Eisenhower Administration stop testing nuclear weapons. The science fiction author was flabbergasted. He called for the formation of the Patrick Henry League and spent the next several weeks writing and publishing his own polemic that lambasted "Communist-line goals concealed in idealistic-sounding nonsense" and urged Americans not to become "soft-headed".[65]
Heinlein's response ad was entitled "Who Are the Heirs of Patrick Henry?". It started with the famous Henry quotation: "Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take, but as for me, give me liberty, or give me death!" It then went on to admit that there was some risk to nuclear testing (albeit less than the "willfully distorted" claims of the test ban advocates), and risk of nuclear war, but that "The alternative is surrender. We accept the risks." Heinlein was among those who in 1968 signed a pro–Vietnam War ad in Galaxy Science Fiction.[95]
Heinlein always considered himself a libertarian; in a letter to Judith Merril in 1967 (never sent) he said, "As for libertarian, I've been one all my life, a radical one. You might use the term 'philosophical anarchist' or 'autarchist' about me, but 'libertarian' is easier to define and fits well enough."[96]
Stranger in a Strange Landwas embraced by the 1960scounterculture, and libertarians have found inspiration inThe Moon Is a Harsh Mistress. Both groups found resonance with his themes of personal freedom in both thought and action.[68]
Heinlein grew up in the era of racial segregation in the United States and wrote some of his most influential fiction at the height of the Civil Rights Movement. He explicitly made the case for using his fiction not only to predict the future but also to educate his readers about the value of racial equality and the importance of racial tolerance.[97] His early novels were ahead of their time both in their explicit rejection of racism and in their inclusion of protagonists of color. In the context of science fiction before the 1960s, the mere existence of characters of color was a remarkable novelty, with green occurring more often than brown.[98] For example, his 1948 novel Space Cadet explicitly uses aliens as a metaphor for minorities. The 1947 story "Jerry Was a Man" uses enslaved genetically modified chimpanzees as a symbol for Black Americans fighting for civil rights.[99] In his novel The Star Beast, the de facto foreign minister of the Terran government is an undersecretary, a Mr. Kiku, who is from Africa.[100] Heinlein explicitly states his skin is "ebony black" and that Kiku is in an arranged marriage that is happy.[101]
In a number of his stories, Heinlein challenges his readers' possible racial preconceptions by introducing a strong, sympathetic character, only to reveal much later that he or she is of African or other ancestry. In several cases, the covers of the books show characters as being light-skinned when the text states or at least implies that they are dark-skinned or of African ancestry.[104] Heinlein repeatedly denounced racism in his nonfiction works, including numerous examples in Expanded Universe.
Heinlein reveals in Starship Troopers that the novel's protagonist and narrator, Johnny Rico, the formerly disaffected scion of a wealthy family, is Filipino, actually named "Juan Rico", and speaks Tagalog in addition to English.
Race was a central theme in some of Heinlein's fiction. The most prominent example is Farnham's Freehold, which casts a white family into a future in which white people are the slaves of cannibalistic black rulers. In the 1941 novel Sixth Column (also known as The Day After Tomorrow), a white resistance movement in the United States defends itself against an invasion by an Asian fascist state (the "Pan-Asians") using a "super-science" technology that allows ray weapons to be tuned to specific races. The idea for the story was pushed on Heinlein by editor John W. Campbell, and the story itself was based on a then-unpublished story by Campbell; Heinlein wrote later that he had "had to re-slant it to remove racist aspects of the original story line" and that he did not "consider it to be an artistic success".[105][106] However, the novel prompted a heated debate in the scientific community regarding the plausibility of developing ethnic bioweapons.[107] John Hickman, writing in the European Journal of American Studies, identifies examples of anti–East Asian racism in some of Heinlein's works, particularly Sixth Column.[108]
Heinlein summed up his attitude toward people of any race in his essay "Our Noble, Essential Decency" thus:
And finally, I believe in my whole race—yellow, white, black, red, brown—in the honesty, courage, intelligence, durability, and goodness of the overwhelming majority of my brothers and sisters everywhere on this planet. I am proud to be a human being.
In keeping with his belief in individualism, his work for adults—and sometimes even his work for juveniles—often portrays both the oppressors and the oppressed with considerable ambiguity. Heinlein believed that individualism was incompatible with ignorance. He believed that an appropriate level of adult competence was achieved through a wide-ranging education, whether this occurred in a classroom or not. In his juvenile novels, more than once a character looks with disdain at a student's choice of classwork, saying, "Why didn't you study something useful?"[109] In Time Enough for Love, Lazarus Long gives a long list of capabilities that anyone should have, concluding, "Specialization is for insects." The ability of the individual to create himself is explored in stories such as I Will Fear No Evil, "'—All You Zombies—'", and "By His Bootstraps".
Heinlein claimed to have writtenStarship Troopersin response to "calls for the unilateral ending of nuclear testing by the United States".[110]Heinlein suggests in the book that the Bugs are a good example of Communism being something that humans cannot successfully adhere to, since humans are strongly defined individuals, whereas the Bugs, being a collective, can all contribute to the whole without consideration of individual desire.[111]
A common theme in Heinlein's writing is his frequent use of the "competent man", astock characterwho exhibits a very wide range of abilities and knowledge, making him a form ofpolymath. This trope was notably common in 1950s U.S. science fiction.[112]While Heinlein was not the first to use such a character type, the heroes and heroines of his fiction (withJubal Harshawbeing a prime example) generally have a wide range of abilities, and one of Heinlein's characters,Lazarus Long, gives a wide summary of requirements:
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
Predecessors of Heinlein's competent heroes include the protagonists ofGeorge Bernard Shaw, like Henry Higgins inPygmalionand Caesar inCaesar and Cleopatra, as well as the citizen soldiers inRudyard Kipling's "The Army of a Dream".
For Heinlein, personal liberation includedsexual liberation, andfree lovewas a major subject of his writing starting in 1939, withFor Us, the Living. During his early period, Heinlein's writing for younger readers needed to take account of both editorial perceptions of sexuality in his novels, and potential perceptions among the buying public; as critic William H. Patterson has put it, his dilemma was "to sort out what was really objectionable from what was only excessive over-sensitivity to imaginary librarians".[115]
By his middle period, sexual freedom and the elimination of sexual jealousy became a major theme; for instance, inStranger in a Strange Land(1961), the progressively minded but sexually conservative reporter, Ben Caxton, acts as adramatic foilfor the less parochial characters,Jubal Harshawand Valentine Michael Smith (Mike). Another of the main characters, Jill, is homophobic, and says that "nine times out of ten, if a girl gets raped it's partly her own fault."[116]
According to Gary Westfahl,
Heinlein is a problematic case for feminists; on the one hand, his works often feature strong female characters and vigorous statements that women are equal to or even superior to men; but these characters and statements often reflect hopelessly stereotypical attitudes about typical female attributes. It is disconcerting, for example, that inExpanded UniverseHeinlein calls for a society where all lawyers and politicians are women, essentially on the grounds that they possess a mysterious feminine practicality that men cannot duplicate.[117]
In books written as early as 1956, Heinlein dealt with incest and the sexual nature of children. Many of his books, including Time for the Stars, Glory Road, Time Enough for Love, and The Number of the Beast, dealt explicitly or implicitly with incest, sexual feelings and relations between adults, children, or both.[118] The treatment of these themes includes the romantic relationship and eventual marriage of two characters in The Door into Summer who met when one was a 30-year-old engineer and the other was an 11-year-old girl, and who eventually married when time travel rendered the girl an adult while the engineer aged minimally, and the more overt intra-familial incest in To Sail Beyond the Sunset and Time Enough for Love. Heinlein often posed situations where the nominal purpose of sexual taboos was irrelevant to a particular situation, due to future advances in technology. For example, in Time Enough for Love Heinlein describes a brother and sister (Joe and Llita) who were mirror twins, being complementary diploids with entirely disjoint genomes, and thus not at increased risk for unfavorable gene duplication due to consanguinity. In this instance, Llita and Joe were props used to explore the concept of incest, where the usual objection to incest—heightened risk of genetic defect in their children—was not a consideration.[119] Peers such as L. Sprague de Camp and Damon Knight have commented critically on Heinlein's portrayal of incest and pedophilia in a lighthearted and even approving manner.[118] Diane Parkin-Speer suggests that Heinlein's intent seems more to provoke the reader and to question sexual norms than to promote any particular sexual agenda.[120]
InTo Sail Beyond the Sunset, Heinlein has the main character,Maureen, state that the purpose ofmetaphysicsis to ask questions: "Why are we here?" "Where are we going after we die?" (and so on); and that you are not allowed to answer the questions.Askingthe questions is the point of metaphysics, butansweringthem is not, because once you answer this kind of question, you cross the line into religion. Maureen does not state a reason for this; she simply remarks that such questions are "beautiful" but lack answers. Maureen's son/lover Lazarus Long makes a related remark inTime Enough for Love. In order for us to answer the "big questions" about the universe, Lazarus states at one point, it would be necessary to standoutsidethe universe.
During the 1930s and 1940s, Heinlein was deeply interested in Alfred Korzybski's general semantics and attended a number of seminars on the subject. His views on epistemology seem to have flowed from that interest, and his fictional characters continue to express Korzybskian views to the very end of his writing career. Many of his stories, such as Gulf, If This Goes On—, and Stranger in a Strange Land, depend strongly on the premise, related to the well-known Sapir–Whorf hypothesis, that by using a correctly designed language, one can change or improve oneself mentally, or even realize untapped potential (as in the case of Joe in Gulf—whose last name may be Greene, Gilead or Briggs).[121]
When Ayn Rand's novel The Fountainhead was published, Heinlein was very favorably impressed, as quoted in Grumbles from the Grave, and he mentioned John Galt, the hero in Rand's Atlas Shrugged, as a heroic archetype in The Moon Is a Harsh Mistress. He was also strongly affected by the religious philosopher P. D. Ouspensky.[19] Freudianism and psychoanalysis were at the height of their influence during the peak of Heinlein's career, and stories such as Time for the Stars indulged in psychological theorizing.
However, he was skeptical about Freudianism, especially after a struggle with an editor who insisted on reading Freudian sexual symbolism into hisjuvenile novels. Heinlein was fascinated by thesocial creditmovement in the 1930s. This is shown inBeyond This Horizonand in his 1938 novelFor Us, the Living: A Comedy of Customs, which was finally published in 2003, long after his death.
On that theme, the phrase "pay it forward", though it was already in occasional use as a quotation, was popularized by Robert A. Heinlein in his bookBetween Planets,[122]published in 1951:
The banker reached into the folds of his gown, pulled out a single credit note. "But eat first—a full belly steadies the judgment. Do me the honor of accepting this as our welcome to the newcomer."
His pride said no; his stomach said YES! Don took it and said, "Uh, thanks! That's awfully kind of you. I'll pay it back, first chance."
"Instead, pay it forward to some other brother who needs it."
He referred to this in a number of other stories, although sometimes just saying to pay a debt back by helping others, as in one of his last works, Job: A Comedy of Justice.
Heinlein was a mentor to Ray Bradbury, giving him help and quite possibly passing on the concept; the connection was made famous by the publication of a letter from Bradbury to Heinlein thanking him.[123] In Bradbury's novel Dandelion Wine, published in 1957, the main character Douglas Spaulding reflects on his life being saved by Mr. Jonas, the Junkman:
How do I thank Mr. Jonas, he wondered, for what he's done? How do I thank him, how pay him back? No way, no way at all. You just can't pay. What then? What? Pass it on somehow, he thought, pass it on to someone else. Keep the chain moving. Look around, find someone, and pass it on. That was the only way…
Bradbury has also advised that writers he has helped thank him by helping other writers.[124]
Heinlein both preached and practiced this philosophy; now theHeinlein Society, a humanitarian organization founded in his name, does so, attributing the philosophy to its various efforts, including Heinlein for Heroes, the Heinlein Society Scholarship Program, and Heinlein Society blood drives.[125]Author Spider Robinson made repeated reference to the doctrine, attributing it to his spiritual mentor Heinlein.[126]
Heinlein is usually identified, along withIsaac AsimovandArthur C. Clarke, as one of the three masters of science fiction to arise in the so-calledGolden Age of science fiction, associated withJohn W. Campbelland his magazineAstounding.[127]In the 1950s he was a leader in bringing science fiction out of the low-paying and less prestigious "pulpghetto". Most of his works, including short stories, have been continuously in print in many languages since their initial appearance and are still available as new paperbacks decades after his death.
He was at the top of his form during, and himself helped to initiate, the trend towardsocial science fiction, which went along with a general maturing of the genre away fromspace operato a more literary approach touching on such adult issues as politics andhuman sexuality. In reaction to this trend,hard science fictionbegan to be distinguished as a separate subgenre, but paradoxically Heinlein is also considered a seminal figure in hard science fiction, due to his extensive knowledge of engineering and the careful scientific research demonstrated in his stories. Heinlein himself stated—with obvious pride—that in the days before pocket calculators, he and his wife Virginia once worked for several days on a mathematical equation describing an Earth–Mars rocket orbit, which was then subsumed in a single sentence of the novelSpace Cadet.
Heinlein is often credited with bringing serious writing techniques to the genre of science fiction. For example, when writing about fictional worlds, previous authors were often limited by the reader's existing knowledge of a typical "space opera" setting, leading to a relatively low creativity level: The same starships, death rays, and horrifying rubbery aliens becoming ubiquitous.[citation needed]This was necessary unless the author was willing to go into longexpositionsabout the setting of the story, at a time when the word count was at a premium in SF.[citation needed]
But Heinlein used a technique called "indirect exposition", perhaps first introduced by Rudyard Kipling in his own science fiction venture, the Aerial Board of Control stories. Kipling had picked this up during his time in India, using it to avoid bogging down his stories set in India with explanations for his English readers.[128] This technique, mentioning details in a way that lets the reader infer more about the universe than is actually spelled out,[129] became a trademark rhetorical technique of both Heinlein and writers influenced by him. Heinlein was significantly influenced by Kipling beyond this, for example quoting him in "On the Writing of Speculative Fiction".[130]
Likewise, Heinlein's name is often associated with thecompetent hero, a character archetype who, though he or she may have flaws and limitations, is a strong, accomplished person able to overcome any soluble problem set in their path. They tend to feel confident overall, have a broad life experience and set of skills, and not give up when the going gets tough. This style influenced not only the writing style of a generation of authors, but even their personal character.Harlan Ellisononce said, "Very early in life when I read Robert Heinlein I got the thread that runs through his stories—the notion of the competent man ... I've always held that as my ideal. I've tried to be a very competent man."[131]
When fellow writers, or fans, wrote Heinlein asking for writing advice, he famously gave out his own list of rules for becoming a successful writer: you must write; you must finish what you write; you must refrain from rewriting, except to editorial order; you must put the work on the market; and you must keep it on the market until it is sold.
About which he said:
The above five rules really have more to do with how to write speculative fiction than anything said above them. But they are amazingly hard to follow—which is why there are so few professional writers and so many aspirants, and which is why I am not afraid to give away the racket![132]
Heinlein later published an entire article, "On the Writing of Speculative Fiction", which included his rules, and from which the above quote is taken. When he says "anything said above them", he refers to his other guidelines. For example, he describes most stories as fitting into one of a handful of basic categories: the gadget story and the human-interest story, the latter traditionally divided into the boy-meets-girl story, the Little Tailor, and the Man-Who-Learned-Better.
In the article, Heinlein proposes that most stories fit into either the gadget story or the human interest story, which is itself subdivided into the three latter categories. He also creditsL. Ron Hubbardas having identified "The Man-Who-Learned-Better".
Heinlein has had a pervasive influence on other science fiction writers. In a 1953 poll of leading science fiction authors, he was cited more frequently as an influence than any other modern writer.[133]Critic James Gifford writes that
Although many other writers have exceeded Heinlein's output, few can claim to match his broad and seminal influence. Scores of science fiction writers from the prewar Golden Age through the present day loudly and enthusiastically credit Heinlein for blazing the trails of their own careers, and shaping their styles and stories.
Heinlein gave Larry Niven and Jerry Pournelle extensive advice on a draft manuscript of The Mote in God's Eye.[134] He contributed a cover blurb: "Possibly the finest science fiction novel I have ever read." In their novel Footfall, Niven and Pournelle included Robert A. Heinlein as a character under the name "Bob Anson." Anson in the novel is a respected and well-known science-fiction author. Writer David Gerrold, responsible for creating the tribbles in Star Trek, also credited Heinlein as the inspiration for his Dingilliad series of novels. Gregory Benford refers to his novel Jupiter Project as a Heinlein tribute. Similarly, Charles Stross says his Hugo Award-nominated novel Saturn's Children is "a space opera and late-period Robert A. Heinlein tribute",[135] referring to Heinlein's Friday.[136] The theme and plot of Kameron Hurley's novel The Light Brigade clearly echo those of Heinlein's Starship Troopers.[137]
Even outside the science fiction community, several words and phrases coined or adopted by Heinlein have passed into common English usage, among them "grok", "waldo", and "TANSTAAFL" ("there ain't no such thing as a free lunch").
In 1962, Oberon Zell-Ravenheart (then still using his birth name, Tim Zell) founded the Church of All Worlds, a Neopagan religious organization modeled in many ways (including its name) after the treatment of religion in the novel Stranger in a Strange Land. This spiritual path included several ideas from the book, including non-mainstream family structures, social libertarianism, water-sharing rituals, an acceptance of all religious paths by a single tradition, and the use of several terms such as "grok", "Thou art God", and "Never Thirst". Though Heinlein was neither a member nor a promoter of the Church, there was a frequent exchange of correspondence between Zell and Heinlein, and he was a paid subscriber to their magazine, Green Egg. This Church still exists as a 501(c)(3) religious organization incorporated in California, with membership worldwide, and it remains an active part of the neopagan community today.[139] Zell-Ravenheart's wife, Morning Glory, coined the term polyamory in 1990;[140] the polyamory movement likewise counts Heinlein's ideas among its roots.
Heinlein was influential in makingspace explorationseem to the public more like a practical possibility. His stories in publications such asThe Saturday Evening Posttook a matter-of-fact approach to their outer-space setting, rather than the "gee whiz" tone that had previously been common. The documentary-like filmDestination Moonadvocated aSpace Racewith an unspecified foreign power almost a decade before such an idea became commonplace, and was promoted by an unprecedented publicity campaign in print publications. Many of the astronauts and others working in the U.S. space program grew up on a diet of the Heinleinjuveniles,[original research?]best evidenced by the naming of a crater on Mars after him, and a tribute interspersed by theApollo 15astronauts into their radio conversations while on the moon.[141]
Heinlein was also a guest commentator (along with fellow SF author Arthur C. Clarke) for Walter Cronkite's coverage of the Apollo 11 Moon landing.[142] He remarked to Cronkite during the landing: "This is the greatest event in human history, up to this time. This is—today is New Year's Day of the Year One."[143]
Heinlein has inspired many transformational figures in business and technology includingLee Felsenstein, the designer of the first mass-produced portable computer,[144]Marc Andreessen,[145]co-author of the first widely-used web browser, andElon Musk, CEO ofTeslaand founder ofSpaceX.[146]
The Heinlein Society was founded by Virginia Heinlein on behalf of her husband, to "pay forward" the legacy of the writer to future generations of "Heinlein's Children". The foundation's programs include those noted earlier: Heinlein for Heroes, the Heinlein Society Scholarship Program, and blood drives.
The Heinlein Society also established the Robert A. Heinlein Award in 2003 "for outstanding published works in science fiction and technical writings to inspire the human exploration of space".[147][148]
In his lifetime, Heinlein received four Hugo Awards, for Double Star, Starship Troopers, Stranger in a Strange Land, and The Moon Is a Harsh Mistress, and was nominated for four Nebula Awards, for The Moon Is a Harsh Mistress, Friday, Time Enough for Love, and Job: A Comedy of Justice.[149] He was also given seven Retro-Hugos: two for best novel, Beyond This Horizon and Farmer in the Sky; three for best novella, If This Goes On..., Waldo, and The Man Who Sold the Moon; one for best novelette, "The Roads Must Roll"; and one for best dramatic presentation, Destination Moon.[150][151][152]
Heinlein was also nominated for sixHugo Awardsfor the worksHave Space Suit: Will Travel,Glory Road,Time Enough for Love,Friday,Job: A Comedy of JusticeandGrumbles from the Grave, as well as sixRetro Hugo AwardsforMagic, Inc., "Requiem", "Coventry", "Blowups Happen", "Goldfish Bowl", and "The Unpleasant Profession of Jonathan Hoag".
Heinlein won theLocus Awardfor "All-Time Favorite Author" in 1973, and for "All-Time Best Author" in 1988.[153][154]
The Science Fiction Writers of America named Heinlein its first Grand Master in 1974, with the award presented in 1975. Officers and past presidents of the Association select a living writer for lifetime achievement (the award is now given annually and includes fantasy literature).[16][17]
In 1977, Heinlein was awarded the Inkpot Award,[155] and in 1985, he received the Eisner Awards' "Bob Clampett Humanitarian Award".[156]
Main-beltasteroid6312 Robheinlein(1990 RH4), discovered on September 14, 1990, byH. E. Holtat Palomar, was named after him.[157]
In 1994 theInternational Astronomical UnionnamedHeinlein crateron Mars in his honor.[158][159]
TheScience Fiction and Fantasy Hall of Fameinducted Heinlein in 1998.[160]
In 2001 the United States Naval Academy created the Robert A. Heinlein Chair in Aerospace Engineering.[161]
Heinlein was the Ghost of Honor at the 2008World Science Fiction Conventionin Denver, Colorado, which held several panels on his works. Nearly seventy years earlier, he had been a Guest of Honor at the same convention.[162]
In 2016, after an intensive online campaign to win a vote for the opening, Heinlein was inducted into theHall of Famous Missourians.[163]His bronze bust, created by Kansas City sculptorE. Spencer Schubert, is on permanent display in theMissouri State CapitolinJefferson City.[164]
The Libertarian Futurist Society has honored eight of Heinlein's novels and two short stories with their Hall of Fame award.[165] The first two were given during his lifetime for The Moon Is a Harsh Mistress and Stranger in a Strange Land. Five more were awarded posthumously for Red Planet, Methuselah's Children, Time Enough for Love, and the short stories "Requiem" and "Coventry".
|
https://en.wikipedia.org/wiki/Robert_A._Heinlein
|
A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2¹⁰ = 1024), mebi (Mi, 2²⁰ = 1,048,576), and gibi (Gi, 2³⁰ = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files.
The binary prefixes "kibi", "mebi", etc. were defined in 1999 by theInternational Electrotechnical Commission(IEC), in theIEC 60027-2standard(Amendment 2). They were meant to replace themetric (SI)decimal powerprefixes, such as "kilo" (k, 103= 1000), "mega" (M, 106=1000000) and "giga" (G, 109=1000000000),[1]that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold2 × 220=2097152bytes, instead of2 × 106=2000000.
On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB" holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240; and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2,470,000,000 or to 2.3 × 10^9 = 2,300,000,000, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits.[2][3] The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system,[1]: p.121 the US NIST,[4][5] and the European Union.
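The size of the gap between the two readings can be checked with a short calculation. The following Python snippet is purely illustrative (it is not part of any standard or cited source); it evaluates the drive and file sizes mentioned above under both interpretations:

```python
# Illustrative sketch: the same nominal size interpreted with decimal (SI)
# prefixes and with binary prefixes.
DECIMAL = 10 ** 9          # 1 "GB" in the SI sense
BINARY = 2 ** 30           # 1 GiB

for label, value in [("10 GB drive", 10), ("2.3 GB file", 2.3)]:
    dec_bytes = value * DECIMAL
    bin_bytes = value * BINARY
    print(f"{label}: {dec_bytes:,.0f} bytes (decimal) vs "
          f"{bin_bytes:,.0f} bytes (binary), "
          f"difference {bin_bytes - dec_bytes:,.0f} bytes")
```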
Prior to the 1999 IEC standard, some industry organizations, such as theJoint Electron Device Engineering Council(JEDEC), noted the common use of the termskilobyte,megabyte, andgigabyte, and the corresponding symbolsKB,MB, andGBin the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such asmagnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes in this sense has persisted in some fields.
In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes ronna for 1000^9 and quetta for 1000^10.[6][7] In 2025, the prefixes robi (Ri, 1024^9) and quebi (Qi, 1024^10) were adopted by the IEC.[8]
The relative difference between the binary and decimal interpretations grows with the size of the prefix, from 2.4% for kibi vs. kilo to nearly 27% for quebi vs. quetta (taking the decimal value as the base).
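These figures can be reproduced with the formula (1024^n / 1000^n) − 1, where n is the power of the prefix. A small, purely illustrative Python sketch:

```python
# Illustrative sketch: relative difference between the binary and decimal
# interpretations of each prefix pair, computed as (1024**n / 1000**n) - 1.
prefixes = ["kibi/kilo", "mebi/mega", "gibi/giga", "tebi/tera",
            "pebi/peta", "exbi/exa", "zebi/zetta", "yobi/yotta",
            "robi/ronna", "quebi/quetta"]
for n, name in enumerate(prefixes, start=1):
    diff = 1024 ** n / 1000 ** n - 1
    print(f"{name:>12}: {diff:6.1%}")   # 2.4% for kibi/kilo ... 26.8% for quebi/quetta
```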
The originalmetric systemadopted by France in 1795 included two binary prefixes nameddouble-(2×) anddemi-(1/2×).[9]However, these were not retained when theSI prefixeswere internationally adopted by the 11thCGPM conferencein 1960.
Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10).[10] For example, the IBM 701 (1952) used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system and could address ten thousand 7-bit words.
By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of theiraddress linesmap to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.
While early documentation specified those memory sizes as exact numbers such as 4096, 8192, or 16384 units (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10,[1] to mean instead the nearest powers of two; namely, 2^10 = 1024, 2^20 = 1024^2, 2^30 = 1024^3, etc.[11][12] The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings.[13][14] The symbol for 2^10 = 1024 could be written either in lower case ("k")[15][16][17] or in uppercase ("K"). The latter was often used intentionally to indicate the binary rather than decimal meaning.[18] This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964)[18] and of the IBM System/370 (1972),[19] of the CDC 7600,[20] of the DEC PDP-11/70 (1975)[21] and of the DEC VAX-11/780 (1977).[citation needed]
In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated "2^16 = 64 × 1024 = 65,536 words" as "65K words" (rather than "64K" or "66K"),[22] while the documentation of the HP 21MX real-time computer (1974) denoted 3 × 2^16 = 192 × 1024 = 196,608 as "196K" and 2^20 = 1,048,576 as "1M".[23]
These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory.[24] The use of SI prefixes, and the use of "K" instead of "k", remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2 or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 1024^2 = 536,870,912 bytes, rather than 512,000,000.
In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotatingdisk driveis organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, theIBM 350(1956), had 50 physical disk platters containing a total of50000sectors of 100 characters each, for a total quoted capacity of 5 million characters.[25]
Moreover, since the 1960s, many disk drives used IBM'sdisk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single13030-byte block in each of its 808 × 19 tracks.
Decimal megabytes were used for disk capacity by the CDC in 1974.[26] The Seagate ST-412,[27] one of several types installed in the IBM PC/XT,[28] had a capacity of 10,027,008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as "10 MB".[29] Similarly, a "300 GB" hard drive can be expected to offer only slightly more than 300 × 10^9 = 300,000,000,000 bytes, not 300 × 2^30 (which would be about 322 × 10^9 bytes or "322 GB"). The first terabyte (SI prefix, 1,000,000,000,000 bytes) hard disk drive was introduced in 2007.[30] Decimal prefixes were generally used by information processing publications when comparing hard disk capacities.[31]
Some programs and operating systems, such asMicrosoft Windows, still use "MB" and "GB" to denote binary prefixes even when displaying disk drive capacities and file sizes, as didClassic Mac OS. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as "9.56 MB", and that of a "300 GB" drive as "279.4 GB". Some operating systems, such asMac OS X,[32]Ubuntu,[33]andDebian,[34]have been updated to use "MB" and "GB" to denote decimal prefixes when displaying disk drive capacities and file sizes. Some manufacturers, such asSeagate Technology, have released recommendations stating that properly-written software and documentation should specify clearly whether prefixes such as "K", "M", or "G" mean binary or decimal multipliers.[35][36]
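The numbers reported by such software follow directly from dividing the manufacturer's decimal byte count by powers of 1024. A minimal illustrative sketch (the helper name as_binary_units is invented for this example):

```python
# Illustrative sketch: why a "10 MB" drive shows up as "9.56 MB" and a
# "300 GB" drive as "279.4 GB" when software divides by powers of 1024.
def as_binary_units(nbytes, unit_power):
    """Express a byte count in units of 1024**unit_power (2 = 'MB', 3 = 'GB')."""
    return nbytes / 1024 ** unit_power

print(round(as_binary_units(10_027_008, 2), 2))     # 9.56  (the "10 MB" ST-412)
print(round(as_binary_units(300 * 10 ** 9, 3), 1))  # 279.4 (a "300 GB" drive)
```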
Floppy disks used a variety of formats, and their capacities were usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities.
The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits.[37][38]
The 5.25-inch diskette sold with theIBM PC ATcould hold1200 × 1024=1228800bytes, and thus was marketed as "1200 KB" with the binary sense of "KB".[39]However, the capacity was also quoted "1.2 MB",[40]which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was1.2288 MB(decimal) or1.171875MiB(binary).
The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, and 35 tracks per side, for a total capacity of 116,480 bytes. It was later upgraded to 16 sectors per track, giving a total of 140 × 2^10 = 143,360 bytes, which was described as "140KB" using the binary sense of "K".
The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted as "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 2^10 = 1,474,560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (which would be 1.4 × 2^20 ≈ 1,468,000 bytes). User complaints forced both Apple[citation needed] and Microsoft[41] to issue support bulletins explaining the discrepancy.
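The arithmetic behind the "1.44 MB" figure is easy to reproduce; the following purely illustrative snippet shows that 1440 × 1024 bytes is neither 1.44 decimal megabytes nor 1.44 MiB:

```python
# Illustrative sketch: the "1.44 MB" floppy figure is 1440 x 1024 bytes,
# which matches neither the decimal nor the binary definition of "MB".
capacity = 1440 * 1024           # 1,474,560 bytes
print(capacity)                  # 1474560
print(capacity / 10 ** 6)        # 1.47456  (decimal megabytes)
print(capacity / 2 ** 20)        # 1.40625  (MiB)
```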
When specifying the capacities of optical compact discs, "megabyte" and "MB" usually meant 1024^2 bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB, which is approximately 730 MB (decimal).[42]
On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) media have been generally specified in decimal gigabytes ("GB"), that is, 1000^3 bytes. In particular, a typical "4.7 GB" DVD has a nominal capacity of about 4.7 × 10^9 bytes, which is about 4.38 GiB.[43]
Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity,[44][45]although the actual capacity would depend on theblock sizeused when recording.
Computerclockfrequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the originalIBM PCwas4.77 MHz, that is4770000Hz.
Similarly, digital information transfer rates are quoted using decimal prefixes. The Parallel ATA "100 MB/s" disk interface can transfer 100,000,000 bytes per second, and a "56 Kb/s" modem transmits 56,000 bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes.[35] The standard sampling rate of music compact discs, quoted as 44.1 kHz, is indeed 44,100 samples per second.[citation needed] A "1 Gb/s" Ethernet interface can receive or transmit up to 10^9 bits per second, or 125,000,000 bytes per second within each packet. A "56k" modem can encode or decode up to 56,000 bits per second.
Decimal SI prefixes are also generally used for processor–memory data transfer speeds. A PCI-X bus with a 66 MHz clock and a width of 64 bits can transfer 66,000,000 64-bit words per second, or 4,224,000,000 bit/s = 528,000,000 B/s, which is usually quoted as 528 MB/s. A PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of 200 MHz, has a bandwidth of 200,000,000 × 8 × 2 = 3,200,000,000 B/s, which would be quoted as 3.2 GB/s.
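The quoted bandwidths follow from straightforward decimal arithmetic. A small illustrative sketch (the function name is descriptive only, not an established API) reproduces both figures:

```python
# Illustrative sketch of the decimal arithmetic behind quoted bus bandwidths.
def bus_bandwidth_bytes(clock_hz, width_bits, transfers_per_cycle=1):
    """Peak transfer rate in bytes per second."""
    return clock_hz * (width_bits // 8) * transfers_per_cycle

pcix = bus_bandwidth_bytes(66_000_000, 64)          # 528,000,000 B/s
pc3200 = bus_bandwidth_bytes(200_000_000, 64, 2)    # 3,200,000,000 B/s (DDR)
print(pcix / 10 ** 6, "MB/s")    # 528.0 MB/s
print(pc3200 / 10 ** 9, "GB/s")  # 3.2 GB/s
```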
The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning either powers of 1000 or (in computer contexts) powers of 1024, has been recorded in popular dictionaries,[46][47][48] and even in some obsolete standards, such as ANSI/IEEE 1084-1986,[49] ANSI/IEEE 1212-1991,[50] IEEE 610.10-1994,[51] and IEEE 100-2000.[52] Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b").
Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix "di" and the symbol suffix or subscript "2" to mean "binary"; so that, for example, "one dikilobyte" would mean "1024 bytes", denoted "K₂B" or, where subscripts are unavailable, "K2B".[53]
In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on.[54] (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 or bK^2 for 1024^2, though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day.[55] Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, binary powers of two be indicated by the letter B followed by the exponent, similar to E in decimal scientific notation; thus one would write 3B20 for 3 × 2^20.[56] This convention is still used on some calculators to present binary floating-point numbers today.[57]
In 1969,Donald Knuth, who uses decimal notation like 1 MB = 1000 kB,[58]proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB.[59]
The ambiguous meanings of "kilo", "mega", "giga", etc., have caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software, which used them in the binary sense, as the Apple Macintosh did starting in 1984. For example, a hard drive marketed as "1 TB" could be reported as having only "931 GB". The confusion was compounded by the fact that RAM manufacturers used the binary sense too.
The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.
Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.[60][61][62][63]
On 20 February 2004,Willem Vroegh filed a lawsuitagainst Lexar Media, Dane–Elec Memory,Fuji Photo Film USA,Eastman KodakCompany, Kingston Technology Company, Inc.,MemorexProducts, Inc.;PNY TechnologiesInc.,SanDisk Corporation,Verbatim Corporation, andViking Interworksalleging that their descriptions of the capacity of theirflash memorycards were false and misleading.
Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 10242for megabyte and 10243for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.[64]
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites.[65]The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".[66]
On 7 July 2005, an action entitledOrin Safier v.Western DigitalCorporation, et al.was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.[67]
Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.[68]
Western Digital offered to compensate customers with agratisdownload of backup and recovery software that they valued at US$30. They also paid$500000in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising.[69][70][71]Western Digital had this footnote in their settlement. "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items."[72]
A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed againstSeagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with gratis backup software or a 5% refund on the cost of the drives.[73]
On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant,SanDisk, upholding its use of "GB" to mean1000000000bytes.[74]
In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb",[75] for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500 × 10^9 bytes, exactly or approximately, rather than 500 × 2^30 (= 536,870,912,000) or 0.5 × 2^40 (= 549,755,813,888).
The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by theInstitute of Electrical and Electronics Engineers(IEEE) in collaboration with theInternational Organization for Standardization(ISO) andInternational Electrotechnical Commission(IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively.[76]
In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2 Amendment 2).[77][78][79] The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts.[80]
The harmonized ISO/IEC standard IEC 80000-13:2025 cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities.[81] In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on.
The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent1024bits (210bits), which is 1 kibibit."[82]
The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.[83]
The IEC standard binary prefixes are supported by other standardization bodies and technical organizations.
The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "prefixes for binary multiples" and has a web page[84] documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee.[5] NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.[85]
As of 2014, the microelectronics industry standards bodyJEDECdescribes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols "K", "M" and "G" are still commonly used with the binary sense for memory sizes.[86][87]
On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period.[88][89] As of April 2008, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum[90] or Computer.[91]
TheInternational Bureau of Weights and Measures(BIPM), which maintains theInternational System of Units(SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI.[92][1]
TheSociety of Automotive Engineers(SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes.[93]
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03.[94]The European Union (EU) has required the use of the IEC binary prefixes since 2007.[95]
Some computer industry participants, such as Hewlett-Packard (HP),[96]and IBM[97][98]have adopted or recommended IEC binary prefixes as part of their general documentation policies.
As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and memory modules, and of the cache of computer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2^20 bytes, not 512 × 10^6 bytes.[99][100][101][102]
JEDEC continues to include the customary binary definitions of "kilo", "mega", and "giga" in the documentTerms, Definitions, and Letter Symbols,[103]and, as of 2010[update], still used those definitions in theirmemory standards.[104][105][106][107][108]
On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such asdisk drives,[109][110][111][112][113]solid state drives, andUSB flash drives,[63]except for someflash memorychips intended to be used asEEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion.[114]The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds.[citation needed]
Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.)[115][116][117][118][119][120]or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as theGNUlscommand, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning.[121]
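The two display conventions such programs offer amount to choosing a divisor of 1024 or 1000 and the corresponding prefix symbols. A rough illustrative sketch of such a formatter (the function human_size is invented for this example and is not the algorithm used by any particular program):

```python
# Illustrative sketch of formatting a byte count with either IEC binary
# prefixes (KiB, MiB, ...) or SI decimal prefixes (kB, MB, ...).
def human_size(nbytes, binary=True):
    base = 1024 if binary else 1000
    units = ["B", "Ki", "Mi", "Gi", "Ti"] if binary else ["B", "k", "M", "G", "T"]
    value = float(nbytes)
    for unit in units:
        if value < base or unit == units[-1]:
            suffix = "B" if unit == "B" else unit + "B"
            return f"{value:.1f} {suffix}"
        value /= base            # move up to the next prefix

size = 1_474_560                 # the "1.44 MB" floppy capacity
print(human_size(size, binary=True))    # '1.4 MiB'
print(human_size(size, binary=False))   # '1.5 MB'
```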
While the binary prefixes are predominantly used with units of data, bits and bytes, they may be used with other units of measure. For example, in signal processing it may be convenient to use a binary prefix with the unit of frequency, hertz (Hz), to produce a unit such as the kibihertz (KiHz), which is equal to 1024 Hz.[122][123]
|
https://en.wikipedia.org/wiki/Binary_prefix
|
Arootkitis a collection ofcomputer software, typically malicious, designed to enable access to acomputeror an area of itssoftwarethat is not otherwise allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software.[1]The termrootkitis acompoundof "root" (the traditional name of theprivileged accountonUnix-likeoperating systems) and the word "kit" (which refers to the software components that implement the tool).[2]The term "rootkit" has negative connotations through its association withmalware.[1]
Rootkit installation can be automated, or anattackercan install it after having obtained root or administrator access.[3]Obtaining this access is a result of direct attack on a system, i.e. exploiting a vulnerability (such asprivilege escalation) or apassword(obtained bycrackingorsocial engineeringtactics like "phishing"). Once installed, it becomes possible to hide the intrusion as well as to maintain privileged access. Full control over a system means that existing software can be modified, including software that might otherwise be used to detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that is intended to find it. Detection methods include using an alternative and trustedoperating system, behavior-based methods, signature scanning, difference scanning, andmemory dumpanalysis. Removal can be complicated or practically impossible, especially in cases where the rootkit resides in thekernel; reinstallation of the operating system may be the only available solution to the problem. When dealing withfirmwarerootkits, removal may requirehardwarereplacement, or specialized equipment.
The termrootkit,rkit, orroot kitoriginally referred to a maliciously modified set of administrative tools for aUnix-likeoperating systemthat granted "root" access.[4]If an intruder could replace the standard administrative tools on a system with a rootkit, the intruder could obtain root access over the system whilst simultaneously concealing these activities from the legitimatesystem administrator. These first-generation rootkits were trivial to detect by using tools such asTripwirethat had not been compromised to access the same information.[5][6]Lane Davis and Steven Dake wrote the earliest known rootkit in 1990 forSun Microsystems'SunOSUNIX operating system.[7]In the lecture he gave upon receiving theTuring Awardin 1983,Ken ThompsonofBell Labs, one of the creators ofUnix, theorized about subverting theC compilerin a Unix distribution and discussed the exploit. The modified compiler would detect attempts to compile the Unixlogincommand and generate altered code that would accept not only the user's correct password, but an additional "backdoor" password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler, and would insert the same exploits into the new compiler. A review of the source code for thelogincommand or the updated compiler would not reveal any malicious code.[8]This exploit was equivalent to a rootkit.
The first documentedcomputer virusto target thepersonal computer, discovered in 1986, usedcloakingtechniques to hide itself: theBrain virusintercepted attempts to read theboot sector, and redirected these to elsewhere on the disk, where a copy of the original boot sector was kept.[1]Over time,DOS-virus cloaking methods became more sophisticated. Advanced techniques includedhookinglow-level diskINT 13HBIOSinterruptcalls to hide unauthorized modifications to files.[1]
The first malicious rootkit for theWindows NToperating system appeared in 1999: a trojan calledNTRootkitcreated byGreg Hoglund.[9]It was followed byHackerDefenderin 2003.[1]The first rootkit targetingMac OS Xappeared in 2009,[10]while theStuxnetworm was the first to targetprogrammable logic controllers(PLC).[11]
In 2005, Sony BMG published CDs with copy protection and digital rights management software called Extended Copy Protection, created by software company First 4 Internet. The software included a music player but silently installed a rootkit which limited the user's ability to access the CD.[12] Software engineer Mark Russinovich, who created the rootkit detection tool RootkitRevealer, discovered the rootkit on one of his computers.[1] The ensuing scandal raised the public's awareness of rootkits.[13] To cloak itself, the rootkit hid any file starting with "$sys$" from the user. Soon after Russinovich's report, malware appeared which took advantage of the existing rootkit on affected systems.[1] One BBC analyst called it a "public relations nightmare."[14] Sony BMG released patches to uninstall the rootkit, but the patches themselves exposed users to an even more serious vulnerability.[15] The company eventually recalled the CDs. In the United States, a class-action lawsuit was brought against Sony BMG.[16]
TheGreek wiretapping case 2004–05, also referred to as Greek Watergate,[17]involved the illegaltelephone tappingof more than 100mobile phoneson theVodafone Greecenetwork belonging mostly to members of theGreekgovernment and top-ranking civil servants. The taps began sometime near the beginning of August 2004 and were removed in March 2005 without discovering the identity of the perpetrators. The intruders installed a rootkit targeting Ericsson'sAXE telephone exchange. According toIEEE Spectrum, this was "the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch."[18]The rootkit was designed to patch the memory of the exchange while it was running, enablewiretappingwhile disabling audit logs, patch the commands that list active processes and active data blocks, and modify the data blockchecksumverification command. A "backdoor" allowed an operator withsysadminstatus to deactivate the exchange's transaction log, alarms and access commands related to the surveillance capability.[18]The rootkit was discovered after the intruders installed a faulty update, which causedSMStexts to be undelivered, leading to an automated failure report being generated. Ericsson engineers were called in to investigate the fault and discovered the hidden data blocks containing the list of phone numbers being monitored, along with the rootkit and illicit monitoring software.
Modern rootkits do not elevate access,[4]but rather are used to make another software payload undetectable by adding stealth capabilities.[9]Most rootkits are classified asmalware, because the payloads they are bundled with are malicious. For example, a payload might covertly steal userpasswords,credit cardinformation, computing resources, or conduct other unauthorized activities. A small number of rootkits may be considered utility applications by their users: for example, a rootkit might cloak aCD-ROM-emulation driver, allowingvideo gameusers to defeatanti-piracymeasures that require insertion of the original installation media into a physical optical drive to verify that the software was legitimately purchased.
Rootkits and their payloads have many uses.
In some instances, rootkits provide desired functionality and may be installed intentionally on behalf of the computer user.
There are at least five types of rootkit, ranging from those at the lowest level in firmware (with the highest privileges), through to the least privileged user-based variants that operate inRing 3. Hybrid combinations of these may occur spanning, for example, user mode and kernel mode.[26]
User-mode rootkits run in Ring 3, along with other applications running as the user, rather than as low-level system processes.[27] They have a number of possible installation vectors with which to intercept and modify the standard behavior of application programming interfaces (APIs). Some inject a dynamically linked library (such as a .DLL file on Windows, or a .dylib file on Mac OS X) into other processes, and are thereby able to execute inside any target process to spoof it; others with sufficient privileges simply overwrite the memory of a target application, using a variety of injection mechanisms.[27]
...since user mode applications all run in their own memory space, the rootkit needs to perform this patching in the memory space of every running application. In addition, the rootkit needs to monitor the system for any new applications that execute and patch those programs' memory space before they fully execute.
Kernel-mode rootkits run with the highest operating system privileges (Ring 0) by adding code or replacing portions of the core operating system, including both thekerneland associateddevice drivers.[citation needed]Most operating systems support kernel-mode device drivers, which execute with the same privileges as the operating system itself. As such, many kernel-mode rootkits are developed as device drivers or loadable modules, such asloadable kernel modulesinLinuxordevice driversinMicrosoft Windows. This class of rootkit has unrestricted security access, but is more difficult to write.[29]The complexity makes bugs common, and any bugs in code operating at the kernel level may seriously impact system stability, leading to discovery of the rootkit.[29]One of the first widely known kernel rootkits was developed forWindows NT 4.0and released inPhrackmagazine in 1999 byGreg Hoglund.[30][31]Kernel rootkits can be especially difficult to detect and remove because they operate at the samesecurity levelas the operating system itself, and are thus able to intercept or subvert the most trusted operating system operations. Any software, such asantivirus software, running on the compromised system is equally vulnerable.[32]In this situation, no part of the system can be trusted.
A rootkit can modify data structures in the Windows kernel using a method known asdirect kernel object manipulation(DKOM).[33]This method can be used to hide processes. A kernel mode rootkit can also hook theSystem Service Descriptor Table(SSDT), or modify the gates between user mode and kernel mode, in order to cloak itself.[4]Similarly for theLinuxoperating system, a rootkit can modify thesystem call tableto subvert kernel functionality.[34][35]It is common that a rootkit creates a hidden, encrypted filesystem in which it can hide other malware or original copies of files it has infected.[36]Operating systems are evolving to counter the threat of kernel-mode rootkits. For example, 64-bit editions of Microsoft Windows now implement mandatory signing of all kernel-level drivers in order to make it more difficult for untrusted code to execute with the highest privileges in a system.[37]
A kernel-mode rootkit variant called abootkitcan infect startup code like theMaster Boot Record(MBR),Volume Boot Record(VBR), orboot sector, and in this way can be used to attackfull disk encryptionsystems.[38]An example of such an attack on disk encryption is the "evil maid attack", in which an attacker installs a bootkit on an unattended computer. The envisioned scenario is a maid sneaking into the hotel room where the victims left their hardware.[39]The bootkit replaces the legitimateboot loaderwith one under their control. Typically the malware loader persists through the transition toprotected modewhen the kernel has loaded, and is thus able to subvert the kernel.[40][41][42]For example, the "Stoned Bootkit" subverts the system by using a compromisedboot loaderto intercept encryption keys and passwords.[43][self-published source?]In 2010, the Alureon rootkit has successfully subverted the requirement for 64-bit kernel-mode driver signing inWindows 7, by modifying themaster boot record.[44]Although not malware in the sense of doing something the user doesn't want, certain "Vista Loader" or "Windows Loader" software work in a similar way by injecting anACPISLIC (System Licensed Internal Code) table in the RAM-cached version of the BIOS during boot, in order to defeat theWindows Vista and Windows 7 activation process.[citation needed]This vector of attack was rendered useless in the (non-server) versions ofWindows 8, which use a unique, machine-specific key for each system, that can only be used by that one machine.[45]Many antivirus companies provide free utilities and programs to remove bootkits.
Rootkits have been created as Type IIHypervisorsin academia as proofs of concept. By exploiting hardware virtualization features such asIntel VTorAMD-V, this type of rootkit runs in Ring -1 and hosts the target operating system as avirtual machine, thereby enabling the rootkit to intercept hardware calls made by the original operating system.[6]Unlike normal hypervisors, they do not have to load before the operating system, but can load into an operating system before promoting it into a virtual machine.[6]A hypervisor rootkit does not have to make any modifications to the kernel of the target to subvert it; however, that does not mean that it cannot be detected by the guest operating system. For example, timing differences may be detectable inCPUinstructions.[6]The "SubVirt" laboratory rootkit, developed jointly byMicrosoftandUniversity of Michiganresearchers, is an academic example of a virtual-machine–based rootkit (VMBR),[46]whileBlue Pillsoftware is another. In 2009, researchers from Microsoft andNorth Carolina State Universitydemonstrated a hypervisor-layer anti-rootkit calledHooksafe, which provides generic protection against kernel-mode rootkits.[47]Windows 10introduced a new feature called "Device Guard", that takes advantage of virtualization to provide independent external protection of an operating system against rootkit-type malware.[48]
Afirmwarerootkit uses device or platform firmware to create a persistent malware image in hardware, such as arouter,network card,[49]hard drive, or the systemBIOS.[27][50]The rootkit hides in firmware, because firmware is not usually inspected forcode integrity. John Heasman demonstrated the viability of firmware rootkits in bothACPIfirmware routines[51]and in aPCIexpansion cardROM.[52]In October 2008, criminals tampered with Europeancredit-card-reading machines before they were installed. The devices intercepted and transmitted credit card details via a mobile phone network.[53]In March 2009, researchers Alfredo Ortega andAnibal Saccopublished details of aBIOS-level Windows rootkit that was able to survive disk replacement and operating system re-installation.[54][55][56]A few months later they learned that some laptops are sold with a legitimate rootkit, known as AbsoluteCompuTraceor AbsoluteLoJack for Laptops, preinstalled in many BIOS images. This is an anti-thefttechnology system that researchers showed can be turned to malicious purposes.[24]
Intel Active Management Technology, part ofIntel vPro, implementsout-of-band management, giving administratorsremote administration,remote management, andremote controlof PCs with no involvement of the host processor or BIOS, even when the system is powered off. Remote administration includes remote power-up and power-down, remote reset, redirected boot, console redirection, pre-boot access to BIOS settings, programmable filtering for inbound and outbound network traffic, agent presence checking, out-of-band policy-based alerting, access to system information, such as hardware asset information, persistent event logs, and other information that is stored in dedicated memory (not on the hard drive) where it is accessible even if the OS is down or the PC is powered off. Some of these functions require the deepest level of rootkit, a second non-removable spy computer built around the main computer. Sandy Bridge and future chipsets have "the ability to remotely kill and restore a lost or stolen PC via 3G". Hardware rootkits built into thechipsetcan help recover stolen computers, remove data, or render them useless, but they also present privacy and security concerns of undetectable spying and redirection by management or hackers who might gain control.
Rootkits employ a variety of techniques to gain control of a system; the type of rootkit influences the choice of attack vector. The most common technique leveragessecurity vulnerabilitiesto achieve surreptitiousprivilege escalation. Another approach is to use aTrojan horse, deceiving a computer user into trusting the rootkit's installation program as benign—in this case,social engineeringconvinces a user that the rootkit is beneficial.[29]The installation task is made easier if theprinciple of least privilegeis not applied, since the rootkit then does not have to explicitly request elevated (administrator-level) privileges. Other classes of rootkits can be installed only by someone with physical access to the target system. Some rootkits may also be installed intentionally by the owner of the system or somebody authorized by the owner, e.g. for the purpose ofemployee monitoring, rendering such subversive techniques unnecessary.[57]Some malicious rootkit installations are commercially driven, with a pay-per-install (PPI) compensation method typical for distribution.[58][59]
Once installed, a rootkit takes active measures to obscure its presence within the host system through subversion or evasion of standard operating systemsecuritytools andapplication programming interface(APIs) used for diagnosis, scanning, and monitoring.[60]Rootkits achieve this by modifying the behavior ofcore parts of an operating systemthrough loading code into other processes, the installation or modification ofdrivers, orkernel modules. Obfuscation techniques include concealing running processes from system-monitoring mechanisms and hiding system files and other configuration data.[61]It is not uncommon for a rootkit to disable theevent loggingcapacity of an operating system, in an attempt to hide evidence of an attack. Rootkits can, in theory, subvertanyoperating system activities.[62]The "perfect rootkit" can be thought of as similar to a "perfect crime": one that nobody realizes has taken place. Rootkits also take a number of measures to ensure their survival against detection and "cleaning" by antivirus software in addition to commonly installing into Ring 0 (kernel-mode), where they have complete access to a system. These includepolymorphism(changing so their "signature" is hard to detect), stealth techniques, regeneration, disabling or turning off anti-malware software,[63]and not installing onvirtual machineswhere it may be easier for researchers to discover and analyze them.
The fundamental problem with rootkit detection is that if the operating system has been subverted, particularly by a kernel-level rootkit, it cannot be trusted to find unauthorized modifications to itself or its components.[62]Actions such as requesting a list of running processes, or a list of files in a directory, cannot be trusted to behave as expected. In other words, rootkit detectors that work while running on infected systems are only effective against rootkits that have some defect in their camouflage, or that run with lower user-mode privileges than the detection software in the kernel.[29]As withcomputer viruses, the detection and elimination of rootkits is an ongoing struggle between both sides of this conflict.[62]Detection can take a number of different approaches, including looking for virus "signatures" (e.g. antivirus software), integrity checking (e.g.digital signatures), difference-based detection (comparison of expected vs. actual results), and behavioral detection (e.g. monitoring CPU usage or network traffic).
For kernel-mode rootkits, detection is considerably more complex, requiring careful scrutiny of the System Call Table to look forhooked functionswhere the malware may be subverting system behavior,[64]as well asforensicscanning of memory for patterns that indicate hidden processes. Unix rootkit detection offerings include Zeppoo,[65]chkrootkit,rkhunterandOSSEC. For Windows, detection tools include Microsoft SysinternalsRootkitRevealer,[66]Avast Antivirus,[67]SophosAnti-Rootkit,[68]F-Secure,[69]Radix,[70]GMER,[71]andWindowsSCOPE. Any rootkit detectors that prove effective ultimately contribute to their own ineffectiveness, as malware authors adapt and test their code to escape detection by well-used tools.[Notes 1]Detection by examining storage while the suspect operating system is not operational can miss rootkits not recognised by the checking software, as the rootkit is not active and suspicious behavior is suppressed; conventional anti-malware software running with the rootkit operational may fail if the rootkit hides itself effectively.
The best and most reliable method for operating-system-level rootkit detection is to shut down the computer suspected of infection, and then to check itsstoragebybootingfrom an alternative trusted medium (e.g. a "rescue"CD-ROMorUSB flash drive).[72]The technique is effective because a rootkit cannot actively hide its presence if it is not running.
The behavioral-based approach to detecting rootkits attempts to infer the presence of a rootkit by looking for rootkit-like behavior. For example, byprofilinga system, differences in the timing and frequency of API calls or in overall CPU utilization can be attributed to a rootkit. The method is complex and is hampered by a high incidence offalse positives. Defective rootkits can sometimes introduce very obvious changes to a system: theAlureonrootkit crashed Windows systems after a security update exposed a design flaw in its code.[73][74]Logs from apacket analyzer,firewall, orintrusion prevention systemmay present evidence of rootkit behaviour in a networked environment.[26]
Antivirus products rarely catch all viruses in public tests (depending on what is used and to what extent), even though security software vendors incorporate rootkit detection into their products. Should a rootkit attempt to hide during an antivirus scan, a stealth detector may notice; if the rootkit attempts to temporarily unload itself from the system, signature detection (or "fingerprinting") can still find it.[75] This combined approach forces attackers to implement counterattack mechanisms, or "retro" routines, that attempt to terminate antivirus programs. Signature-based detection methods can be effective against well-published rootkits, but less so against specially crafted, custom-built rootkits.[62]
Another method that can detect rootkits compares "trusted" raw data with "tainted" content returned by anAPI. For example,binariespresent on disk can be compared with their copies withinoperating memory(in some operating systems, the in-memory image should be identical to the on-disk image), or the results returned fromfile systemorWindows RegistryAPIs can be checked against raw structures on the underlying physical disks[62][76]—however, in the case of the former, some valid differences can be introduced by operating system mechanisms like memory relocation orshimming. A rootkit may detect the presence of such a difference-based scanner orvirtual machine(the latter being commonly used to perform forensic analysis), and adjust its behaviour so that no differences can be detected. Difference-based detection was used byRussinovich'sRootkitRevealertool to find the Sony DRM rootkit.[1]
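On Linux, a crude illustration of the difference-based idea is to compare two independently obtained views of the running processes, for example the process IDs listed under /proc against the IDs that respond to a signal-0 existence probe. The sketch below is illustrative only: it assumes a /proc filesystem, can produce false positives for processes that start or exit between the two scans, and a kernel-level rootkit could of course subvert both views:

```python
# Crude difference-scan sketch (Linux-only, illustrative): compare the
# process IDs visible in /proc with PIDs that respond to signal 0.
import os

def pids_from_proc():
    """PIDs as reported by the /proc filesystem."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def pids_from_probe(max_pid=32768):
    """PIDs that answer an os.kill(pid, 0) existence probe."""
    alive = set()
    for pid in range(1, max_pid + 1):
        try:
            os.kill(pid, 0)          # signal 0: existence check only
            alive.add(pid)
        except ProcessLookupError:
            continue                 # no such process
        except PermissionError:
            alive.add(pid)           # exists, but owned by another user
    return alive

hidden = pids_from_probe() - pids_from_proc()
print("PIDs answering probes but missing from /proc:", sorted(hidden))
```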
Code signingusespublic-key infrastructureto check if a file has been modified since beingdigitally signedby its publisher. Alternatively, a system owner or administrator can use acryptographic hash functionto compute a "fingerprint" at installation time that can help to detect subsequent unauthorized changes to on-disk code libraries.[77]However, unsophisticated schemes check only whether the code has been modified since installation time; subversion prior to that time is not detectable. The fingerprint must be re-established each time changes are made to the system: for example, after installing security updates or aservice pack. The hash function creates amessage digest, a relatively short code calculated from each bit in the file using an algorithm that creates large changes in the message digest with even smaller changes to the original file. By recalculating and comparing the message digest of the installed files at regular intervals against a trusted list of message digests, changes in the system can be detected and monitored—as long as the original baseline was created before the malware was added.
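A minimal sketch of such fingerprinting, using Python's standard hashlib and json modules (file and helper names are illustrative), records a baseline of SHA-256 digests and later reports files whose digests have changed. As noted above, this only helps if the baseline predates any compromise and is stored where an attacker cannot rewrite it:

```python
# Illustrative sketch of hash-based fingerprinting: record a baseline of
# message digests at install time, then re-check for unexpected changes.
import hashlib, json, os

def digest(path):
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_baseline(paths, out="baseline.json"):
    """Record trusted digests for the given files."""
    with open(out, "w") as f:
        json.dump({p: digest(p) for p in paths}, f, indent=2)

def check_baseline(path="baseline.json"):
    """Report files whose current digest no longer matches the baseline."""
    with open(path) as f:
        baseline = json.load(f)
    for p, expected in baseline.items():
        actual = digest(p) if os.path.exists(p) else "<missing>"
        if actual != expected:
            print(f"CHANGED: {p}")
```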
More-sophisticated rootkits are able to subvert the verification process by presenting an unmodified copy of the file for inspection, or by making code modifications only in memory, reconfiguration registers, which are later compared to a white list of expected values.[78]The code that performs hash, compare, or extend operations must also be protected—in this context, the notion of animmutable root-of-trustholds that the very first code to measure security properties of a system must itself be trusted to ensure that a rootkit or bootkit does not compromise the system at its most fundamental level.[79]
Forcing a complete dump ofvirtual memorywill capture an active rootkit (or akernel dumpin the case of a kernel-mode rootkit), allowing offlineforensic analysisto be performed with adebuggeragainst the resultingdump file, without the rootkit being able to take any measures to cloak itself. This technique is highly specialized, and may require access to non-publicsource codeordebugging symbols. Memory dumps initiated by the operating system cannot always be used to detect a hypervisor-based rootkit, which is able to intercept and subvert the lowest-level attempts to read memory[6]—a hardware device, such as one that implements anon-maskable interrupt, may be required to dump memory in this scenario.[80][81]Virtual machinesalso make it easier to analyze the memory of a compromised machine from the underlying hypervisor, so some rootkits will avoid infecting virtual machines for this reason.
Manual removal of a rootkit is often extremely difficult for a typical computer user,[27]but a number of security-software vendors offer tools to automatically detect and remove some rootkits, typically as part of anantivirus suite. As of 2005[update], Microsoft's monthlyWindows Malicious Software Removal Toolis able to detect and remove some classes of rootkits.[82][83]Also, Windows Defender Offline can remove rootkits, as it runs from a trusted environment before the operating system starts.[84]Some antivirus scanners can bypassfile systemAPIs, which are vulnerable to manipulation by a rootkit. Instead, they access raw file system structures directly, and use this information to validate the results from the system APIs to identify any differences that may be caused by a rootkit.[Notes 2][85][86][87][88]There are experts who believe that the only reliable way to remove them is to re-install the operating system from trusted media.[89][90]This is because antivirus and malware removal tools running on an untrusted system may be ineffective against well-written kernel-mode rootkits. Booting an alternative operating system from trusted media can allow an infected system volume to be mounted and potentially safely cleaned and critical data to be copied off—or, alternatively, a forensic examination performed.[26]Lightweight operating systems such asWindows PE,Windows Recovery Console,Windows Recovery Environment,BartPE, orLive Distroscan be used for this purpose, allowing the system to be "cleaned". Even if the type and nature of a rootkit is known, manual repair may be impractical, while re-installing the operating system and applications is safer, simpler and quicker.[89]
Systemhardeningrepresents one of the first layers of defence against a rootkit, to prevent it from being able to install.[91]Applyingsecurity patches, implementing theprinciple of least privilege, reducing theattack surfaceand installing antivirus software are some standard security best practices that are effective against all classes of malware.[92]New secure boot specifications likeUEFIhave been designed to address the threat of bootkits, but even these are vulnerable if the security features they offer are not utilized.[50]For server systems, remote server attestation using technologies such as IntelTrusted Execution Technology(TXT) provide a way of verifying that servers remain in a known good state. For example,MicrosoftBitlocker's encryption of data-at-rest verifies that servers are in a known "good state" on bootup.PrivateCorevCage is a software offering that secures data-in-use (memory) to avoid bootkits and rootkits by verifying servers are in a known "good" state on bootup. The PrivateCore implementation works in concert with Intel TXT and locks down server system interfaces to avoid potential bootkits and rootkits.
Another defense mechanism, called the Virtual Wall (VTW) approach, serves as a lightweight hypervisor with rootkit detection and event tracing capabilities. In normal operation (guest mode), Linux runs; when a loaded LKM violates security policies, the system switches to host mode. The VTW in host mode detects, traces, and classifies rootkit events based on memory access control and event injection mechanisms. Experimental results demonstrate the VTW's effectiveness in timely detection of and defense against kernel rootkits with minimal CPU overhead (less than 2%). The VTW compares favorably to other defense schemes, with its authors emphasizing its simplicity of implementation and potential performance gains on Linux servers.[93]
|
https://en.wikipedia.org/wiki/Rootkit
|
Incomputer science, acontinuationis anabstract representationof thecontrol stateof acomputer program. A continuation implements (reifies) the program control state, i.e. the continuation is a data structure that represents the computational process at a given point in the process's execution; the created data structure can be accessed by the programming language, instead of being hidden in theruntime environment. Continuations are useful for encoding other control mechanisms in programming languages such asexceptions,generators,coroutines, and so on.
The "current continuation" or "continuation of the computation step" is the continuation that, from the perspective of running code, would be derived from the current point in a program's execution. The termcontinuationscan also be used to refer tofirst-class continuations, which are constructs that give aprogramming languagethe ability to save the execution state at any point and return to that point at a later point in the program, possibly multiple times.
The earliest description of continuations was made byAdriaan van Wijngaardenin September 1964. Wijngaarden spoke at the IFIP Working Conference on Formal Language Description Languages held in Baden bei Wien, Austria. As part of a formulation for anAlgol 60preprocessor, he called for a transformation of proper procedures intocontinuation-passing style,[1]though he did not use this name, and his intention was to simplify a program and thus make its result more clear.
Christopher Strachey,Christopher P. WadsworthandJohn C. Reynoldsbrought the termcontinuationinto prominence in their work in the field ofdenotational semanticsthat makes extensive use of continuations to allow sequential programs to be analysed in terms offunctional programmingsemantics.[1]
Steve Russell[2]invented the continuation in his secondLispimplementation for theIBM 704, though he did not name it.[3]
Reynolds (1993)gives a complete history of the discovery of continuations.
First-class continuations are a language's ability to completely control the execution order of instructions. They can be used to jump to a function that produced the call to the current function, or to a function that has previously exited. One can think of a first-class continuation as saving theexecutionstate of the program. True first-class continuations do not save program data – unlike aprocess image– only the execution context. This is illustrated by the "continuation sandwich" description:
Say you're in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there's a sandwich on the counter, and all the materials used to make it are gone. So you eat it. :-)[4]
In this description, the sandwich is part of the programdata(e.g., an object on the heap), and rather than calling a "make sandwich" routine and then returning, the person called a "make sandwich with current continuation" routine, which creates the sandwich and then continues where execution left off.
Schemewas the first full production system providing first "catch"[1]and thencall/cc. Bruce Duba introduced call/cc intoSML.
Continuations are also used in models of computation including denotational semantics, the actor model, process calculi, and lambda calculus. These models rely on programmers or semantics engineers to write mathematical functions in the so-called continuation-passing style. This means that each function consumes a function that represents the rest of the computation relative to this function call. To return a value, the function calls this "continuation function" with a return value; to abort the computation, it returns a value directly without invoking the continuation.
Functional programmers who write their programs incontinuation-passing stylegain the expressive power to manipulate the flow of control in arbitrary ways. The cost is that they must maintain the invariants of control and continuations by hand, which can be a highly complex undertaking (but see 'continuation-passing style' below).
Continuations simplify and clarify the implementation of several commondesign patterns, includingcoroutines/green threadsandexception handling, by providing the basic, low-level primitive which unifies these seemingly unconnected patterns. Continuations can provide elegant solutions to some difficult high-level problems, like programming a web server that supports multiple pages, accessed by the use of the forward and back buttons and by following links. TheSmalltalkSeasideweb framework uses continuations to great effect, allowing one to program the web server in procedural style, by switching continuations when switching pages.
More complex constructs for which"continuations provide an elegant description"[1]also exist. For example, inC,longjmpcan be used to jump from the middle of onefunctionto another, provided the second function is still active on the call stack, i.e. it is waiting (possibly through intermediate calls) for the first function to return. Other more complex examples includecoroutinesinSimula 67,Lua, andPerl; tasklets inStackless Python;generatorsinIconandPython; continuations inScala(starting in 2.8);fibersinRuby(starting in 1.9.1); thebacktrackingmechanism inProlog;monadsinfunctional programming; andthreads.
TheSchemeprogramming language includes the control operatorcall-with-current-continuation(abbreviated as: call/cc) with which a Scheme program can manipulate the flow of control:
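The listing that originally accompanied this sentence is not reproduced here; the following is a minimal sketch of how call/cc behaves, assuming only standard Scheme (the helper name f is purely illustrative):

```scheme
;; call/cc passes the current continuation, reified as a procedure,
;; to its argument.  Invoking that procedure abandons the computation
;; in progress and makes its argument the value of the call/cc form.

(define (f return)
  (return 2)   ; when `return` is a continuation, this jumps out of f
  3)           ; reached only when `return` is an ordinary procedure

(display (f (lambda (x) x))) (newline)                  ; prints 3
(display (call-with-current-continuation f)) (newline)  ; prints 2
```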
Using the above, the following code block defines a functiontestthat setsthe-continuationto the future execution state of itself:
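A sketch of that code block, reconstructing the standard example (only the global name the-continuation is fixed by the surrounding text; the rest is assumed):

```scheme
(define the-continuation #f)

(define (test)
  (let ((i 0))
    ;; Capture "the rest of test from this point" and stash it globally.
    (call-with-current-continuation
     (lambda (k) (set! the-continuation k)))
    ;; Each time the-continuation is invoked, execution resumes here,
    ;; with whatever value i had in the saved execution state.
    (set! i (+ i 1))
    i))
```

At a read–eval–print loop, repeated invocations resume the saved state (in a batch script, invoking the continuation would instead re-run the following top-level forms):

```scheme
> (test)
1
> (the-continuation)
2
> (the-continuation)
3
> (define another-k the-continuation)  ; keep a handle on this state
> (test)                               ; fresh i, fresh continuation
1
> (the-continuation)
2
> (another-k)                          ; resume the first test's state
4
```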
For a gentler introduction to this mechanism, seecall-with-current-continuation.
This example shows a possible usage of continuations to implementcoroutinesas separate threads.[5]
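The cited sample implementation is not reproduced here; the sketch below shows the usual shape of such code, assuming standard Scheme plus an exit procedure. The names *queue*, fork, yield and thread-exit are illustrative, not prescribed by the article.

```scheme
;; Cooperative threads from call/cc: a sketch, not a robust scheduler.
;; The ready "threads" are simply continuations held in a FIFO queue.

(define *queue* '())

(define (empty-queue?) (null? *queue*))

(define (enqueue k)
  (set! *queue* (append *queue* (list k))))

(define (dequeue)
  (let ((k (car *queue*)))
    (set! *queue* (cdr *queue*))
    k))

;; fork: remember "the rest of the caller" on the queue, then run proc now.
(define (fork proc)
  (call-with-current-continuation
   (lambda (k)
     (enqueue k)
     (proc))))

;; yield: remember "the rest of this thread", then resume the next one.
(define (yield)
  (call-with-current-continuation
   (lambda (k)
     (enqueue k)
     ((dequeue)))))

;; thread-exit: resume another thread if any remain, else stop the program.
(define (thread-exit)
  (if (empty-queue?)
      (exit)   ; R7RS (scheme process-context); adjust for your implementation
      ((dequeue))))
```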
The functions defined above allow for defining and executing threads throughcooperative multitasking, i.e. threads that yield control to the next one in a queue:
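Continuing the sketch, two hypothetical thread bodies print a label and a counter and yield after each line (the procedure name do-stuff-n-print and the labels are made up for illustration):

```scheme
;; A thread body: print "<label> <n>", yield, repeat with n+1.
(define (do-stuff-n-print label)
  (lambda ()
    (let loop ((n 0))
      (display label) (display " ") (display n) (newline)
      (yield)
      (loop (+ n 1)))))

;; Start two threads; each fork runs its thread until the first yield,
;; then control returns here.  thread-exit retires the "main" thread
;; and lets the queued threads run on, alternating with each other.
(fork (do-stuff-n-print "This is AAA"))
(fork (do-stuff-n-print "Hello from BBB"))
(thread-exit)
```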
With the hypothetical threads in the sketch above, a run interleaves their output along these lines (continuing indefinitely):
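```
This is AAA 0
Hello from BBB 0
This is AAA 1
Hello from BBB 1
This is AAA 2
Hello from BBB 2
...
```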
A program must allocate space in memory for the variables its functions use. Most programming languages use acall stackfor storing the variables needed because it allows for fast and simple allocating and automatic deallocation of memory. Other programming languages use aheapfor this, which allows for flexibility at a higher cost for allocating and deallocating memory. Both of these implementations have benefits and drawbacks in the context of continuations.[6]
Many programming languages exhibit first-class continuations under various names; examples mentioned in this article include Scheme'scall/cc, thecall/ccoperator introduced intoSMLby Bruce Duba, and continuations inScala(starting in 2.8).
In any language which supportsclosuresandproper tail calls, it is possible to write programs incontinuation-passing styleand manually implement call/cc. (In continuation-passing style, call/cc becomes a simple function that can be written withlambda.) This is a particularly common strategy inHaskell, where it is easy to construct a "continuation-passingmonad" (for example, theContmonad andContTmonad transformer in themtllibrary). The support forproper tail callsis needed because in continuation-passing style no function ever returns;allcalls are tail calls.
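As an illustration of this point, here is a small continuation-passing-style sketch in Scheme (all names are invented for the example); note that call/cc becomes an ordinary two-argument function:

```scheme
;; In CPS every procedure takes an extra argument k, the continuation,
;; and "returns" by tail-calling it.

(define (add-cps x y k)    (k (+ x y)))
(define (square-cps x k)   (k (* x x)))

;; (x + y)^2, with every call in tail position:
(define (square-of-sum-cps x y k)
  (add-cps x y
    (lambda (s) (square-cps s k))))

;; call/cc needs no special language support in CPS: it hands the
;; current continuation k to f both as f's continuation and as an
;; explicit escape procedure that discards its own continuation.
(define (call/cc-cps f k)
  (f (lambda (v ignored-k) (k v)) k))

(square-of-sum-cps 3 4 display)   ; prints 49
```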
One area that has seen practical use of continuations is inWeb programming.[7][8]The use of continuations shields the programmer from thestatelessnature of theHTTPprotocol. In the traditional model of web programming, the lack of state is reflected in the program's structure, leading to code constructed around a model that lends itself very poorly to expressing computational problems. Thus continuations enable code that has the useful properties associated withinversion of control, while avoiding its problems. "Inverting back the inversion of control or, Continuations versus page-centric programming"[9]is a paper that provides a good introduction to continuations applied to web programming.
Support for continuations varies widely. A programming language supportsre-invocablecontinuations if a continuation may be invoked repeatedly (even after it has already returned). Re-invocable continuations were introduced byPeter J. Landinusing hisJ(for Jump) operator that could transfer the flow of control back into the middle of a procedure invocation. Re-invocable continuations have also been called "re-entrant" in theRacketlanguage. However this use of the term "re-entrant" can be easily confused with its use in discussions ofmultithreading.
A more limited kind is theescape continuationthat may be used to escape the current context to a surrounding one. Many languages which do not explicitly support continuations supportexception handling, which is equivalent to escape continuations and can be used for the same purposes. C'ssetjmp/longjmpare also equivalent: they can only be used tounwind the stack. Escape continuations can also be used to implementtail call elimination.
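A small Scheme sketch of an escape continuation (the procedure name product is invented): call/cc is used only to jump out of a loop early, much as an exception or longjmp would.

```scheme
;; Multiply a list of numbers, but bail out with 0 as soon as a zero
;; is seen, without performing the remaining multiplications.
(define (product lst)
  (call-with-current-continuation
   (lambda (escape)               ; escape: jump straight out of product
     (let loop ((l lst) (acc 1))
       (cond ((null? l) acc)
             ((zero? (car l)) (escape 0))   ; unwind immediately
             (else (loop (cdr l) (* acc (car l)))))))))

(display (product '(1 2 3 0 4 5))) (newline)   ; prints 0
```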
One generalization of continuations aredelimited continuations. Continuation operators likecall/cccapture theentireremaining computation at a given point in the program and provide no way of delimiting this capture. Delimited continuation operators address this by providing two separate control mechanisms: apromptthat delimits a continuation operation and areificationoperator such asshiftorcontrol. Continuations captured using delimited operators thus only represent a slice of the program context.
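A brief sketch, assuming a Scheme that provides shift and reset (for example Racket via its racket/control library; these operators are not part of R7RS-small):

```scheme
;; reset marks the delimiter; shift captures the continuation only up
;; to that delimiter, as an ordinary composable function k.
(reset (+ 1 (shift k (k (k 5)))))   ; evaluates to 7
;; Here k behaves like (lambda (v) (+ 1 v)): (k 5) = 6, (k 6) = 7,
;; and 7 becomes the value of the whole reset expression.  An
;; undelimited call/cc at the same point would instead have captured
;; "everything after this expression", not just the (+ 1 []) slice.
```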
Continuations are the functional expression of theGOTOstatement, and the same caveats apply.[10]While they are a sensible option in some special cases such as web programming, use of continuations can result in code that is difficult to follow. In fact, theesoteric programming languageUnlambdaincludescall-with-current-continuationas one of its features solely because expressions involving it "tend to be hopelessly difficult to track down".[11]The external links below illustrate the concept in more detail.
In "Continuations and the nature of quantification",Chris Barkerintroduced the "continuation hypothesis", that
some linguistic expressions (in particular, QNPs [quantificational noun phrases]) have denotations that manipulate their own continuations.[12]
Barker argued that this hypothesis could be used to explain phenomena such asduality of NP meaning(e.g., the fact that the QNP "everyone" behaves very differently from the non-quantificational noun phrase "Bob" in contributing towards the meaning of a sentence like "Alice sees [Bob/everyone]"),scope displacement(e.g., that "a raindrop fell on every car" is interpreted typically as∀c∃r,fell(r,c){\displaystyle \forall c\exists r,{\mbox{fell}}(r,c)}rather than as∃r∀c,fell(r,c){\displaystyle \exists r\forall c,{\mbox{fell}}(r,c)}), andscope ambiguity(that a sentence like "someone saw everyone" may be ambiguous between∃x∀y,saw(x,y){\displaystyle \exists x\forall y,{\mbox{saw}}(x,y)}and∀y∃x,saw(x,y){\displaystyle \forall y\exists x,{\mbox{saw}}(x,y)}). He also observed that this idea is in a way just a natural extension ofRichard Montague's approachin "The Proper Treatment of Quantification in Ordinary English" (PTQ), writing that "with the benefit of hindsight, a limited form of continuation-passing is clearly discernible at the core of Montague’s (1973) PTQ treatment of NPs as generalized quantifiers".
The extent to which continuations can be used to explain other general phenomena in natural language is a topic of current research.[13]
|
https://en.wikipedia.org/wiki/Continuation
|
Inmathematics,quantalesare certainpartially orderedalgebraic structuresthat generalizelocales(point free topologies) as well as various multiplicativelatticesofidealsfromring theoryandfunctional analysis(C*-algebras,von Neumann algebras).[1]Quantales are sometimes referred to ascompleteresiduated semigroups.
Aquantaleis acomplete latticeQ{\displaystyle Q}with anassociativebinary operation∗:Q×Q→Q{\displaystyle \ast \colon Q\times Q\to Q}, called itsmultiplication, satisfying a distributive property such that
x∗(⋁i∈Iyi)=⋁i∈I(x∗yi){\displaystyle x\ast \left(\bigvee _{i\in I}y_{i}\right)=\bigvee _{i\in I}(x\ast y_{i})}
and
(⋁i∈Iyi)∗x=⋁i∈I(yi∗x){\displaystyle \left(\bigvee _{i\in I}y_{i}\right)\ast x=\bigvee _{i\in I}(y_{i}\ast x)}
for allx,yi∈Q{\displaystyle x,y_{i}\in Q}andi∈I{\displaystyle i\in I}(hereI{\displaystyle I}is anyindex set). The quantale isunitalif it has anidentity elemente{\displaystyle e}for its multiplication:
x∗e=x=e∗x{\displaystyle x\ast e=x=e\ast x}
for allx∈Q{\displaystyle x\in Q}. In this case, the quantale is naturally amonoidwith respect to its multiplication∗{\displaystyle \ast }.
A unital quantale may be defined equivalently as amonoidin the categorySupof completejoin-semilattices.
A unital quantale is an idempotentsemiringunder join and multiplication.
A unital quantale in which the identity is thetop elementof the underlying lattice is said to bestrictly two-sided(or simplyintegral).
Acommutative quantaleis a quantale whose multiplication iscommutative. Aframe, with its multiplication given by themeetoperation, is a typical example of a strictly two-sided commutative quantale. Another simple example is provided by theunit intervaltogether with its usualmultiplication.
Anidempotent quantaleis a quantale whose multiplication isidempotent. Aframeis the same as an idempotent strictly two-sided quantale.
Aninvolutive quantaleis a quantale with an involution
x↦x∘{\displaystyle x\mapsto x^{\circ }}satisfying(x∘)∘=x{\displaystyle (x^{\circ })^{\circ }=x}and(x∗y)∘=y∘∗x∘{\displaystyle (x\ast y)^{\circ }=y^{\circ }\ast x^{\circ }}
that preserves joins:
(⋁i∈Ixi)∘=⋁i∈Ixi∘{\displaystyle \left(\bigvee _{i\in I}x_{i}\right)^{\circ }=\bigvee _{i\in I}x_{i}^{\circ }}
Aquantalehomomorphismis amapf:Q1→Q2{\displaystyle f\colon Q_{1}\to Q_{2}}that preserves joins and multiplication for allx,y,xi∈Q1{\displaystyle x,y,x_{i}\in Q_{1}}andi∈I{\displaystyle i\in I}:
f(x∗y)=f(x)∗f(y){\displaystyle f(x\ast y)=f(x)\ast f(y)}
f(⋁i∈Ixi)=⋁i∈If(xi){\displaystyle f\left(\bigvee _{i\in I}x_{i}\right)=\bigvee _{i\in I}f(x_{i})}
|
https://en.wikipedia.org/wiki/Quantale
|
Inmathematics, aDirac measureassigns a size to a set based solely on whether it contains a fixed elementxor not. It is one way of formalizing the idea of theDirac delta function, an important tool in physics and other technical fields.
ADirac measureis ameasureδxon a setX(with anyσ-algebraofsubsetsofX) defined for a givenx∈Xand any(measurable) setA⊆Xby
δx(A)=1A(x)={0,x∉A,1,x∈A{\displaystyle \delta _{x}(A)=1_{A}(x)={\begin{cases}0,&x\notin A,\\1,&x\in A\end{cases}}}
where1Ais theindicator functionofA.
The Dirac measure is aprobability measure, and in terms of probability it represents thealmost sureoutcomexin thesample spaceX. We can also say that the measure is a singleatomatx; however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of Dirac delta, as the limit of adelta sequence. The Dirac measures are theextreme pointsof the convex set of probability measures onX.
The name is a back-formation from theDirac delta function; considered as aSchwartz distribution, for example on thereal line, measures can be taken to be a special kind of distribution. The identity
∫Xf(y)dδx(y)=f(x){\displaystyle \int _{X}f(y)\,\mathrm {d} \delta _{x}(y)=f(x)}
which, in the form
∫Xf(y)δ(x−y)dy=f(x){\displaystyle \int _{X}f(y)\,\delta (x-y)\,\mathrm {d} y=f(x)}
is often taken to be part of the definition of the "delta function", holds as a theorem ofLebesgue integration.
Letδxdenote the Dirac measure centred on some fixed pointxin somemeasurable space(X, Σ).
Suppose that(X,T)is atopological spaceand thatΣis at least as fine as theBorelσ-algebraσ(T)onX.
Adiscrete measureis similar to the Dirac measure, except that it is concentrated at countably many points instead of a single point. More formally, ameasureon thereal lineis called adiscrete measure(in respect to theLebesgue measure) if itssupportis at most acountable set.
|
https://en.wikipedia.org/wiki/Dirac_measure
|
Thehistory of mathematicsdeals with the origin of discoveries inmathematicsand themathematical methods and notation of the past. Before themodern ageand the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. From 3000 BC theMesopotamianstates ofSumer,AkkadandAssyria, followed closely byAncient Egyptand the Levantine state ofEblabegan usingarithmetic,algebraandgeometryfor purposes oftaxation,commerce, trade and also in the field ofastronomyto record time and formulatecalendars.
The earliest mathematical texts available are fromMesopotamiaandEgypt–Plimpton 322(Babylonianc.2000– 1900 BC),[2]theRhind Mathematical Papyrus(Egyptianc. 1800 BC)[3]and theMoscow Mathematical Papyrus(Egyptian c. 1890 BC). All of these texts mention the so-calledPythagorean triples, so, by inference, thePythagorean theoremseems to be the most ancient and widespread mathematical development after basic arithmetic and geometry.
The study of mathematics as a "demonstrative discipline" began in the 6th century BC with thePythagoreans, who coined the term "mathematics" from the ancientGreekμάθημα(mathema), meaning "subject of instruction".[4]Greek mathematicsgreatly refined the methods (especially through the introduction of deductive reasoning andmathematical rigorinproofs) and expanded the subject matter of mathematics.[5]Theancient Romansusedapplied mathematicsinsurveying,structural engineering,mechanical engineering,bookkeeping, creation oflunarandsolar calendars, and evenarts and crafts.Chinese mathematicsmade early contributions, including aplace value systemand the first use ofnegative numbers.[6][7]TheHindu–Arabic numeral systemand the rules for the use of its operations, in use throughout the world today evolved over the course of the first millennium AD inIndiaand were transmitted to theWestern worldviaIslamic mathematicsthrough the work ofMuḥammad ibn Mūsā al-Khwārizmī.[8][9]Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations.[10]Contemporaneous with but independent of these traditions were the mathematics developed by theMaya civilizationofMexicoandCentral America, where the concept ofzerowas given a standard symbol inMaya numerals.
Many Greek and Arabic texts on mathematics weretranslated into Latinfrom the 12th century onward, leading to further development of mathematics inMedieval Europe. From ancient times through theMiddle Ages, periods of mathematical discovery were often followed by centuries of stagnation.[11]Beginning inRenaissanceItalyin the 15th century, new mathematical developments, interacting with new scientific discoveries, were made at anincreasing pacethat continues through the present day. This includes the groundbreaking work of bothIsaac NewtonandGottfried Wilhelm Leibnizin the development of infinitesimalcalculusduring the course of the 17th century and following discoveries ofGerman mathematicianslikeCarl Friedrich GaussandDavid Hilbert.
The origins of mathematical thought lie in the concepts ofnumber,patterns in nature,magnitude, andform.[12]Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life inhunter-gatherersocieties. The idea of the "number" concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two.[12]
The use of yarn byNeanderthalssome 40,000 years ago at a site in Abri du Maras in the south ofFrancesuggests they knew basic concepts in mathematics.[13][14]TheIshango bone, found near the headwaters of theNileriver (northeasternCongo), may be more than20,000years old and consists of a series of marks carved in three columns running the length of the bone. Common interpretations are that the Ishango bone shows either atallyof the earliest known demonstration ofsequencesofprime numbers[15]or a six-month lunar calendar.[16]Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10."[17]The Ishango bone, according to scholarAlexander Marshack, may have influenced the later development of mathematics in Egypt as, like some entries on the Ishango bone, Egyptian arithmetic also made use of multiplication by 2; this however, is disputed.[18]
Predynastic Egyptiansof the 5th millennium BC pictorially represented geometric designs. It has been claimed thatmegalithicmonuments inEnglandandScotland, dating from the 3rd millennium BC, incorporate geometric ideas such ascircles,ellipses, andPythagorean triplesin their design.[19]All of the above are disputed however, and the currently oldest undisputed mathematical documents are from Babylonian and dynastic Egyptian sources.[20]
Babylonianmathematics refers to any mathematics of the peoples ofMesopotamia(modernIraq) from the days of the earlySumeriansthrough theHellenistic periodalmost to the dawn ofChristianity.[21]The majority of Babylonian mathematical work comes from two widely separated periods: The first few hundred years of the second millennium BC (Old Babylonian period), and the last few centuries of the first millennium BC (Seleucidperiod).[22]It is named Babylonian mathematics due to the central role ofBabylonas a place of study. Later under theArab Empire, Mesopotamia, especiallyBaghdad, once again became an important center of study forIslamic mathematics.
In contrast to the sparsity of sources inEgyptian mathematics, knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s.[23]Written inCuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework.[24]
The earliest evidence of written mathematics dates back to the ancientSumerians, who built the earliest civilization in Mesopotamia. They developed a complex system ofmetrologyfrom 3000 BC that was chiefly concerned with administrative/financial counting, such as grain allotments, workers, weights of silver, or even liquids, among other things.[25]From around 2500 BC onward, the Sumerians wrotemultiplication tableson clay tablets and dealt with geometrical exercises anddivisionproblems. The earliest traces of the Babylonian numerals also date back to this period.[26]
Babylonian mathematics were written using asexagesimal(base-60)numeral system.[23]From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of seconds and minutes of arc to denote fractions of a degree. It is thought the sexagesimal system was initially used by Sumerian scribes because 60 can be evenly divided by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30,[23]and for scribes (doling out the aforementioned grain allotments, recording weights of silver, etc.) being able to easily calculate by hand was essential, and so a sexagesimal system is pragmatically easier to calculate by hand with; however, there is the possibility that using a sexagesimal system was an ethno-linguistic phenomenon (that might not ever be known), and not a mathematical/practical decision.[27]Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a place-value system, where digits written in the left column represented larger values, much as in thedecimalsystem. The power of the Babylonian notational system lay in that it could be used to represent fractions as easily as whole numbers; thus multiplying two numbers that contained fractions was no different from multiplying integers, similar to modern notation. The notational system of the Babylonians was the best of any civilization until theRenaissance, and its power allowed it to achieve remarkable computational accuracy; for example, the Babylonian tabletYBC 7289gives an approximation of√2accurate to five decimal places.[28]The Babylonians lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context.[22]By the Seleucid period, the Babylonians had developed a zero symbol as a placeholder for empty positions; however it was only used for intermediate positions.[22]This zero sign does not appear in terminal positions, thus the Babylonians came close but did not develop a true place value system.[22]
Other topics covered by Babylonian mathematics include fractions, algebra, quadratic and cubic equations, and the calculation ofregular numbers, and theirreciprocalpairs.[29]The tablets also include multiplication tables and methods for solvinglinear,quadratic equationsandcubic equations, a remarkable achievement for the time.[30]Tablets from the Old Babylonian period also contain the earliest known statement of thePythagorean theorem.[31]However, as with Egyptian mathematics, Babylonian mathematics shows no awareness of the difference between exact and approximate solutions, or the solvability of a problem, and most importantly, no explicit statement of the need forproofsor logical principles.[24]
Egyptianmathematics refers to mathematics written in theEgyptian language. From theHellenistic period,Greekreplaced Egyptian as the written language ofEgyptianscholars. Mathematical study inEgyptlater continued under theArab Empireas part ofIslamic mathematics, whenArabicbecame the written language of Egyptian scholars. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa.[32]Also, fractal geometry designs which are widespread among Sub-Saharan African cultures are also found in Egyptian architecture and cosmological signs.[33]
The most extensive Egyptian mathematical text is theRhind papyrus(sometimes also called the Ahmes Papyrus after its author), dated to c. 1650 BC but likely a copy of an older document from theMiddle Kingdomof about 2000–1800 BC.[34]It is an instruction manual for students in arithmetic and geometry. In addition to giving area formulas and methods for multiplication, division and working with unit fractions, it also contains evidence of other mathematical knowledge,[35]includingcompositeandprime numbers;arithmetic,geometricandharmonic means; and simplistic understandings of both theSieve of Eratosthenesandperfect number theory(namely, that of the number 6).[36]It also shows how to solve first orderlinear equations[37]as well asarithmeticandgeometric series.[38]
Another significant Egyptian mathematical text is theMoscow papyrus, also from theMiddle Kingdomperiod, dated to c. 1890 BC.[39]It consists of what are today calledword problemsorstory problems, which were apparently intended as entertainment. One problem is considered to be of particular importance because it gives a method for finding the volume of afrustum(truncated pyramid).
Finally, theBerlin Papyrus 6619(c. 1800 BC) shows that ancient Egyptians could solve a second-orderalgebraic equation.[40]
Greek mathematics refers to the mathematics written in theGreek languagefrom the time ofThales of Miletus(~600 BC) to the closure of theAcademy of Athensin 529 AD.[41]Greek mathematicians lived in cities spread over the entire Eastern Mediterranean, from Italy to North Africa, but were united by culture and language. Greek mathematics of the period followingAlexander the Greatis sometimes calledHellenisticmathematics.[42]
Greek mathematics was much more sophisticated than the mathematics that had been developed by earlier cultures. All surviving records of pre-Greek mathematics show the use ofinductive reasoning, that is, repeated observations used to establish rules of thumb. Greek mathematicians, by contrast, useddeductive reasoning. The Greeks used logic to derive conclusions from definitions and axioms, and usedmathematical rigorto prove them.[43]
Greek mathematics is thought to have begun withThales of Miletus(c. 624–c.546 BC) andPythagoras of Samos(c. 582–c. 507 BC). Although the extent of the influence is disputed, they were probably inspired byEgyptianandBabylonian mathematics. According to legend, Pythagoras traveled to Egypt to learn mathematics, geometry, and astronomy from Egyptian priests.
Thales usedgeometryto solve problems such as calculating the height ofpyramidsand the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries toThales' Theorem. As a result, he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed.[44]Pythagoras established thePythagorean School, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number".[45]It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins. The Pythagoreans are credited with the first proof of thePythagorean theorem,[46]though the statement of the theorem has a long history, and with the proof of the existence ofirrational numbers.[47][48]Although he was preceded by theBabylonians,Indiansand theChinese,[49]theNeopythagoreanmathematicianNicomachus(60–120 AD) provided one of the earliestGreco-Romanmultiplication tables, whereas the oldest extant Greek multiplication table is found on a wax tablet dated to the 1st century AD (now found in theBritish Museum).[50]The association of the Neopythagoreans with the Western invention of the multiplication table is evident in its laterMedievalname: themensa Pythagorica.[51]
Plato(428/427 BC – 348/347 BC) is important in the history of mathematics for inspiring and guiding others.[52]HisPlatonic Academy, inAthens, became the mathematical center of the world in the 4th century BC, and it was from this school that the leading mathematicians of the day, such asEudoxus of Cnidus(c. 390 - c. 340 BC), came.[53]Plato also discussed the foundations of mathematics[54]and clarified some of the definitions (e.g. that of a line as "breadthless length").
Eudoxus developed themethod of exhaustion, a precursor of modernintegration[55]and a theory of ratios that avoided the problem ofincommensurable magnitudes.[56]The former allowed the calculations of areas and volumes of curvilinear figures,[57]while the latter enabled subsequent geometers to make significant advances in geometry. Though he made no specific technical mathematical discoveries,Aristotle(384–c.322 BC) contributed significantly to the development of mathematics by laying the foundations oflogic.[58]
In the 3rd century BC, the premier center of mathematical education and research was theMusaeumofAlexandria.[60]It was there thatEuclid(c.300 BC) taught, and wrote theElements, widely considered the most successful and influential textbook of all time.[1]TheElementsintroducedmathematical rigorthrough theaxiomatic methodand is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of theElementswere already known, Euclid arranged them into a single, coherent logical framework.[61]TheElementswas known to all educated people in the West up through the middle of the 20th century and its contents are still taught in geometry classes today.[62]In addition to the familiar theorems ofEuclidean geometry, theElementswas meant as an introductory textbook to all mathematical subjects of the time, such asnumber theory,algebraandsolid geometry,[61]including proofs that the square root of two is irrational and that there are infinitely many prime numbers. Euclid alsowrote extensivelyon other subjects, such asconic sections,optics,spherical geometry, and mechanics, but only half of his writings survive.[63]
Archimedes(c.287–212 BC) ofSyracuse, widely considered the greatest mathematician of antiquity,[64]used themethod of exhaustionto calculate theareaunder the arc of aparabolawith thesummation of an infinite series, in a manner not too dissimilar from modern calculus.[65]He also showed one could use the method of exhaustion to calculate the value of π with as much precision as desired, and obtained the most accurate value of π then known,3+10/71< π < 3+10/70.[66]He also studied thespiralbearing his name, obtained formulas for thevolumesofsurfaces of revolution(paraboloid, ellipsoid, hyperboloid),[65]and an ingenious method ofexponentiationfor expressing very large numbers.[67]While he is also known for his contributions to physics and several advanced mechanical devices, Archimedes himself placed far greater value on the products of his thought and general mathematical principles.[68]He regarded as his greatest achievement his finding of the surface area and volume of a sphere, which he obtained by proving these are 2/3 the surface area and volume of a cylinder circumscribing the sphere.[69]
Apollonius of Perga(c.262–190 BC) made significant advances to the study ofconic sections, showing that one can obtain all three varieties of conic section by varying the angle of the plane that cuts a double-napped cone.[70]He also coined the terminology in use today for conic sections, namelyparabola("place beside" or "comparison"), "ellipse" ("deficiency"), and "hyperbola" ("a throw beyond").[71]His workConicsis one of the best known and preserved mathematical works from antiquity, and in it he derives many theorems concerning conic sections that would prove invaluable to later mathematicians and astronomers studying planetary motion, such as Isaac Newton.[72]While neither Apollonius nor any other Greek mathematicians made the leap to coordinate geometry, Apollonius' treatment of curves is in some ways similar to the modern treatment, and some of his work seems to anticipate the development of analytical geometry by Descartes some 1800 years later.[73]
Around the same time,Eratosthenes of Cyrene(c.276–194 BC) devised theSieve of Eratosthenesfor findingprime numbers.[74]The 3rd century BC is generally regarded as the "Golden Age" of Greek mathematics, with advances in pure mathematics henceforth in relative decline.[75]Nevertheless, in the centuries that followed significant advances were made in applied mathematics, most notablytrigonometry, largely to address the needs of astronomers.[75]Hipparchus of Nicaea(c.190–120 BC) is considered the founder of trigonometry for compiling the first known trigonometric table, and to him is also due the systematic use of the 360 degree circle.[76]Heron of Alexandria(c.10–70 AD) is credited withHeron's formulafor finding the area of a scalene triangle and with being the first to recognize the possibility of negative numbers possessing square roots.[77]Menelaus of Alexandria(c.100 AD) pioneeredspherical trigonometrythroughMenelaus' theorem.[78]The most complete and influential trigonometric work of antiquity is theAlmagestofPtolemy(c.AD 90–168), a landmark astronomical treatise whose trigonometric tables would be used by astronomers for the next thousand years.[79]Ptolemy is also credited withPtolemy's theoremfor deriving trigonometric quantities, and the most accurate value of π outside of China until the medieval period, 3.1416.[80]
Following a period of stagnation after Ptolemy, the period between 250 and 350 AD is sometimes referred to as the "Silver Age" of Greek mathematics.[81]During this period,Diophantusmade significant advances in algebra, particularlyindeterminate analysis, which is also known as "Diophantine analysis".[82]The study ofDiophantine equationsandDiophantine approximationsis a significant area of research to this day. His main work was theArithmetica, a collection of 150 algebraic problems dealing with exact solutions to determinate andindeterminate equations.[83]TheArithmeticahad a significant influence on later mathematicians, such asPierre de Fermat, who arrived at his famousLast Theoremafter trying to generalize a problem he had read in theArithmetica(that of dividing a square into two squares).[84]Diophantus also made significant advances in notation, theArithmeticabeing the first instance of algebraic symbolism and syncopation.[83]
Among the last great Greek mathematicians isPappus of Alexandria(4th century AD). He is known for hishexagon theoremandcentroid theorem, as well as thePappus configurationandPappus graph. HisCollectionis a major source of knowledge on Greek mathematics as most of it has survived.[85]Pappus is considered the last major innovator in Greek mathematics, with subsequent work consisting mostly of commentaries on earlier work.
The first woman mathematician recorded by history wasHypatiaof Alexandria (AD 350–415). She succeeded her father (Theon of Alexandria) as Librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, theChristian communityin Alexandria had her stripped publicly and executed.[86]Her death is sometimes taken as the end of the era of the Alexandrian Greek mathematics, although work did continue in Athens for another century with figures such asProclus,SimpliciusandEutocius.[87]Although Proclus and Simplicius were more philosophers than mathematicians, their commentaries on earlier works are valuable sources on Greek mathematics. The closure of the neo-PlatonicAcademy of Athensby the emperorJustinianin 529 AD is traditionally held as marking the end of the era of Greek mathematics, although the Greek tradition continued unbroken in theByzantine empirewith mathematicians such asAnthemius of TrallesandIsidore of Miletus, the architects of theHagia Sophia.[88]Nevertheless, Byzantine mathematics consisted mostly of commentaries, with little in the way of innovation, and the centers of mathematical innovation were to be found elsewhere by this time.[89]
Althoughethnic Greekmathematicians continued under the rule of the lateRoman Republicand subsequentRoman Empire, there were no noteworthynative Latinmathematicians in comparison.[90][91]Ancient Romanssuch asCicero(106–43 BC), an influential Roman statesman who studied mathematics in Greece, believed that Romansurveyorsandcalculatorswere far more interested inapplied mathematicsthan thetheoretical mathematicsand geometry that were prized by the Greeks.[92]It is unclear if the Romans first derivedtheir numerical systemdirectly fromthe Greek precedentor fromEtruscan numeralsused by theEtruscan civilizationcentered in what is nowTuscany,central Italy.[93]
Using calculation, Romans were adept at both instigating and detecting financialfraud, as well asmanaging taxesfor thetreasury.[94]Siculus Flaccus, one of the Romangromatici(i.e. land surveyor), wrote theCategories of Fields, which aided Roman surveyors in measuring thesurface areasof allotted lands and territories.[95]Aside from managing trade and taxes, the Romans also regularly applied mathematics to solve problems inengineering, including the erection ofarchitecturesuch asbridges,road-building, andpreparation for military campaigns.[96]Arts and craftssuch asRoman mosaics, inspired by previousGreek designs, created illusionist geometric patterns and rich, detailed scenes that required precise measurements for eachtesseratile, theopus tessellatumpieces on average measuring eight millimeters square and the fineropus vermiculatumpieces having an average surface of four millimeters square.[97][98]
The creation of theRoman calendaralso necessitated basic mathematics. The first calendar allegedly dates back to 8th century BC during theRoman Kingdomand included 356 days plus aleap yearevery other year.[99]In contrast, thelunar calendarof the Republican era contained 355 days, roughly ten-and-one-fourth days shorter than thesolar year, a discrepancy that was solved by adding an extra month into the calendar after the 23rd of February.[100]This calendar was supplanted by theJulian calendar, asolar calendarorganized byJulius Caesar(100–44 BC) and devised bySosigenes of Alexandriato include aleap dayevery four years in a 365-day cycle.[101]This calendar, which contained an error of 11 minutes and 14 seconds, was later corrected by theGregorian calendarorganized byPope Gregory XIII(r.1572–1585), virtually the same solar calendar used in modern times as the international standard calendar.[102]
At roughly the same time,the Han Chineseand the Romans both invented the wheeledodometerdevice for measuringdistancestraveled, the Roman model first described by the Roman civil engineer and architectVitruvius(c.80 BC– c.15 BC).[103]The device was used at least until the reign of emperorCommodus(r.177 – 192 AD), but its design seems to have been lost until experiments were made during the 15th century in Western Europe.[104]Perhaps relying on similar gear-work andtechnologyfound in theAntikythera mechanism, the odometer of Vitruvius featured chariot wheels measuring 4 feet (1.2 m) in diameter turning four-hundred times in oneRoman mile(roughly 4590 ft/1400 m). With each revolution, a pin-and-axle device engaged a 400-toothcogwheelthat turned a second gear responsible for dropping pebbles into a box, each pebble representing one mile traversed.[105]
An analysis of early Chinese mathematics has demonstrated its unique development compared to other parts of the world, leading scholars to assume an entirely independent development.[106]The oldest extant mathematical text from China is theZhoubi Suanjing(周髀算經), variously dated to between 1200 BC and 100 BC, though a date of about 300 BC during theWarring States Periodappears reasonable.[107]However, theTsinghua Bamboo Slips, containing the earliest knowndecimalmultiplication table(although ancient Babylonians had ones with a base of 60), is dated around 305 BC and is perhaps the oldest surviving mathematical text of China.[49]
Of particular note is the use in Chinese mathematics of a decimal positional notation system, the so-called "rod numerals" in which distinct ciphers were used for numbers between 1 and 10, and additional ciphers for powers of ten.[108]Thus, the number 123 would be written using the symbol for "1", followed by the symbol for "100", then the symbol for "2" followed by the symbol for "10", followed by the symbol for "3". This was the most advanced number system in the world at the time, apparently in use several centuries before the common era and well before the development of the Indian numeral system.[109]Rod numeralsallowed the representation of numbers as large as desired and allowed calculations to be carried out on thesuan pan, or Chinese abacus. The date of the invention of thesuan panis not certain, but the earliest written mention dates from AD 190, inXu Yue'sSupplementary Notes on the Art of Figures.
The oldest extant work on geometry in China comes from the philosophicalMohistcanonc.330 BC, compiled by the followers ofMozi(470–390 BC). TheMo Jingdescribed various aspects of many fields associated with physical science, and provided a small number of geometrical theorems as well.[110]It also defined the concepts ofcircumference,diameter,radius, andvolume.[111]
In 212 BC, the EmperorQin Shi Huangcommanded all books in theQin Empireother than officially sanctioned ones be burned. This decree was not universally obeyed, but as a consequence of this order little is known about ancient Chinese mathematics before this date. After thebook burningof 212 BC, theHan dynasty(202 BC–220 AD) produced works of mathematics which presumably expanded on works that are now lost. The most important of these isThe Nine Chapters on the Mathematical Art, the full title of which appeared by AD 179, but existed in part under other titles beforehand. It consists of 246 word problems involving agriculture, business, employment of geometry to figure height spans and dimension ratios forChinese pagodatowers, engineering,surveying, and includes material onright triangles.[107]It created mathematical proof for thePythagorean theorem,[112]and a mathematical formula forGaussian elimination.[113]The treatise also provides values ofπ,[107]which Chinese mathematicians originally approximated as 3 untilLiu Xin(d. 23 AD) provided a figure of 3.1457 and subsequentlyZhang Heng(78–139) approximated pi as 3.1724,[114]as well as 3.162 by taking thesquare rootof 10.[115][116]Liu Huicommented on theNine Chaptersin the 3rd century AD andgave a value of πaccurate to 5 decimal places (i.e. 3.14159).[117][118]Though more of a matter of computational stamina than theoretical insight, in the 5th century ADZu Chongzhicomputedthe value of πto seven decimal places (between 3.1415926 and 3.1415927), which remained the most accurate value of π for almost the next 1000 years.[117][119]He also established a method which would later be calledCavalieri's principleto find the volume of asphere.[120]
The high-water mark of Chinese mathematics occurred in the 13th century during the latter half of theSong dynasty(960–1279), with the development of Chinese algebra. The most important text from that period is thePrecious Mirror of the Four ElementsbyZhu Shijie(1249–1314), dealing with the solution of simultaneous higher order algebraic equations using a method similar toHorner's method.[117]ThePrecious Mirroralso contains a diagram ofPascal's trianglewith coefficients of binomial expansions through the eighth power, though both appear in Chinese works as early as 1100.[121]The Chinese also made use of the complex combinatorial diagram known as themagic squareandmagic circles, described in ancient times and perfected byYang Hui(AD 1238–1298).[121]
Even after European mathematics began to flourish during theRenaissance, European and Chinese mathematics were separate traditions, with significant Chinese mathematical output in decline from the 13th century onwards.Jesuitmissionaries such asMatteo Riccicarried mathematical ideas back and forth between the two cultures from the 16th to 18th centuries, though at this point far more mathematical ideas were entering China than leaving.[121]
Japanese mathematics,Korean mathematics, andVietnamese mathematicsare traditionally viewed as stemming from Chinese mathematics and belonging to theConfucian-basedEast Asian cultural sphere.[122]Korean and Japanese mathematics were heavily influenced by the algebraic works produced during China's Song dynasty, whereas Vietnamese mathematics was heavily indebted to popular works of China'sMing dynasty(1368–1644).[123]For instance, although Vietnamese mathematical treatises were written in eitherChineseor the native VietnameseChữ Nômscript, all of them followed the Chinese format of presenting a collection of problems withalgorithmsfor solving them, followed by numerical answers.[124]Mathematics in Vietnam and Korea were mostly associated with the professional court bureaucracy ofmathematicians and astronomers, whereas in Japan it was more prevalent in the realm ofprivate schools.[125]
The earliest civilization on the Indian subcontinent is theIndus Valley civilization(mature second phase: 2600 to 1900 BC) that flourished in theIndus riverbasin. Their cities were laid out with geometric regularity, but no known mathematical documents survive from this civilization.[127]
The oldest extant mathematical records from India are theSulba Sutras(dated variously between the 8th century BC and the 2nd century AD),[128]appendices to religious texts which give simple rules for constructing altars of various shapes, such as squares, rectangles, parallelograms, and others.[129]As with Egypt, the preoccupation with temple functions points to an origin of mathematics in religious ritual.[128]The Sulba Sutras give methods for constructing acircle with approximately the same area as a given square, which imply several different approximations of the value of π.[130][131][a]In addition, they compute thesquare rootof 2 to several decimal places, list Pythagorean triples, and give a statement of thePythagorean theorem.[131]All of these results are present in Babylonian mathematics, indicating Mesopotamian influence.[128]It is not known to what extent the Sulba Sutras influenced later Indian mathematicians. As in China, there is a lack of continuity in Indian mathematics; significant advances are separated by long periods of inactivity.[128]
Pāṇini(c. 5th century BC) formulated the rules forSanskrit grammar.[132]His notation was similar to modern mathematical notation, and used metarules,transformations, andrecursion.[133]Pingala(roughly 3rd–1st centuries BC) in his treatise ofprosodyuses a device corresponding to abinary numeral system.[134][135]His discussion of thecombinatoricsofmeterscorresponds to an elementary version of thebinomial theorem. Pingala's work also contains the basic ideas ofFibonacci numbers(calledmātrāmeru).[136]
The next significant mathematical documents from India after theSulba Sutrasare theSiddhantas, astronomical treatises from the 4th and 5th centuries AD (Gupta period) showing strong Hellenistic influence.[137]They are significant in that they contain the first instance of trigonometric relations based on the half-chord, as is the case in modern trigonometry, rather than the full chord, as was the case in Ptolemaic trigonometry.[138]Through a series of translation errors, the words "sine" and "cosine" derive from the Sanskrit "jiya" and "kojiya".[138]
Around 500 AD,Aryabhatawrote theAryabhatiya, a slim volume, written in verse, intended to supplement the rules of calculation used in astronomy and mathematical mensuration, though with no feeling for logic or deductive methodology.[139]It is in theAryabhatiyathat the decimal place-value system first appears. Several centuries later, theMuslim mathematicianAbu Rayhan Birunidescribed theAryabhatiyaas a "mix of common pebbles and costly crystals".[140]
In the 7th century,Brahmaguptaidentified theBrahmagupta theorem,Brahmagupta's identityandBrahmagupta's formula, and for the first time, inBrahma-sphuta-siddhanta, he lucidly explained the use ofzeroas both a placeholder anddecimal digit, and explained theHindu–Arabic numeral system.[141]It was from a translation of this Indian text on mathematics (c. 770) that Islamic mathematicians were introduced to this numeral system, which they adapted asArabic numerals. Islamic scholars carried knowledge of this number system to Europe by the 12th century, and it has now displaced all older number systems throughout the world. Various symbol sets are used to represent numbers in the Hindu–Arabic numeral system, all of which evolved from theBrahmi numerals. Each of the roughly dozen major scripts of India has its own numeral glyphs. In the 10th century,Halayudha's commentary onPingala's work contains a study of theFibonacci sequence[142]andPascal's triangle,[143]and describes the formation of amatrix.
In the 12th century,Bhāskara II,[144]who lived in southern India, wrote extensively on all then known branches of mathematics. His work contains mathematical objects equivalent or approximately equivalent to infinitesimals,the mean value theoremand the derivative of the sine function although he did not develop the notion of a derivative.[145][146]In the 14th century,Narayana Panditacompleted hisGanita Kaumudi.[147]
Also in the 14th century,Madhava of Sangamagrama, the founder of theKerala School of Mathematics, found theMadhava–Leibniz seriesand obtained from it atransformed series, whose first 21 terms he used to compute the value of π as 3.14159265359. Madhava also foundthe Madhava-Gregory seriesto determine the arctangent, the Madhava-Newtonpower seriesto determine sine and cosine andthe Taylor approximationfor sine and cosine functions.[148]In the 16th century,Jyesthadevaconsolidated many of the Kerala School's developments and theorems in theYukti-bhāṣā.[149][150]It has been argued that certain ideas of calculus like infinite series and Taylor series of some trigonometric functions were transmitted to Europe in the 16th century[6]viaJesuitmissionaries and traders who were active around the ancient port ofMuzirisat the time and, as a result, directly influenced later European developments in analysis and calculus.[151]However, other scholars argue that the Kerala School did not formulate a systematic theory ofdifferentiationandintegration, and that there is not any direct evidence of their results being transmitted outside Kerala.[152][153][154][155]
TheIslamic Empireestablished across theMiddle East,Central Asia,North Africa,Iberia, and in parts ofIndiain the 8th century made significant contributions towards mathematics. Although most Islamic texts on mathematics were written inArabic, they were not all written byArabs, since much like the status of Greek in the Hellenistic world, Arabic was used as the written language of non-Arab scholars throughout the Islamic world at the time.[156]
In the 9th century, the Persian mathematicianMuḥammad ibn Mūsā al-Khwārizmīwrote an important book on theHindu–Arabic numeralsand one on methods for solving equations. His bookOn the Calculation with Hindu Numerals, written about 825, along with the work ofAl-Kindi, were instrumental in spreadingIndian mathematicsandIndian numeralsto the West. The wordalgorithmis derived from the Latinization of his name, Algoritmi, and the wordalgebrafrom the title of one of his works,Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa’l-muqābala(The Compendious Book on Calculation by Completion and Balancing). He gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots,[157]and he was the first to teach algebra in anelementary formand for its own sake.[158]He also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described asal-jabr.[159]His algebra was also no longer concerned "with a series of problems to be resolved, but anexpositionwhich starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."[160]
In Egypt,Abu Kamilextended algebra to the set ofirrational numbers, accepting square roots and fourth roots as solutions and coefficients to quadratic equations. He also developed techniques used to solve three non-linear simultaneous equations with three unknown variables. One unique feature of his works was trying to find all the possible solutions to some of his problems, including one where he found 2676 solutions.[161]His works formed an important foundation for the development of algebra and influenced later mathematicians, such as al-Karaji and Fibonacci.
Further developments in algebra were made byAl-Karajiin his treatiseal-Fakhri, where he extends the methodology to incorporate integer powers and integer roots of unknown quantities. Something close to aproofbymathematical inductionappears in a book written by Al-Karaji around 1000 AD, who used it to prove thebinomial theorem,Pascal's triangle, and the sum of integralcubes.[162]Thehistorianof mathematics, F. Woepcke,[163]praised Al-Karaji for being "the first who introduced thetheoryofalgebraiccalculus." Also in the 10th century,Abul Wafatranslated the works ofDiophantusinto Arabic.Ibn al-Haythamwas the first mathematician to derive the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. He performed an integration in order to find the volume of aparaboloid, and was able to generalize his result for the integrals ofpolynomialsup to thefourth degree. He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned with any polynomials higher than the fourth degree.[164]
In the late 11th century,Omar KhayyamwroteDiscussions of the Difficulties in Euclid, a book about what he perceived as flaws inEuclid'sElements, especially theparallel postulate. He was also the first to find the general geometric solution tocubic equations. He was also very influential incalendar reform.[165]
In the 13th century,Nasir al-Din Tusi(Nasireddin) made advances inspherical trigonometry. He also wrote influential work on Euclid'sparallel postulate. In the 15th century,Ghiyath al-Kashicomputed the value of π to the 16th decimal place. Kashi also had an algorithm for calculatingnth roots, which was a special case of the methods given many centuries later byRuffiniandHorner.
Other achievements of Muslim mathematicians during this period include the addition of thedecimal pointnotation to theArabic numerals, the discovery of all the moderntrigonometric functionsbesides the sine,al-Kindi's introduction ofcryptanalysisandfrequency analysis, the development ofanalytic geometrybyIbn al-Haytham, the beginning ofalgebraic geometrybyOmar Khayyamand the development of analgebraic notationbyal-Qalasādī.[166]
During the time of theOttoman EmpireandSafavid Empirefrom the 15th century, the development of Islamic mathematics became stagnant.
In thePre-Columbian Americas, theMaya civilizationthat flourished inMexicoandCentral Americaduring the 1st millennium AD developed a unique tradition of mathematics that, due to its geographic isolation, was entirely independent of existing European, Egyptian, and Asian mathematics.[167]Maya numeralsused abaseof twenty, thevigesimalsystem, instead of a base of ten that forms the basis of thedecimalsystem used by most modern cultures.[167]The Maya used mathematics to create theMaya calendaras well as to predict astronomical phenomena in their nativeMaya astronomy.[167]While the concept ofzerohad to be inferred in the mathematics of many contemporary cultures, the Maya developed a standard symbol for it.[167]
Medieval European interest in mathematics was driven by concerns quite different from those of modern mathematicians. One driving element was the belief that mathematics provided the key to understanding the created order of nature, frequently justified byPlato'sTimaeusand the biblical passage (in theBook of Wisdom) that God hadordered all things in measure, and number, and weight.[168]
Boethiusprovided a place for mathematics in the curriculum in the 6th century when he coined the termquadriviumto describe the study of arithmetic, geometry, astronomy, and music. He wroteDe institutione arithmetica, a free translation from the Greek ofNicomachus'sIntroduction to Arithmetic;De institutione musica, also derived from Greek sources; and a series of excerpts from Euclid'sElements. His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works.[169][170]
In the 12th century, European scholars traveled to Spain and Sicilyseeking scientific Arabic texts, includingal-Khwārizmī'sThe Compendious Book on Calculation by Completion and Balancing, translated into Latin byRobert of Chester, and the complete text of Euclid'sElements, translated in various versions byAdelard of Bath,Herman of Carinthia, andGerard of Cremona.[171][172]These and other new sources sparked a renewal of mathematics.
Leonardo of Pisa, now known asFibonacci, serendipitously learned about theHindu–Arabic numeralson a trip to what is nowBéjaïa,Algeriawith his merchant father. (Europe was still usingRoman numerals.) There, he observed a system ofarithmetic(specificallyalgorism) which due to thepositional notationof Hindu–Arabic numerals was much more efficient and greatly facilitated commerce. Leonardo wroteLiber Abaciin 1202 (updated in 1254) introducing the technique to Europe and beginning a long period of popularizing it. The book also brought to Europe what is now known as theFibonacci sequence(known to Indian mathematicians for hundreds of years before that)[173]which Fibonacci used as an unremarkable example.
The 14th century saw the development of new mathematical concepts to investigate a wide range of problems.[174]One important contribution was development of mathematics of local motion.
Thomas Bradwardineproposed that speed (V) increases in arithmetic proportion as the ratio of force (F) to resistance (R) increases in geometric proportion. Bradwardine expressed this by a series of specific examples, but although the logarithm had not yet been conceived, we can express his conclusion anachronistically by writing:
V = log (F/R).[175]Bradwardine's analysis is an example of transferring a mathematical technique used byal-KindiandArnald of Villanovato quantify the nature of compound medicines to a different physical problem.[176]
One of the 14th-centuryOxford Calculators,William Heytesbury, lackingdifferential calculusand the concept oflimits, proposed to measure instantaneous speed "by the path thatwouldbe described by [a body]if... it were moved uniformly at the same degree of speed with which it is moved in that given instant".[179]
Heytesbury and others mathematically determined the distance covered by a body undergoing uniformly accelerated motion (today solved by integration), stating that "a moving body uniformly acquiring or losing that increment [of speed] will traverse in some given time a [distance] completely equal to that which it would traverse if it were moving continuously through the same time with the mean degree [of speed]".[180]
Nicole Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of this relationship, asserting that the area under the line depicting the constant acceleration represented the total distance traveled.[181] In a later mathematical commentary on Euclid's Elements, Oresme made a more detailed general analysis in which he demonstrated that a body will acquire in each successive increment of time an increment of any quality that increases as the odd numbers. Since Euclid had demonstrated that the sums of the odd numbers are the square numbers, the total quality acquired by the body increases as the square of the time.[182]
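In modern notation (a reconstruction rather than the medieval symbolism), the argument rests on the identity

1 + 3 + 5 + ⋯ + (2n − 1) = n²,

so a quality that grows by successive odd-number increments over successive equal intervals of time reaches, after n intervals, a total proportional to n², and hence to the square of the elapsed time. Combined with the mean-degree rule, this gives the familiar result that the distance covered under uniform acceleration from rest in time t is half the final speed multiplied by t.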
During the Renaissance, the development of mathematics and of accounting were intertwined.[183] While there is no direct relationship between algebra and accounting, the teaching of the subjects and the books published were often intended for the children of merchants who were sent to reckoning schools (in Flanders and Germany) or abacus schools (known as abbaco in Italy), where they learned the skills useful for trade and commerce. There is probably no need for algebra in performing bookkeeping operations, but for complex bartering operations or the calculation of compound interest, a basic knowledge of arithmetic was mandatory and knowledge of algebra was very useful.
Piero della Francesca(c. 1415–1492) wrote books onsolid geometryandlinear perspective, includingDe Prospectiva Pingendi(On Perspective for Painting),Trattato d’Abaco (Abacus Treatise), andDe quinque corporibus regularibus(On the Five Regular Solids).[184][185][186]
Luca Pacioli'sSumma de Arithmetica, Geometria, Proportioni et Proportionalità(Italian: "Review ofArithmetic,Geometry,RatioandProportion") was first printed and published inVenicein 1494. It included a 27-page treatise on bookkeeping,"Particularis de Computis et Scripturis"(Italian: "Details of Calculation and Recording"). It was written primarily for, and sold mainly to, merchants who used the book as a reference text, as a source of pleasure from themathematical puzzlesit contained, and to aid the education of their sons.[187]InSumma Arithmetica, Pacioli introduced symbols forplus and minusfor the first time in a printed book, symbols that became standard notation in Italian Renaissance mathematics.Summa Arithmeticawas also the first known book printed in Italy to contain algebra. Pacioli obtained many of his ideas from Piero Della Francesca whom he plagiarized.
In Italy, during the first half of the 16th century,Scipione del FerroandNiccolò Fontana Tartagliadiscovered solutions forcubic equations.Gerolamo Cardanopublished them in his 1545 bookArs Magna, together with a solution for thequartic equations, discovered by his studentLodovico Ferrari. In 1572Rafael Bombellipublished hisL'Algebrain which he showed how to deal with theimaginary quantitiesthat could appear in Cardano's formula for solving cubic equations.
Simon Stevin'sDe Thiende('the art of tenths'), first published in Dutch in 1585, contained the first systematic treatment ofdecimal notationin Europe, which influenced all later work on thereal number system.[188][189]
Driven by the demands of navigation and the growing need for accurate maps of large areas,trigonometrygrew to be a major branch of mathematics.Bartholomaeus Pitiscuswas the first to use the word, publishing hisTrigonometriain 1595. Regiomontanus's table of sines and cosines was published in 1533.[190]
During the Renaissance the desire of artists to represent the natural world realistically, together with the rediscovered philosophy of the Greeks, led artists to study mathematics. They were also the engineers and architects of that time, and so had need of mathematics in any case. The art of painting in perspective, and the developments in geometry that were involved, were studied intensely.[191]
The 17th century saw an unprecedented increase of mathematical and scientific ideas across Europe.Tycho Brahehad gathered a large quantity of mathematical data describing the positions of the planets in the sky. By his position as Brahe's assistant,Johannes Keplerwas first exposed to and seriously interacted with the topic of planetary motion. Kepler's calculations were made simpler by the contemporaneous invention oflogarithmsbyJohn NapierandJost Bürgi. Kepler succeeded in formulating mathematical laws of planetary motion.[192]Theanalytic geometrydeveloped byRené Descartes(1596–1650) allowed those orbits to be plotted on a graph, inCartesian coordinates.
Building on earlier work by many predecessors, Isaac Newton discovered the laws of physics that explain Kepler's Laws, and brought together the concepts now known as calculus. Independently, Gottfried Wilhelm Leibniz developed calculus and much of the calculus notation still in use today. He also refined the binary number system, which is the foundation of nearly all digital (electronic, solid-state, discrete logic) computers.[193]
Science and mathematics had become an international endeavor, which would soon spread over the entire world.[194]
In addition to the application of mathematics to the studies of the heavens,applied mathematicsbegan to expand into new areas, with the correspondence ofPierre de FermatandBlaise Pascal. Pascal and Fermat set the groundwork for the investigations ofprobability theoryand the corresponding rules ofcombinatoricsin their discussions over a game ofgambling. Pascal, with hiswager, attempted to use the newly developing probability theory to argue for a life devoted to religion, on the grounds that even if the probability of success was small, the rewards were infinite. In some sense, this foreshadowed the development ofutility theoryin the 18th and 19th centuries.
The most influential mathematician of the 18th century was arguably Leonhard Euler (1707–1783). His contributions range from founding the study of graph theory with the Seven Bridges of Königsberg problem to standardizing many modern mathematical terms and notations. For example, he named the square root of minus 1 with the symbol i, and he popularized the use of the Greek letter π to stand for the ratio of a circle's circumference to its diameter. He made numerous contributions to the study of topology, graph theory, calculus, combinatorics, and complex analysis, as evidenced by the multitude of theorems and notations named for him.
Other important European mathematicians of the 18th century includedJoseph Louis Lagrange, who did pioneering work in number theory, algebra, differential calculus, and the calculus of variations, andPierre-Simon Laplace, who, in the age ofNapoleon, did important work on the foundations ofcelestial mechanicsand onstatistics.
Throughout the 19th century mathematics became increasingly abstract.[195]Carl Friedrich Gauss(1777–1855) epitomizes this trend.[citation needed]He did revolutionary work onfunctionsofcomplex variables, ingeometry, and on the convergence ofseries, leaving aside his many contributions to science. He also gave the first satisfactory proofs of thefundamental theorem of algebraand of thequadratic reciprocity law.[citation needed]
This century saw the development of the two forms ofnon-Euclidean geometry, where theparallel postulateof Euclidean geometry no longer holds.
The Russian mathematician Nikolai Ivanovich Lobachevsky and his rival, the Hungarian mathematician János Bolyai, independently defined and studied hyperbolic geometry, where uniqueness of parallels no longer holds. In this geometry the angles of a triangle add up to less than 180°. Elliptic geometry was developed later in the 19th century by the German mathematician Bernhard Riemann; here no parallel can be found and the angles of a triangle add up to more than 180°. Riemann also developed Riemannian geometry, which unifies and vastly generalizes the three types of geometry, and he defined the concept of a manifold, which generalizes the ideas of curves and surfaces, and set the mathematical foundations for the theory of general relativity.[196]
The 19th century saw the beginning of a great deal of abstract algebra. Hermann Grassmann in Germany gave a first version of vector spaces, and William Rowan Hamilton in Ireland developed noncommutative algebra.[citation needed] The British mathematician George Boole devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers were 0 and 1. Boolean algebra is the starting point of mathematical logic and has important applications in electrical engineering and computer science.[citation needed][197] Augustin-Louis Cauchy, Bernhard Riemann, and Karl Weierstrass reformulated the calculus in a more rigorous fashion.[citation needed]
Also, for the first time, the limits of mathematics were explored. Niels Henrik Abel, a Norwegian, and Évariste Galois, a Frenchman, proved that there is no general algebraic method for solving polynomial equations of degree greater than four (Abel–Ruffini theorem).[198] Other 19th-century mathematicians used this in their proofs that straightedge and compass alone are not sufficient to trisect an arbitrary angle, to construct the side of a cube with twice the volume of a given cube, or to construct a square equal in area to a given circle.[citation needed] Mathematicians had vainly attempted to solve all of these problems since the time of the ancient Greeks.[citation needed] On the other hand, the limitation of three dimensions in geometry was surpassed in the 19th century through considerations of parameter space and hypercomplex numbers.[citation needed]
Abel and Galois's investigations into the solutions of various polynomial equations laid the groundwork for further developments ofgroup theory, and the associated fields ofabstract algebra. In the 20th century physicists and other scientists have seen group theory as the ideal way to studysymmetry.[citation needed]
In the later 19th century,Georg Cantorestablished the first foundations ofset theory, which enabled the rigorous treatment of the notion of infinity and has become the common language of nearly all mathematics. Cantor's set theory, and the rise ofmathematical logicin the hands ofPeano,L.E.J. Brouwer,David Hilbert,Bertrand Russell, andA.N. Whitehead, initiated a long running debate on thefoundations of mathematics.[citation needed]
The 19th century saw the founding of a number of national mathematical societies: theLondon Mathematical Societyin 1865,[199]theSociété Mathématique de Francein 1872,[200]theCircolo Matematico di Palermoin 1884,[201][202]theEdinburgh Mathematical Societyin 1883,[203]and theAmerican Mathematical Societyin 1888.[204]The first international, special-interest society, theQuaternion Society, was formed in 1899, in the context of avector controversy.[205]
In 1897,Kurt Henselintroducedp-adic numbers.[206]
The 20th century saw mathematics become a major profession. By the end of the century, thousands of new Ph.D.s in mathematics were being awarded every year, and jobs were available in both teaching and industry.[207]An effort to catalogue the areas and applications of mathematics was undertaken inKlein's encyclopedia.[208]
In a 1900 speech to theInternational Congress of Mathematicians,David Hilbertset out a list of23 unsolved problems in mathematics.[209]These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not.[210]
Notable historical conjectures were finally proven. In 1976,Wolfgang HakenandKenneth Appelproved thefour color theorem, controversial at the time for the use of a computer to do so.[211]Andrew Wiles, building on the work of others, provedFermat's Last Theoremin 1995.[212]Paul CohenandKurt Gödelproved that thecontinuum hypothesisisindependentof (could neither be proved nor disproved from) thestandard axioms of set theory.[213]In 1998,Thomas Callister Halesproved theKepler conjecture, also using a computer.[214]
Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 2004 required some 500 journal articles by about 100 authors, filling tens of thousands of pages.[215] A group of French mathematicians, including Jean Dieudonné and André Weil, publishing under the pseudonym "Nicolas Bourbaki", attempted to exposit all of known mathematics as a coherent rigorous whole. The resulting several dozen volumes have had a controversial influence on mathematical education.[216]
Differential geometrycame into its own whenAlbert Einsteinused it ingeneral relativity.[citation needed]Entirely new areas of mathematics such asmathematical logic,topology, andJohn von Neumann'sgame theorychanged the kinds of questions that could be answered by mathematical methods.[citation needed]All kinds ofstructureswere abstracted using axioms and given names likemetric spaces,topological spacesetc.[citation needed]As mathematicians do, the concept of an abstract structure was itself abstracted and led tocategory theory.[citation needed]GrothendieckandSerrerecastalgebraic geometryusingsheaf theory.[citation needed]Large advances were made in the qualitative study ofdynamical systemsthatPoincaréhad begun in the 1890s.[citation needed]Measure theorywas developed in the late 19th and early 20th centuries. Applications of measures include theLebesgue integral,Kolmogorov's axiomatisation ofprobability theory, andergodic theory.[citation needed]Knot theorygreatly expanded.[citation needed]Quantum mechanicsled to the development offunctional analysis,[citation needed]a branch of mathematics that was greatly developed byStefan Banachand his collaborators who formed theLwów School of Mathematics.[217]Other new areas includeLaurent Schwartz'sdistribution theory,fixed point theory,singularity theoryandRené Thom'scatastrophe theory,model theory, andMandelbrot'sfractals.[citation needed]Lie theorywith itsLie groupsandLie algebrasbecame one of the major areas of study.[218]
Non-standard analysis, introduced byAbraham Robinson, rehabilitated theinfinitesimalapproach to calculus, which had fallen into disrepute in favour of the theory oflimits, by extending the field of real numbers to theHyperreal numberswhich include infinitesimal and infinite quantities.[citation needed]An even larger number system, thesurreal numberswere discovered byJohn Horton Conwayin connection withcombinatorial games.[citation needed]
The development and continual improvement ofcomputers, at first mechanical analog machines and then digital electronic machines, allowedindustryto deal with larger and larger amounts of data to facilitate mass production and distribution and communication, and new areas of mathematics were developed to deal with this:Alan Turing'scomputability theory;complexity theory;Derrick Henry Lehmer's use ofENIACto further number theory and theLucas–Lehmer primality test;Rózsa Péter'srecursive function theory;Claude Shannon'sinformation theory;signal processing;data analysis;optimizationand other areas ofoperations research.[citation needed]In the preceding centuries much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance ofdiscreteconcepts and the expansion ofcombinatoricsincludinggraph theory. The speed and data processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil and paper calculations, leading to areas such asnumerical analysisandsymbolic computation.[citation needed]Some of the most important methods andalgorithmsof the 20th century are: thesimplex algorithm, thefast Fourier transform,error-correcting codes, theKalman filterfromcontrol theoryand theRSA algorithmofpublic-key cryptography.[citation needed]
At the same time, deep insights were gained about the limitations of mathematics. In 1929 and 1930, it was proved[by whom?] that the truth or falsity of all statements formulated about the natural numbers plus either addition or multiplication (but not both) was decidable, i.e. could be determined by some algorithm.[citation needed] In 1931, Kurt Gödel found that this was not the case for the natural numbers plus both addition and multiplication; this system, known as Peano arithmetic, was in fact incomplete. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.) A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated.[citation needed]
One of the more colorful figures in 20th-century mathematics wasSrinivasa Aiyangar Ramanujan(1887–1920), an Indianautodidact[219]who conjectured or proved over 3000 theorems[citation needed], including properties ofhighly composite numbers,[220]thepartition function[219]and itsasymptotics,[221]andmock theta functions.[219]He also made major investigations in the areas ofgamma functions,[222][223]modular forms,[219]divergent series,[219]hypergeometric series[219]and prime number theory.[219]
Paul Erdőspublished more papers than any other mathematician in history,[224]working with hundreds of collaborators. Mathematicians have a game equivalent to theKevin Bacon Game, which leads to theErdős numberof a mathematician. This describes the "collaborative distance" between a person and Erdős, as measured by joint authorship of mathematical papers.[225][226]
Emmy Noetherhas been described by many as the most important woman in the history of mathematics.[227]She studied the theories ofrings,fields, andalgebras.[228]
As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century, there were hundreds of specialized areas in mathematics, and theMathematics Subject Classificationwas dozens of pages long.[229]More and moremathematical journalswere published and, by the end of the century, the development of theWorld Wide Webled to online publishing.[citation needed]
In 2000, theClay Mathematics Instituteannounced the sevenMillennium Prize Problems.[230]In 2003 thePoincaré conjecturewas solved byGrigori Perelman(who declined to accept an award, as he was critical of the mathematics establishment).[231]
Most mathematical journals now have online versions as well as print versions, and many online-only journals have been launched.[232][233] There is an increasing drive toward open access publishing, first made popular by arXiv.[citation needed]
There are many observable trends in mathematics, the most notable being that the subject is growing ever larger as computers are ever more important and powerful; the volume of data being produced by science and industry, facilitated by computers, continues expanding exponentially. As a result, there is a corresponding growth in the demand for mathematics to help process and understand thisbig data.[234]Math science careers are also expected to continue to grow, with the USBureau of Labor Statisticsestimating (in 2018) that "employment of mathematical science occupations is projected to grow 27.9 percent from 2016 to 2026."[235]
|
https://en.wikipedia.org/wiki/History_of_mathematics
|
TheInternational Association for Cryptologic Research(IACR) is a non-profit scientific organization that furthers research incryptologyand related fields. The IACR was organized at the initiative ofDavid Chaumat the CRYPTO '82 conference.[1]
The IACR organizes and sponsors three annual flagshipconferences, four area conferences in specific sub-areas of cryptography, and one symposium:[2]
Several other conferences and workshops are held in cooperation with the IACR. Since 2015, selected summer schools have been officially sponsored by the IACR. CRYPTO '83 was the first conference officially sponsored by the IACR.
The IACR publishes theJournal of Cryptology, in addition to the proceedings of its conference and workshops. The IACR also maintains theCryptology ePrint Archive, an online repository of cryptologic research papers aimed at providing rapid dissemination of results.[3]
Asiacrypt (also ASIACRYPT) is an international conference for cryptography research. The full name of the conference is currently International Conference on the Theory and Application of Cryptology and Information Security, though this has varied over time. Asiacrypt is a conference sponsored by the IACR since 2000, and is one of its three flagship conferences. Asiacrypt is now held annually in November or December at various locations throughoutAsiaandAustralia.
Initially, the Asiacrypt conferences were called AUSCRYPT, as the first one was held inSydney, Australia in 1990, and only later did the community decide that the conference should be held in locations throughout Asia. The first conference to be called "Asiacrypt" was held in 1991 inFujiyoshida,Japan.
Cryptographic Hardware and Embedded Systems (CHES) is a conference for cryptography research,[4] focusing on the implementation of cryptographic algorithms. The two general areas treated are the efficient and the secure implementation of algorithms. Related topics such as random number generators, physical unclonable functions, or special-purpose cryptanalytical machines are also commonly covered at the workshop. It was first held in Worcester, Massachusetts in 1999 at Worcester Polytechnic Institute (WPI). It was founded by Çetin Kaya Koç and Christof Paar. CHES 2000 was also held at WPI; since then, the conference has been held at various locations worldwide. After the two CHES workshops at WPI, the locations in the first ten years were, in chronological order, Paris, San Francisco, Cologne, Boston, Edinburgh, Yokohama, Vienna, Washington, D.C., and Lausanne. Since 2009, CHES has rotated among three continents: Europe, North America, and Asia.[5] The attendance record was set by CHES 2018 in Amsterdam, with about 600 participants.
Eurocrypt (or EUROCRYPT) is a conference for cryptography research. The full name of the conference is now the Annual International Conference on the Theory and Applications of Cryptographic Techniques. Eurocrypt is one of the IACR flagship conferences, along with CRYPTO and ASIACRYPT.
Eurocrypt is held annually in the spring in various locations throughout Europe. The first workshop in the series of conferences that became known as Eurocrypt was held in 1982. In 1984, the name "Eurocrypt" was first used. Generally, proceedings including all papers presented at the conference have been published every year, with two exceptions: in 1983, no proceedings were produced, and in 1986, the proceedings contained only abstracts. Springer has published all the official proceedings, first as part of Advances in Cryptology in the Lecture Notes in Computer Science series.
Fast Software Encryption, often abbreviated FSE, is a workshop for cryptography research, focused onsymmetric-key cryptographywith an emphasis on fast, practical techniques, as opposed to theory. Though "encryption" is part of the conference title, it is not limited to encryption research; research on other symmetric techniques such asmessage authentication codesandhash functionsis often presented there. FSE has been an IACR workshop since 2002, though the first FSE workshop was held in 1993. FSE is held annually in various locations worldwide, mostly in Europe. The dates of the workshop have varied over the years, but recently, it has been held in February.
PKC or Public-Key Cryptography is the short name of the International Workshop on Theory and Practice in Public Key Cryptography (modified as International Conference on Theory and Practice in Public Key Cryptography since 2006).
The Theory of Cryptography Conference, often abbreviated TCC, is an annual conference for theoretical cryptography research.[6]It was first held in 2004 atMIT, and was also held at MIT in 2005, both times in February. TCC became an IACR-sponsored workshop in 2006. The founding steering committee consists of Mihir Bellare, Ivan Damgard, Oded Goldreich, Shafi Goldwasser, Johan Hastad, Russell Impagliazzo, Ueli Maurer, Silvio Micali, Moni Naor, and Tatsuaki Okamoto.
The importance of the theoretical study of cryptography is now widely recognized. This area has contributed much to the practice of cryptography and secure systems, as well as to the theory of computation at large.
The needs of the theoretical cryptography (TC) community are best understood in relation to the two communities between which it resides: the Theory of Computation (TOC) community and the Cryptography/Security community. All three communities have grown in volume in recent years, and this growth makes the hosting of TC by the existing TOC and Crypto conferences quite problematic. Furthermore, the perspectives of TOC and Crypto on TC do not necessarily fit the internal perspective and interests of TC itself. All of this indicates the value of establishing an independent, specialized conference. A dedicated conference not only provides opportunities for research dissemination and interaction, but also helps shape the field, give it a recognizable identity, and communicate its message.
The Real World Crypto Symposium is a conference for applied cryptography research, which was started in 2012 byKenny PatersonandNigel Smart. The winner of theLevchin Prizeis announced at RWC.[7][8]Announcements made at the symposium include the first knownchosen prefix attackon SHA-1[9][10]and the inclusion ofend-to-end encryptioninFacebook Messenger.[11]Also, the introduction of the E4 chip took place at RWC.[12]Flaws in messaging apps such asWhatsAppwere also presented there.[13]
CRYPTO, the International Cryptology Conference, is an academic conference on all aspects of cryptography andcryptanalysis. It is held yearly in August inSanta Barbara,Californiaat theUniversity of California, Santa Barbara.[14]
The first CRYPTO was held in 1981.[15]It was the first major conference on cryptology and was all the more important because relations between government, industry and academia were rather tense. Encryption was considered a very sensitive subject and the coming together of delegates from different countries was unheard-of at the time. The initiative for the formation of the IACR came during CRYPTO '82, and CRYPTO '83 was the first IACR sponsored conference.
The IACR Fellows Program (FIACR) has been established as an honor to bestow upon its exceptional members. There are currently 104 IACR Fellows.[16]
|
https://en.wikipedia.org/wiki/International_Association_for_Cryptologic_Research
|
Incomputingandcomputer programming,exception handlingis the process of responding to the occurrence ofexceptions– anomalous or exceptional conditions requiring special processing – during theexecutionof aprogram. In general, an exception breaks the normal flow of execution and executes a pre-registeredexception handler; the details of how this is done depend on whether it is ahardwareorsoftwareexception and how the software exception is implemented.
Exceptions are defined by different layers of a computer system, and the typical layers are CPU-definedinterrupts,operating system(OS)-definedsignals,programming language-defined exceptions. Each layer requires different ways of exception handling although they may be interrelated, e.g. a CPU interrupt could be turned into an OS signal. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted.
The definition of an exception is based on the observation that eachprocedurehas aprecondition, a set of circumstances for which it will terminate "normally".[1]An exception handling mechanism allows the procedure toraise an exception[2]if this precondition is violated,[1]for example if the procedure has been called on an abnormal set of arguments. The exception handling mechanism thenhandlesthe exception.[3]
The precondition, and the definition of exception, issubjective. The set of "normal" circumstances is defined entirely by the programmer, e.g. the programmer may deem division by zero to be undefined, hence an exception, or devise some behavior such as returning zero or a special "ZERO DIVIDE" value (circumventing the need for exceptions).[4]Common exceptions include an invalid argument (e.g. value is outside of thedomain of a function),[5]an unavailable resource (like a missing file,[6]a network drive error,[7]or out-of-memory errors[8]), or that the routine has detected a normal condition that requires special handling, e.g., attention, end of file.[9]Social pressure is a major influence on the scope of exceptions and use of exception-handling mechanisms, i.e. "examples of use, typically found in core libraries, and code examples in technical books, magazine articles, and online discussion forums, and in an organization’s code standards".[10]
Exception handling solves thesemipredicate problem, in that the mechanism distinguishes normal return values from erroneous ones. In languages without built-in exception handling such as C, routines would need to signal the error in some other way, such as the commonreturn codeanderrnopattern.[11]Taking a broad view, errors can be considered to be a proper subset of exceptions,[12]and explicit error mechanisms such as errno can be considered (verbose) forms of exception handling.[11]The term "exception" is preferred to "error" because it does not imply that anything is wrong - a condition viewed as an error by one procedure or programmer may not be viewed that way by another.[13]
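The contrast can be sketched in a short TypeScript example (an illustration only; the function names and the error type are invented for this sketch): the first function signals a missing key through an errno-like status value that the caller must remember to check, while the second raises an exception that interrupts the normal flow and unwinds to a handler.

// Style 1: return-code/errno convention; the caller must check the status explicitly.
type Status = "OK" | "NOT_FOUND";

function lookupWithStatus(table: Map<string, number>, key: string): { status: Status; value?: number } {
  const value = table.get(key);
  return value === undefined ? { status: "NOT_FOUND" } : { status: "OK", value };
}

// Style 2: exception convention; an absent key breaks the normal flow of execution.
class KeyNotFoundError extends Error {}

function lookupOrThrow(table: Map<string, number>, key: string): number {
  const value = table.get(key);
  if (value === undefined) throw new KeyNotFoundError(`no entry for ${key}`);
  return value; // the normal return value can never be confused with an error signal
}

const ages = new Map([["alice", 30]]);

const result = lookupWithStatus(ages, "bob");
if (result.status === "NOT_FOUND") console.log("caller-side check: bob is missing");

try {
  lookupOrThrow(ages, "bob");
} catch (e) {
  if (e instanceof KeyNotFoundError) console.log("handler: " + e.message);
}

In the exception style, every possible number is a legitimate return value of lookupOrThrow, and the error channel is kept entirely separate, which is exactly how the semipredicate problem is avoided.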
The term "exception" may be misleading because its connotation of "anomaly" indicates that raising an exception is abnormal or unusual,[14]when in fact raising the exception may be a normal and usual situation in the program.[13]For example, suppose a lookup function for anassociative arraythrows an exception if the key has no value associated. Depending on context, this "key absent" exception may occur much more often than a successful lookup.[15]
The first hardware exception handling was found in theUNIVAC Ifrom 1951.
On arithmetic overflow, it executed two instructions at address 0, which could transfer control or fix up the result.[16] Software exception handling developed in the 1960s and 1970s. It was subsequently widely adopted by many programming languages from the 1980s onward.
There is no clear consensus as to the exact meaning of an exception with respect to hardware.[17]From the implementation point of view, it is handled identically to aninterrupt: the processor halts execution of the current program, looks up theinterrupt handlerin theinterrupt vector tablefor that exception or interrupt condition, saves state, and switches control.
Exception handling in theIEEE 754floating-pointstandard refers in general to exceptional conditions and defines an exception as "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal one or more exceptions by invoking the default or, if explicitly requested, a language-defined alternate handling."
By default, an IEEE 754 exception is resumable and is handled by substituting a predefined value for different exceptions, e.g. infinity for a divide by zero exception, and providingstatus flagsfor later checking of whether the exception occurred (seeC99 programming languagefor a typical example of handling of IEEE 754 exceptions). An exception-handling style enabled by the use of status flags involves: first computing an expression using a fast, direct implementation; checking whether it failed by testing status flags; and then, if necessary, calling a slower, more numerically robust, implementation.[18]
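TypeScript does not expose IEEE 754 status flags, so the following sketch illustrates only the general fast-path/check/fallback pattern rather than the standard's flag mechanism; the function names are invented, and the check is performed on the result value instead of a status flag.

// Fast path: a direct sum of squares, which can overflow to Infinity for large inputs.
function normFast(x: number, y: number): number {
  return Math.sqrt(x * x + y * y);
}

// Robust fallback: scale by the largest magnitude so the intermediate squares stay in range.
function normRobust(x: number, y: number): number {
  const m = Math.max(Math.abs(x), Math.abs(y));
  if (m === 0) return 0;
  return m * Math.sqrt((x / m) ** 2 + (y / m) ** 2);
}

function norm(x: number, y: number): number {
  const fast = normFast(x, y); // 1. fast, direct implementation
  return Number.isFinite(fast) // 2. check whether it failed
    ? fast
    : normRobust(x, y); // 3. slower, more numerically robust implementation
}

console.log(norm(3, 4)); // 5, via the fast path
console.log(norm(1e200, 1e200)); // about 1.414e200, via the fallback (the fast path overflows)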
The IEEE 754 standard uses the term "trapping" to refer to the calling of a user-supplied exception-handling routine on exceptional conditions, and is an optional feature of the standard. The standard recommends several usage scenarios for this, including the implementation of non-default pre-substitution of a value followed by resumption, to concisely handleremovable singularities.[18][19][20]
The default IEEE 754 exception handling behaviour of resumption following pre-substitution of a default value avoids the risks inherent in changing flow of program control on numerical exceptions. For example, the 1996Cluster spacecraftlaunch ended in a catastrophic explosion due in part to theAdaexception handling policy of aborting computation on arithmetic error.William Kahanclaims the default IEEE 754 exception handling behavior would have prevented this.[19]
Front-end web developmentframeworks, such asReactandVue, have introduced error handling mechanisms where errors propagate up theuser interface(UI) component hierarchy, in a way that is analogous to how errors propagate up the call stack in executing code.[21][22]Here the error boundary mechanism serves as an analogue to the typical try-catch mechanism. Thus a component can ensure that errors from its child components are caught and handled, and not propagated up to parent components.
For example, in Vue, a component can catch errors from its child components by implementing the errorCaptured lifecycle hook. When such a component wraps a child component in its markup, an error produced by the child component is caught and handled by the parent component instead of propagating further up the tree, as in the sketch below.[23]
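A minimal sketch of such an error boundary, written with Vue 3's Options API in TypeScript (the child component, file name, and messages are invented for this illustration, and it assumes a Vue build that includes the runtime template compiler):

import { defineComponent } from "vue";
import ChildWidget from "./ChildWidget.vue"; // hypothetical child component that may throw

export default defineComponent({
  name: "ErrorBoundary",
  components: { ChildWidget },
  data() {
    return { failed: false };
  },
  errorCaptured(err, _instance, info) {
    // Invoked when any descendant component throws; record it and switch to a fallback UI.
    console.error(`error captured during ${info}:`, err);
    this.failed = true;
    return false; // returning false stops the error from propagating to ancestor components
  },
  template: `
    <div>
      <p v-if="failed">Something went wrong in the child component.</p>
      <ChildWidget v-else />
    </div>
  `,
});

Any error thrown by ChildWidget while rendering, or in one of its lifecycle hooks, then reaches errorCaptured rather than crashing the whole application.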
|
https://en.wikipedia.org/wiki/Exception_handling
|
Negation as failure (NAF, for short) is a non-monotonic inference rule in logic programming, used to derive not p (i.e. that p is assumed not to hold) from failure to derive p. Note that not p can be different from the statement ¬p of the logical negation of p, depending on the completeness of the inference algorithm and thus also on the formal logic system.
Negation as failure has been an important feature of logic programming since the earliest days of bothPlannerandProlog. In Prolog, it is usually implemented using Prolog's extralogical constructs.
More generally, this kind of negation is known asweak negation,[1][2]in contrast with the strong (i.e. explicit, provable) negation.
In Planner, negation as failure could be implemented as follows:
which says that if an exhaustive search to prove p fails, then assert ¬p.[3] This states that the proposition p shall be assumed to be "not true" in any subsequent processing. However, since Planner was not based on a logical model, a logical interpretation of the preceding remains obscure.
In pure Prolog, NAF literals of the form not p can occur in the body of clauses and can be used to derive other NAF literals. For example, given only the four clauses
NAF derives not s, not r and p, as well as t and q.
The semantics of NAF remained an open issue until 1978, when Keith Clark showed that it is correct with respect to the completion of the logic program, where, loosely speaking, "only" and ← are interpreted as "if and only if", written as "iff" or "≡".
For example, the completion of the four clauses above is
The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are negated and negation on the right-hand side is distributed down to atomic formulae. For example, to show not p, NAF simulates reasoning with the equivalences
In the non-propositional case, the completion needs to be augmented with equality axioms, to formalize the assumption that individuals with distinct names are distinct. NAF simulates this by failure of unification. For example, given only the two clauses

p(a) ←
p(b) ←
NAF derives not p(c).
The completion of the program is

∀X (p(X) ↔ (X = a ∨ X = b))
augmented with unique names axioms and domain closure axioms.
The completion semantics is closely related both tocircumscriptionand to theclosed world assumption.
The completion semantics justifies interpreting the result not p of a NAF inference as the classical negation ¬p of p. However, in 1987, Michael Gelfond showed that it is also possible to interpret not p literally as "p can not be shown", "p is not known" or "p is not believed", as in autoepistemic logic. The autoepistemic interpretation was developed further by Gelfond and Lifschitz in 1988, and is the basis of answer set programming.
The autoepistemic semantics of a pure Prolog program P with NAF literals is obtained by "expanding" P with a set of ground (variable-free) NAF literals Δ that is stable in the sense that

Δ = { not p : p is not implied by P ∪ Δ }.
In other words, a set of assumptions Δ about what can not be shown isstableif and only if Δ is the set of all sentences that truly can not be shown from the program P expanded by Δ. Here, because of the simple syntax of pure Prolog programs, "implied by" can be understood very simply as derivability using modus ponens and universal instantiation alone.
A program can have zero, one or more stable expansions. For example, the single clause

p ← not p

has no stable expansions. The single clause

p ← not q

has exactly one stable expansion Δ = {not q}. The pair of clauses

p ← not q
q ← not p

has exactly two stable expansions, Δ1 = {not p} and Δ2 = {not q}.
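Because such propositional programs are tiny, the stability condition can be checked by brute force. The following TypeScript sketch (an illustration written for this example; it assumes rules whose bodies contain only NAF literals, as in the two-clause program above, so a single round of rule application suffices for derivability) enumerates every candidate Δ for that program and reports the stable ones.

type Atom = "p" | "q";
const atoms: Atom[] = ["p", "q"];

// Each rule derives its head when every "not a" in its body is assumed in Δ.
const rules: { head: Atom; nafBody: Atom[] }[] = [
  { head: "p", nafBody: ["q"] }, // p <- not q
  { head: "q", nafBody: ["p"] }, // q <- not p
];

// Atoms derivable from the program expanded with the assumptions Δ
// (Δ is represented as the set of atoms a for which "not a" is assumed).
function derivable(delta: Set<Atom>): Set<Atom> {
  const shown = new Set<Atom>();
  for (const r of rules) {
    if (r.nafBody.every((a) => delta.has(a))) shown.add(r.head);
  }
  return shown;
}

// Δ is stable iff Δ = { not a : a is not derivable from P expanded with Δ }.
function isStable(delta: Set<Atom>): boolean {
  const shown = derivable(delta);
  const induced = atoms.filter((a) => !shown.has(a));
  return induced.length === delta.size && induced.every((a) => delta.has(a));
}

const candidates: Atom[][] = [[], ["p"], ["q"], ["p", "q"]];
for (const c of candidates) {
  if (isStable(new Set(c))) {
    console.log("stable expansion:", c.map((a) => "not " + a).join(", "));
  }
}
// Prints exactly two stable expansions: "not p" and "not q".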
The autoepistemic interpretation of NAF can be combined with classical negation, as in extended logic programming andanswer set programming. Combining the two negations, it is possible to express, for example
|
https://en.wikipedia.org/wiki/Negation_as_failure
|
Fault toleranceis the ability of asystemto maintain proper operation despite failures or faults in one or more of its components. This capability is essential forhigh-availability,mission-critical, or evenlife-critical systems.
Fault tolerance specifically refers to a system's capability to handle faults without any degradation or downtime. In the event of an error, end-users remain unaware of any issues. Conversely, a system that experiences errors with some interruption in service or graceful degradation of performance is termed 'resilient'. In resilience, the system adapts to the error, maintaining service but acknowledging a certain impact on performance.
Typically, fault tolerance describescomputer systems, ensuring the overall system remains functional despitehardwareorsoftwareissues. Non-computing examples include structures that retain their integrity despite damage fromfatigue,corrosionor impact.
The first known fault-tolerant computer wasSAPO, built in 1951 inCzechoslovakiabyAntonín Svoboda.[1]: 155Its basic design wasmagnetic drumsconnected via relays, with a voting method ofmemory errordetection (triple modular redundancy). Several other machines were developed along this line, mostly for military use. Eventually, they separated into three distinct categories:
Most of the development in the so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s,[2]in preparation forProject Apolloand other research aspects. NASA's first machine went into aspace observatory, and their second attempt, the JSTAR computer, was used inVoyager. This computer had a backup of memory arrays to use memory recovery methods and thus it was called the JPL Self-Testing-And-Repairing computer. It could detect its own errors and fix them or bring up redundant modules as needed. The computer is still working, as of early 2022.[3]
Hyper-dependable computers were pioneered mostly byaircraftmanufacturers,[1]: 210nuclear powercompanies, and therailroad industryin the United States. These entities needed computers with massive amounts of uptime that wouldfail gracefullyenough during a fault to allow continued operation, while relying on constant human monitoring of computer output to detect faults. Again, IBM developed the first computer of this kind for NASA for guidance ofSaturn Vrockets, but later onBNSF,Unisys, andGeneral Electricbuilt their own.[1]: 223
In the 1970s, much work happened in the field.[4][5][6]For instance,F14 CADChadbuilt-in self-testand redundancy.[7]
In general, the early efforts at fault-tolerant designs were focused mainly on internal diagnosis, where a fault would indicate something was failing and a worker could replace it. SAPO, for instance, had a method by which faulty memory drums would emit a noise before failure.[8]Later efforts showed that to be fully effective, the system had to be self-repairing and diagnosing – isolating a fault and then implementing a redundant backup while alerting a need for repair. This is known as N-model redundancy, where faults cause automatic fail-safes and a warning to the operator, and it is still the most common form of level one fault-tolerant design in use today.
Voting was another initial method, as discussed above, with multiple redundant backups operating constantly and checking each other's results. For example, if four components reported an answer of 5 and one component reported an answer of 6, the other four would "vote" that the fifth component was faulty and have it taken out of service. This is called M out of N majority voting.
Historically, the trend has been to move away from N-model and toward M out of N, as the complexity of systems and the difficulty of ensuring the transitive state from fault-negative to fault-positive did not disrupt operations.
Tandem Computers, in 1976[9]andStratuswere among the first companies specializing in the design of fault-tolerant computer systems foronline transaction processing.
Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new parts while the system is still operational (in computing known ashot swapping). Such a system implemented with a single backup is known assingle point tolerantand represents the vast majority of fault-tolerant systems. In such systems themean time between failuresshould be long enough for the operators to have sufficient time to fix the broken devices (mean time to repair) before the backup also fails. It is helpful if the time between failures is as long as possible, but this is not specifically required in a fault-tolerant system.
Fault tolerance is notably successful in computer applications.Tandem Computersbuilt their entire business on such machines, which used single-point tolerance to create theirNonStopsystems withuptimesmeasured in years.
Fail-safearchitectures may encompass also the computer software, for example by processreplication.
Data formats may also be designed to degrade gracefully.HTMLfor example, is designed to beforward compatible, allowingWeb browsersto ignore new and unsupported HTML entities without causing the document to be unusable. Additionally, some sites, including popular platforms such as Twitter (until December 2020), provide an optional lightweight front end that does not rely onJavaScriptand has aminimallayout, to ensure wideaccessibilityandoutreach, such as ongame consoleswith limited web browsing capabilities.[10][11]
A highly fault-tolerant system might continue at the same level of performance even though one or more components have failed. For example, a building with a backup electrical generator will provide the same voltage to wall outlets even if the grid power fails.
A system that is designed tofail safe, or fail-secure, orfail gracefully, whether it functions at a reduced level or fails completely, does so in a way that protects people, property, or data from injury, damage, intrusion, or disclosure. In computers, a program might fail-safe by executing agraceful exit(as opposed to an uncontrolled crash) to prevent data corruption after an error occurs.[12]A similar distinction is made between "failing well" and "failing badly".
A system designed to experiencegraceful degradation, or tofail soft(used in computing, similar to "fail safe"[13]) operates at a reduced level of performance after some component fails. For example, if grid power fails, a building may operate lighting at reduced levels or elevators at reduced speeds. In computing, if insufficient network bandwidth is available to stream an online video, a lower-resolution version might be streamed in place of the high-resolution version.Progressive enhancementis another example, where web pages are available in a basic functional format for older, small-screen, or limited-capability web browsers, but in an enhanced version for browsers capable of handling additional technologies or that have a larger display.
In fault-tolerant computer systems, programs that are consideredrobustare designed to continue operation despite an error, exception, or invalid input, instead of crashing completely.Software brittlenessis the opposite of robustness.Resilient networkscontinue to transmit data despite the failure of some links or nodes.Resilient buildings and infrastructureare likewise expected to prevent complete failure in situations like earthquakes, floods, or collisions.
A system with highfailure transparencywill alert users that a component failure has occurred, even if it continues to operate with full performance, so that failure can be repaired or imminent complete failure anticipated.[14]Likewise, afail-fastcomponent is designed to report at the first point of failure, rather than generating reports when downstream components fail. This allows easier diagnosis of the underlying problem, and may prevent improper operation in a broken state.
Asingle fault conditionis a situation where one means forprotectionagainst ahazardis defective. If a single fault condition results unavoidably in another single fault condition, the two failures are considered one single fault condition.[15]A source offers the following example:
Asingle-fault conditionis a condition when a single means for protection against hazard in equipment is defective or a single external abnormal condition is present, e.g. short circuit between the live parts and the applied part.[16]
Providing fault-tolerant design for every component is normally not an option. Associated redundancy brings a number of penalties: increases in weight, size, power consumption, and cost, as well as time to design, verify, and test. Therefore, a number of questions have to be examined to determine which components should be fault tolerant:[17] how critical the component is, how likely the component is to fail, and how expensive it is to make the component fault tolerant.
An example of a component that passes all the tests is a car's occupant restraint system. While theprimaryoccupant restraint system is not normally thought of, it isgravity. If the vehicle rolls over or undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the occupants during such an accident is absolutely critical to safety, so the first test is passed. Accidents causing occupant ejection were quite common beforeseat belts, so the second test is passed. The cost of a redundant restraint method like seat belts is quite low, both economically and in terms of weight and space, so the third test is passed. Therefore, adding seat belts to all vehicles is an excellent idea. Other "supplemental restraint systems", such asairbags, are more expensive and so pass that test by a smaller margin.
Another excellent and long-term example of this principle being put into practice is the braking system: whilst the actual brake mechanisms are critical, they are not particularly prone to sudden (rather than progressive) failure, and are in any case necessarily duplicated to allow even and balanced application of brake force to all wheels. It would also be prohibitively costly to further double-up the main components and they would add considerable weight. However, the similarly critical systems for actuating the brakes under driver control are inherently less robust, generally using a cable (can rust, stretch, jam, snap) or hydraulic fluid (can leak, boil and develop bubbles, absorb water and thus lose effectiveness). Thus in most modern cars the footbrake hydraulic brake circuit is diagonally divided to give two smaller points of failure, the loss of either only reducing brake power by 50% and not causing as much dangerous brakeforce imbalance as a straight front-back or left-right split, and should the hydraulic circuit fail completely (a relatively very rare occurrence), there is a failsafe in the form of the cable-actuated parking brake that operates the otherwise relatively weak rear brakes, but can still bring the vehicle to a safe halt in conjunction with transmission/engine braking so long as the demands on it are in line with normal traffic flow. The cumulatively unlikely combination of total foot brake failure with the need for harsh braking in an emergency will likely result in a collision, but still one at lower speed than would otherwise have been the case.
In comparison with the foot pedal activated service brake, the parking brake itself is a less critical item, and unless it is being used as a one-time backup for the footbrake, will not cause immediate danger if it is found to be nonfunctional at the moment of application. Therefore, no redundancy is built into it per se (and it typically uses a cheaper, lighter, but less hardwearing cable actuation system), and it can suffice, if this happens on a hill, to use the footbrake to momentarily hold the vehicle still, before driving off to find a flat piece of road on which to stop. Alternatively, on shallow gradients, the transmission can be shifted into Park, Reverse or First gear, and the transmission lock / engine compression used to hold it stationary, as there is no need for them to include the sophistication to first bring it to a halt.
On motorcycles, a similar level of fail-safety is provided by simpler methods; first, the front and rear brake systems are entirely separate, regardless of their method of activation (that can be cable, rod or hydraulic), allowing one to fail entirely while leaving the other unaffected. Second, the rear brake is relatively strong compared to its automotive cousin, being a powerful disc on some sports models, even though the usual intent is for the front system to provide the vast majority of braking force; as the overall vehicle weight is more central, the rear tire is generally larger and has better traction, so that the rider can lean back to put more weight on it, therefore allowing more brake force to be applied before the wheel locks. On cheaper, slower utility-class machines, even if the front wheel should use a hydraulic disc for extra brake force and easier packaging, the rear will usually be a primitive, somewhat inefficient, but exceptionally robust rod-actuated drum, thanks to the ease of connecting the footpedal to the wheel in this way and, more importantly, the near impossibility of catastrophic failure even if the rest of the machine, like a lot of low-priced bikes after their first few years of use, is on the point of collapse from neglected maintenance.
The basic characteristics of fault tolerance require:
In addition, fault-tolerant systems are characterized in terms of both planned service outages and unplanned service outages. These are usually measured at the application level and not just at a hardware level. The figure of merit is calledavailabilityand is expressed as a percentage. For example, afive ninessystem would statistically provide 99.999% availability.
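As a worked example of what that figure means, the allowed downtime for a five nines system over a year of about 365.25 days is

(1 − 0.99999) × 365.25 days × 24 hours × 60 minutes ≈ 5.26 minutes per year.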
Fault-tolerant systems are typically based on the concept of redundancy.
Research into the kinds of tolerances needed for critical systems involves a large amount of interdisciplinary work. The more complex the system, the more carefully all possible interactions have to be considered and prepared for. Considering the importance of high-value systems in transport,public utilitiesand the military, the field of topics that touch on research is very wide: it can include such obvious subjects assoftware modelingand reliability, orhardware design, to arcane elements such asstochasticmodels,graph theory, formal or exclusionary logic,parallel processing, remotedata transmission, and more.[18]
Spare components address the first fundamental characteristic of fault tolerance in three ways:
All implementations ofRAID,redundant array of independent disks, except RAID 0, are examples of a fault-tolerantstorage devicethat usesdata redundancy.
Alockstepfault-tolerant machine uses replicated elements operating in parallel. At any time, all the replications of each element should be in the same state. The same inputs are provided to eachreplication, and the same outputs are expected. The outputs of the replications are compared using a voting circuit. A machine with two replications of each element is termeddual modular redundant(DMR). The voting circuit can then only detect a mismatch and recovery relies on other methods. A machine with three replications of each element is termedtriple modular redundant(TMR). The voting circuit can determine which replication is in error when a two-to-one vote is observed. In this case, the voting circuit can output the correct result, and discard the erroneous version. After this, the internal state of the erroneous replication is assumed to be different from that of the other two, and the voting circuit can switch to a DMR mode. This model can be applied to any larger number of replications.
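The voting logic described above can be sketched in TypeScript (an illustration only; real lockstep voters are implemented in hardware and compare signals on every clock cycle). Three replica outputs are compared, a two-to-one majority both masks the fault and identifies the erroneous replica, and once a replica has been discarded the voter can only continue in a mismatch-detecting DMR mode.

type VoteResult<T> =
  | { mode: "TMR"; value: T; faultyReplica?: number }
  | { mode: "DMR"; value?: T; mismatch: boolean };

function voteTMR<T>(outputs: [T, T, T]): VoteResult<T> {
  const [a, b, c] = outputs;
  if (a === b && b === c) return { mode: "TMR", value: a }; // all replicas agree
  if (a === b) return { mode: "TMR", value: a, faultyReplica: 2 }; // replica 2 is outvoted
  if (a === c) return { mode: "TMR", value: a, faultyReplica: 1 };
  if (b === c) return { mode: "TMR", value: b, faultyReplica: 0 };
  return { mode: "DMR", mismatch: true }; // no majority at all: only an error can be signalled
}

// After discarding a faulty replica, only mismatch detection remains possible.
function voteDMR<T>(outputs: [T, T]): VoteResult<T> {
  const [a, b] = outputs;
  return a === b ? { mode: "DMR", value: a, mismatch: false } : { mode: "DMR", mismatch: true };
}

console.log(voteTMR([5, 5, 6])); // masks the fault: value 5, replica 2 flagged as faulty
console.log(voteDMR([5, 5]));    // agreement; a future mismatch can only be detected, not corrected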
Lockstepfault-tolerant machines are most easily made fullysynchronous, with each gate of each replication making the same state transition on the same edge of the clock, and the clocks to the replications being exactly in phase. However, it is possible to build lockstep systems without this requirement.
Bringing the replications into synchrony requires making their internal stored states the same. They can be started from a fixed initial state, such as the reset state. Alternatively, the internal state of one replica can be copied to another replica.
One variant of DMR ispair-and-spare. Two replicated elements operate in lockstep as a pair, with a voting circuit that detects any mismatch between their operations and outputs a signal indicating that there is an error. Another pair operates exactly the same way. A final circuit selects the output of the pair that does not proclaim that it is in error. Pair-and-spare requires four replicas rather than the three of TMR, but has been used commercially.
Failure-oblivious computing is a technique that enables computer programs to continue executing despite errors.[19] The technique can be applied in different contexts. It can handle invalid memory reads by returning a manufactured value to the program,[20] which in turn makes use of the manufactured value and ignores the former memory value it tried to access. This is in great contrast to typical memory checkers, which inform the program of the error or abort the program.
The approach has performance costs: because the technique rewrites code to insert dynamic checks for address validity, execution time will increase by 80% to 500%.[21]
Recovery shepherding is a lightweight technique to enable software programs to recover from otherwise fatal errors such as null pointer dereference and divide by zero.[22] Compared to the failure-oblivious computing technique, recovery shepherding works directly on the compiled program binary and does not require recompiling the program.
It uses thejust-in-timebinary instrumentationframeworkPin. It attaches to the application process when an error occurs, repairs the execution,
tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. It does not interfere with the normal execution of the program and therefore incurs negligible overhead.[22]For 17 of 18 systematically collected real world null-dereference and divide-by-zero errors, a prototype implementation enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs.[22]
Thecircuit breaker design patternis a technique to avoid catastrophic failures in distributed systems.
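A minimal sketch of the idea in TypeScript (an illustration, not any particular library's API): the breaker counts consecutive failures of a remote call, "opens" after a threshold so that further calls fail fast without touching the struggling service, and "half-opens" after a timeout to probe whether the service has recovered.

class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  private state: "closed" | "open" | "half-open" = "closed";

  constructor(private maxFailures = 3, private resetMs = 10_000) {}

  async call<T>(action: () => Promise<T>): Promise<T> {
    if (this.state === "open") {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error("circuit open: failing fast"); // protect the downstream service
      }
      this.state = "half-open"; // allow a single trial request
    }
    try {
      const result = await action();
      this.failures = 0;
      this.state = "closed"; // a success closes the breaker again
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === "half-open" || this.failures >= this.maxFailures) {
        this.state = "open";
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

// Usage: wrap a flaky remote call (fetchUserProfile is a hypothetical function).
// const breaker = new CircuitBreaker();
// const profile = await breaker.call(() => fetchUserProfile("alice"));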
Redundancy is the provision of functional capabilities that would be unnecessary in a fault-free environment.[23]This can consist of backup components that automatically "kick in" if one component fails. For example, large cargo trucks can lose a tire without any major consequences. They have many tires, and no one tire is critical (with the exception of the front tires, which are used to steer, but generally carry less load, each and in total, than the other four to 16, so are less likely to fail).
The idea of incorporating redundancy in order to improve the reliability of a system was pioneered by John von Neumann in the 1950s.[24]
Two kinds of redundancy are possible:[25]space redundancy and time redundancy. Space redundancy provides additional components, functions, or data items that are unnecessary for fault-free operation. Space redundancy is further classified into hardware, software and information redundancy, depending on the type of redundant resources added to the system. In time redundancy the computation or data transmission is repeated and the result is compared to a stored copy of the previous result. The current terminology for this kind of testing is In-Service Fault Tolerance Testing, or ISFTT for short.
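A minimal sketch of time redundancy in Python (the retry policy is an illustrative assumption): the computation is repeated, the two results are compared, and a mismatch triggers a retry before the operation is declared faulty.

```python
def with_time_redundancy(compute, *args, retries=1):
    """Run a computation twice (time redundancy) and compare the results.

    A mismatch indicates a transient fault; the computation is retried
    up to `retries` extra times before giving up.
    """
    for _ in range(retries + 1):
        first = compute(*args)
        second = compute(*args)          # repeated computation
        if first == second:              # compare with the stored first result
            return first
    raise RuntimeError("persistent mismatch: possible permanent fault")

print(with_time_redundancy(lambda x, y: x + y, 2, 3))   # 5
```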
Fault-tolerant design's advantages are obvious, while many of its disadvantages are not:
There is a difference between fault tolerance and systems that rarely have problems. For instance, the Western Electric crossbar systems had failure rates of two hours per forty years, and therefore were highly fault resistant. But when a fault did occur they still stopped operating completely, and therefore were not fault tolerant.
|
https://en.wikipedia.org/wiki/Fault_tolerance
|
Internet of things (IoT) describes devices with sensors, processing ability, software and other technologies that connect and exchange data with other devices and systems over the Internet or other communication networks.[1][2][3][4][5]The IoT encompasses electronics, communication, and computer science engineering. "Internet of things" has been considered a misnomer because devices do not need to be connected to the public internet; they only need to be connected to a network[6]and be individually addressable.[7][8]
The field has evolved due to the convergence of multiple technologies, including ubiquitous computing, commodity sensors, and increasingly powerful embedded systems, as well as machine learning.[9]Older fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation), independently and collectively enable the Internet of things.[10]In the consumer market, IoT technology is most synonymous with "smart home" products, including devices and appliances (lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common ecosystems and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems.[11]
There are a number of concerns about the risks in the growth of IoT technologies and products, especially in the areas ofprivacyandsecurity, and consequently there have been industry and government moves to address these concerns, including the development of international and local standards, guidelines, and regulatory frameworks.[12]Because of their interconnected nature, IoT devices are vulnerable to security breaches and privacy concerns. At the same time, the way these devices communicate wirelessly creates regulatory ambiguities, complicating jurisdictional boundaries of the data transfer.[13]
Around 1972, for its remote site use, Stanford Artificial Intelligence Laboratory developed a computer-controlled vending machine, adapted from a machine rented from Canteen Vending, which sold for cash or, through a computer terminal (Teletype Model 33 KSR),[14]on credit.[15]Products included, at least, beer, yogurt, and milk.[15][14]It was called the Prancing Pony, after the name of the room, named after an inn in Tolkien's Lord of the Rings,[15][16]as each room at Stanford Artificial Intelligence Laboratory was named after a place in Middle Earth.[17]A successor version still operates in the Computer Science Department at Stanford, with both hardware and software having been updated.[15]
In 1982,[18]an early concept of a network-connected smart device was built as an Internet interface for sensors installed in the Carnegie Mellon University Computer Science Department's departmental Coca-Cola vending machine, which, supplied by graduate student volunteers, provided a temperature model and an inventory status,[19][20]inspired by the computer-controlled vending machine in the Prancing Pony room at Stanford Artificial Intelligence Laboratory.[21]First accessible only on the CMU campus, it became the first ARPANET-connected appliance.[22][23]
Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century", as well as academic venues such as UbiComp and PerCom produced the contemporary vision of the IoT.[24][25]In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories".[26]Between 1993 and 1997, several companies proposed solutions like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill Joy envisioned device-to-device communication as a part of his "Six Webs" framework, presented at the World Economic Forum at Davos in 1999.[27]
The concept of the "Internet of things" and the term itself first appeared in a speech by Peter T. Lewis to the Congressional Black Caucus Foundation 15th Annual Legislative Weekend in Washington, D.C., published in September 1985. According to Lewis, "The Internet of Things, or IoT, is the integration of people, processes and technology with connectable devices and sensors to enable remote monitoring, status, manipulation and evaluation of trends of such devices."[28]
The term "Internet of things" was coined independently byKevin AshtonofProcter & Gamble, later ofMIT'sAuto-ID Center, in 1999,[29]though he prefers the phrase "Internetforthings".[30]At that point, he viewedradio-frequency identification(RFID) as essential to the Internet of things,[31]which would allowcomputersto manage all individual things.[32][33][34]The main theme of the Internet of things is to embed short-range mobile transceivers in various gadgets and daily necessities to enable new forms of communication between people and things, and between things themselves.[35]
In 2004 Cornelius "Pete" Peterson, CEO of NetSilicon, predicted that, "The next era of information technology will be dominated by [IoT] devices, and networked devices will ultimately gain in popularity and significance to the extent that they will far exceed the number of networked computers and workstations." Peterson believed that medical devices and industrial controls would become dominant applications of the technology.[36]
Defining the Internet of things as "simply the point in time when more 'things or objects' were connected to the Internet than people",Cisco Systemsestimated that the IoT was "born" between 2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010.[37]
The extensive set of applications for IoT devices[38]is often divided into consumer, commercial, industrial, and infrastructure spaces.[39][40]
A growing portion of IoT devices is created for consumer use, including connected vehicles,home automation,wearable technology, connected health, and appliances with remote monitoring capabilities.[41]
IoT devices are a part of the larger concept ofhome automation, which can include lighting, heating and air conditioning, media and security systems and camera systems.[42][43]Long-term benefits could include energy savings by automatically ensuring lights and electronics are turned off or by making the residents in the home aware of usage.[44]
A smart home or automated home could be based on a platform or hubs that control smart devices and appliances.[45]For instance, usingApple'sHomeKit, manufacturers can have their home products and accessories controlled by an application iniOSdevices such as theiPhoneand theApple Watch.[46][47]This could be a dedicated app or iOS native applications such asSiri.[48]This can be demonstrated in the case of Lenovo's Smart Home Essentials, which is a line of smart home devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi bridge.[48]There are also dedicated smart home hubs that are offered as standalone platforms to connect different smart home products. These include theAmazon Echo,Google Home, Apple'sHomePod, and Samsung'sSmartThings Hub.[49]In addition to the commercial systems, there are many non-proprietary, open source ecosystems, including Home Assistant, OpenHAB and Domoticz.[50]
One key application of a smart home is toassist the elderly and disabled. These home systems use assistive technology to accommodate an owner's specific disabilities.[51]Voice controlcan assist users with sight and mobility limitations while alert systems can be connected directly tocochlear implantsworn by hearing-impaired users.[52]They can also be equipped with additional safety features, including sensors that monitor for medical emergencies such as falls orseizures.[53]Smart home technology applied in this way can provide users with more freedom and a higher quality of life.[51]
The term "Enterprise IoT" refers to devices used in business and corporate settings.
The Internet of Medical Things (IoMT) is an application of the IoT for medical and health-related purposes, data collection and analysis for research, and monitoring.[54][55][56][57][58]The IoMT has been referenced as "Smart Healthcare",[59]as the technology for creating a digitized healthcare system, connecting available medical resources and healthcare services.[60][61]
IoT devices can be used to enable remote health monitoring and emergency notification systems. These health monitoring devices can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants, such as pacemakers, Fitbit electronic wristbands, or advanced hearing aids.[62]Some hospitals have begun implementing "smart beds" that can detect when they are occupied and when a patient is attempting to get up. A smart bed can also adjust itself to ensure appropriate pressure and support are applied to the patient without the manual interaction of nurses.[54]A 2015 Goldman Sachs report indicated that healthcare IoT devices "can save the United States more than $300 billion in annual healthcare expenditures by increasing revenue and decreasing cost."[63]Moreover, the use of mobile devices to support medical follow-up led to the creation of 'm-health', which uses analyzed health statistics.[64]
Specialized sensors can also be equipped within living spaces to monitor the health and general well-being of senior citizens, while also ensuring that proper treatment is being administered and assisting people to regain lost mobility via therapy as well.[65]These sensors create a network ofintelligent sensorsthat are able to collect, process, transfer, and analyze valuable information in different environments, such as connecting in-home monitoring devices to hospital-based systems.[59]Other consumer devices to encourage healthy living, such as connected scales orwearable heart monitors, are also a possibility with the IoT.[66]End-to-end health monitoring IoT platforms are also available for antenatal and chronic patients, helping one manage health vitals and recurring medication requirements.[67]
Advances in plastic and fabric electronics fabrication methods have enabled ultra-low-cost, use-and-throw IoMT sensors. These sensors, along with the required RFID electronics, can be fabricated on paper or e-textiles for wirelessly powered disposable sensing devices.[68]Applications have been established for point-of-care medical diagnostics, where portability and low system complexity are essential.[69]
As of 2018[update]IoMT was being applied in the clinical laboratory industry.[56]
IoMT in the insurance industry provides access to better and new types of dynamic information. This includes sensor-based solutions such as biosensors, wearables, connected health devices, and mobile apps to track customer behavior. This can lead to more accurate underwriting and new pricing models.[70]
The application of the IoT in healthcare plays a fundamental role in managingchronic diseasesand in disease prevention and control. Remote monitoring is made possible through the connection of powerful wireless solutions. The connectivity enables health practitioners to capture patient's data and apply complex algorithms in health data analysis.[71]
The IoT can assist in the integration of communications, control, and information processing across various transportation systems. Application of the IoT extends to all aspects of transportation systems (i.e., the vehicle,[72]the infrastructure, and the driver or user). Dynamic interaction between these components of a transport system enables inter- and intra-vehicular communication,[73]smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, safety, and road assistance.[62][74]
In vehicular communication systems, vehicle-to-everything communication (V2X) consists of three main components: vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I) and vehicle-to-pedestrian communication (V2P). V2X is the first step to autonomous driving and connected road infrastructure.[75]
IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings (e.g., public and private, industrial, institutions, or residential)[62]inhome automationandbuilding automationsystems. In this context, three main areas are being covered in literature:[76]
Also known as IIoT, industrial IoT devices acquire and analyze data from connected equipment, operational technology (OT), locations, and people. Combined withoperational technology(OT) monitoring devices, IIoT helps regulate and monitor industrial systems.[77]Also, the same implementation can be carried out for automated record updates of asset placement in industrial storage units as the size of the assets can vary from a small screw to the whole motor spare part, and misplacement of such assets can cause a loss of manpower time and money.
The IoT can connect various manufacturing devices equipped with sensing, identification, processing, communication, actuation, and networking capabilities.[78]Network control and management ofmanufacturing equipment,assetand situation management, or manufacturingprocess controlallow IoT to be used for industrial applications and smart manufacturing.[79]IoT intelligent systems enable rapid manufacturing and optimization of new products and rapid response to product demands.[62]
Digital control systemsto automate process controls, operator tools and service information systems to optimize plant safety and security are within the purview of theIIoT.[80]IoT can also be applied to asset management viapredictive maintenance,statistical evaluation, and measurements to maximize reliability.[81]Industrial management systems can be integrated withsmart grids, enabling energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by networked sensors.[62]
In addition to general manufacturing, IoT is also used for processes in the industrialization of construction.[82]
There are numerous IoT applications in farming[83]such as collecting data on temperature, rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to automate farming techniques, make informed decisions to improve quality and quantity, minimize risk and waste, and reduce the effort required to manage crops. For example, farmers can now monitor soil temperature and moisture from afar and even apply IoT-acquired data to precision fertilization programs.[84]The overall goal is that data from sensors, coupled with the farmer's knowledge and intuition about his or her farm, can help increase farm productivity, and also help reduce costs.
In August 2018,Toyota Tsushobegan a partnership withMicrosoftto createfish farmingtools using theMicrosoft Azureapplication suite for IoT technologies related to water management. Developed in part by researchers fromKindai University, the water pump mechanisms useartificial intelligenceto count the number of fish on aconveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide.[85]The FarmBeats project[86]from Microsoft Research that uses TV white space to connect farms is also a part of the Azure Marketplace now.[87]
IoT devices are in use to monitor the environments and systems of boats and yachts.[88]Many pleasure boats are left unattended for days in summer, and months in winter so such devices provide valuable early alerts of boat flooding, fire, and deep discharge of batteries. The use of global Internet data networks such asSigfox, combined with long-life batteries, and microelectronics allows the engine rooms, bilge, and batteries to be constantly monitored and reported to connected Android & Apple applications for example.
Monitoring and controlling operations of sustainable urban and rural infrastructures like bridges, railway tracks and on- and offshore wind farms is a key application of the IoT.[80]The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. The IoT can benefit the construction industry by cost-saving, time reduction, better quality workday, paperless workflow and increase in productivity. It can help in taking faster decisions and saving money in Real-TimeData Analytics. It can also be used for scheduling repair and maintenance activities efficiently, by coordinating tasks between different service providers and users of these facilities.[62]IoT devices can also be used to control critical infrastructure like bridges to provide access to ships. The usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management and emergency response coordination, andquality of service,up-timesand reduce costs of operation in all infrastructure-related areas.[89]Even areas such as waste management can benefit.[90]
There are several planned or ongoing large-scale deployments of the IoT, to enable better management of cities and systems. For example,Songdo, South Korea, the first fully equipped and wiredsmart city, is gradually being built,[when?]with approximately 70 percent of the business district completed as of June 2018[update]. Much of the city, the first of its kind, is planned to be wired and automated to operate with little or no human intervention.[91]
In 2014 another deployment was underway in Santander, Spain. For this deployment, two approaches have been adopted. This city of 180,000 inhabitants has already seen 18,000 downloads of its city smartphone app. The app is connected to 10,000 sensors that enable services like parking search and environmental monitoring. City context information is used in this deployment so as to benefit merchants through a spark deals mechanism based on city behavior that aims at maximizing the impact of each notification.[92]
Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou Knowledge City;[93]work on improving air and water quality, reducing noise pollution, and increasing transportation efficiency in San Jose, California;[94]and smart traffic management in western Singapore.[95]Using its RPMA (Random Phase Multiple Access) technology, San Diego–basedIngenuhas built a nationwide public network[96]for low-bandwidthdata transmissions using the same unlicensed 2.4 gigahertz spectrum as Wi-Fi. Ingenu's "Machine Network" covers more than a third of the US population across 35 major cities including San Diego and Dallas.[97]French company,Sigfox, commenced building anUltra Narrowbandwireless data network in theSan Francisco Bay Areain 2014, the first business to achieve such a deployment in the U.S.[98][99]It subsequently announced it would set up a total of 4000base stationsto cover a total of 30 cities in the U.S. by the end of 2016, making it the largest IoT network coverage provider in the country thus far.[100][101]Cisco also participates in smart cities projects. Cisco has deployed technologies for Smart Wi-Fi, Smart Safety & Security,Smart Lighting, Smart Parking, Smart Transports, Smart Bus Stops, Smart Kiosks, Remote Expert for Government Services (REGS) and Smart Education in the five km area in the city of Vijaywada, India.[102][103]
Another example of a large deployment is the one completed by New York Waterways in New York City to connect all the city's vessels and be able to monitor them live 24/7. The network was designed and engineered byFluidmesh Networks, a Chicago-based company developing wireless networks for critical applications. The NYWW network is currently providing coverage on the Hudson River, East River, and Upper New York Bay. With the wireless network in place, NY Waterway is able to take control of its fleet and passengers in a way that was not previously possible. New applications can include security, energy and fleet management, digital signage, public Wi-Fi, paperless ticketing and others.[104]
Significant numbers of energy-consuming devices (e.g. lamps, household appliances, motors, pumps, etc.) already integrate Internet connectivity, which can allow them to communicate with utilities not only to balance power generation but also to help optimize energy consumption as a whole.[62]These devices allow for remote control by users, or central management via a cloud-based interface, and enable functions like scheduling (e.g., remotely powering on or off heating systems, controlling ovens, changing lighting conditions etc.).[62]The smart grid is a utility-side IoT application; systems gather and act on energy and power-related information to improve the efficiency of the production and distribution of electricity.[105]Using advanced metering infrastructure (AMI) Internet-connected devices, electric utilities not only collect data from end-users, but also manage distribution automation devices like transformers.[62]
Environmental monitoring applications of the IoT typically use sensors to assist in environmental protection[106]by monitoring air or water quality,[107]atmospheric or soil conditions,[108]and can even include areas like monitoring the movements of wildlife and their habitats.[109]Development of resource-constrained devices connected to the Internet also means that other applications like earthquake or tsunami early-warning systems can also be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile.[62]It has been argued that the standardization that IoT brings to wireless sensing will revolutionize this area.[110]
Another example of integrating the IoT is the Living Lab, which integrates and combines research and innovation processes, established within a public-private-people partnership.[111]Between 2006 and January 2024, there were over 440 Living Labs (though not all are currently active)[112]that use the IoT to collaborate and share knowledge between stakeholders to co-create innovative and technological products. For companies to implement and develop IoT services[113]for smart cities, they need to have incentives. Governments play key roles in smart city projects, as changes in policy will help cities to implement the IoT, which provides effectiveness, efficiency, and accuracy of the resources that are being used. For instance, the government provides tax incentives and cheap rent, improves public transport, and offers an environment where start-up companies, creative industries, and multinationals may co-create, share a common infrastructure and labor markets, and take advantage of locally embedded technologies, production processes, and transaction costs.[111]
The Internet of Military Things (IoMT) is the application of IoT technologies in the military domain for the purposes of reconnaissance, surveillance, and other combat-related objectives. It is heavily influenced by the future prospects of warfare in an urban environment and involves the use of sensors, munitions, vehicles, robots, human-wearable biometrics, and other smart technology that is relevant on the battlefield.[114]
One example of an IoT device used in the military is the Xaver 1000 system, developed by Israel's Camero Tech as the latest in the company's line of "through wall imaging systems". The Xaver line uses millimeter wave (MMW) radar, or radar in the range of 30–300 gigahertz. It is equipped with an AI-based life-target tracking system as well as its own 3D 'sense-through-the-wall' technology.[115]
The Internet of Battlefield Things (IoBT) is a project initiated and executed by the U.S. Army Research Laboratory (ARL) that focuses on the basic science related to the IoT that enhances the capabilities of Army soldiers.[116]In 2017, ARL launched the Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), establishing a working collaboration between industry, university, and Army researchers to advance the theoretical foundations of IoT technologies and their applications to Army operations.[117][118]
The Ocean of Things project is a DARPA-led program designed to establish an Internet of things across large ocean areas for the purposes of collecting, monitoring, and analyzing environmental and vessel activity data. The project entails the deployment of about 50,000 floats that house a passive sensor suite that autonomously detects and tracks military and commercial vessels as part of a cloud-based network.[119]
There are several applications of smart or active packaging in which a QR code or NFC tag is affixed on a product or its packaging. The tag itself is passive; however, it contains a unique identifier (typically a URL) which enables a user to access digital content about the product via a smartphone.[120]Strictly speaking, such passive items are not part of the Internet of things, but they can be seen as enablers of digital interactions.[121]The term "Internet of Packaging" has been coined to describe applications in which unique identifiers are used to automate supply chains and are scanned on a large scale by consumers to access digital content.[122]Authentication of the unique identifiers, and thereby of the product itself, is possible via a copy-sensitive digital watermark or copy detection pattern when scanning a QR code,[123]while NFC tags can encrypt communication.[124]
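As a small illustration of such a passive tag, the sketch below (using the third-party Python qrcode package; the product URL is a hypothetical placeholder) encodes a unique product identifier into a QR code image that a smartphone can resolve to digital content.

```python
import qrcode   # third-party package: pip install qrcode[pil]

# Hypothetical unique identifier for a single product instance.
product_url = "https://example.com/p/00012345678905?serial=ABC123"

# Encode the URL in a QR code; scanning it with a smartphone
# resolves to digital content about that specific product.
img = qrcode.make(product_url)
img.save("product_tag.png")
```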
The IoT's most significant trend in recent years[when?]is the growth of devices connected and controlled via the Internet.[125]The wide range of applications for IoT technology means that the specifics can be very different from one device to the next, but there are basic characteristics shared by most.
The IoT creates opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertions.[126][127][128][129]
IoT Analytics reported there were 16.6 billion IoT devices connected in 2023. In 2020, the same firm projected there would be 30 billion devices connected by 2025. As of October, 2024, there are around 17 billion.[130][131][132]
Ambient intelligence and autonomous control are not part of the original concept of the Internet of things. Ambient intelligence and autonomous control do not necessarily require Internet structures, either. However, there is a shift in research (by companies such as Intel) to integrate the concepts of the IoT and autonomous control, with initial outcomes towards this direction considering objects as the driving force for autonomous IoT.[133]An approach in this context is deep reinforcement learning, where most IoT systems provide a dynamic and interactive environment.[134]Training an agent (i.e., an IoT device) to behave smartly in such an environment cannot be addressed by conventional machine learning algorithms such as supervised learning. With a reinforcement learning approach, a learning agent can sense the environment's state (e.g., sensing home temperature), perform actions (e.g., turn HVAC on or off) and learn by maximizing the accumulated rewards it receives in the long term.
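A minimal tabular Q-learning sketch of the thermostat example above (the discretized states, toy environment dynamics, and reward are simplifying assumptions, not a production controller):

```python
import random

# States: discretized room temperature; actions: heating off (0) or on (1).
temps = list(range(15, 31))            # 15..30 degrees Celsius
actions = [0, 1]
q = {(t, a): 0.0 for t in temps for a in actions}

def step(temp, action):
    """Toy environment: heating raises the temperature, idling lets it fall."""
    new_temp = min(max(temp + (1 if action else -1), temps[0]), temps[-1])
    reward = -abs(new_temp - 21)       # comfort target of 21 degrees Celsius
    return new_temp, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
temp = random.choice(temps)
for _ in range(10000):
    if random.random() < epsilon:
        a = random.choice(actions)                       # explore
    else:
        a = max(actions, key=lambda x: q[(temp, x)])     # exploit
    new_temp, r = step(temp, a)
    best_next = max(q[(new_temp, x)] for x in actions)
    q[(temp, a)] += alpha * (r + gamma * best_next - q[(temp, a)])
    temp = new_temp

# Learned policy: heat when below the comfort target, idle when above it.
print({t: max(actions, key=lambda x: q[(t, x)]) for t in temps})
```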
IoT intelligence can be offered at three levels: IoT devices, Edge/Fog nodes, and cloud computing.[135]The need for intelligent control and decisions at each level depends on the time sensitivity of the IoT application. For example, an autonomous vehicle's camera needs to make real-time obstacle detection to avoid an accident. This fast decision-making would not be possible by transferring data from the vehicle to cloud instances and returning the predictions back to the vehicle. Instead, all the operations should be performed locally in the vehicle. Integrating advanced machine learning algorithms, including deep learning, into IoT devices is an active research area to make smart objects closer to reality. Moreover, it is possible to get the most value out of IoT deployments through analyzing IoT data, extracting hidden information, and predicting control decisions. A wide variety of machine learning techniques have been used in the IoT domain, ranging from traditional methods such as regression, support vector machines, and random forests to advanced ones such as convolutional neural networks, LSTM, and variational autoencoders.[136][135]
In the future, the Internet of things may be a non-deterministic and open network in which auto-organized or intelligent entities (web services, SOA components) and virtual objects (avatars) will be interoperable and able to act independently (pursuing their own objectives or shared ones) depending on the context, circumstances or environments. Autonomous behavior through the collection and reasoning of context information, as well as the object's ability to detect changes in the environment (faults affecting sensors) and introduce suitable mitigation measures, constitutes a major research trend,[137]clearly needed to provide credibility to the IoT technology. Modern IoT products and solutions in the marketplace use a variety of different technologies to support such context-aware automation, but more sophisticated forms of intelligence are required to permit sensor units and intelligent cyber-physical systems to be deployed in real environments.[138]
IoT system architecture, in its simplistic view, consists of three tiers: Tier 1: Devices, Tier 2: the Edge Gateway, and Tier 3: the Cloud.[139]Devices include networked things, such as the sensors and actuators found in IoT equipment, particularly those that use protocols such as Modbus, Bluetooth, Zigbee, or proprietary protocols to connect to an Edge Gateway.[139]The Edge Gateway layer consists of sensor data aggregation systems called Edge Gateways that provide functionality such as pre-processing of the data, securing connectivity to the cloud, using systems such as WebSockets, the event hub, and, in some cases, edge analytics or fog computing.[139]The Edge Gateway layer is also required to give a common view of the devices to the upper layers to facilitate easier management. The final tier includes the cloud application built for IoT using the microservices architecture, which is usually polyglot and inherently secure using HTTPS/OAuth. It includes various database systems that store sensor data, such as time series databases or asset stores using backend data storage systems (e.g. Cassandra, PostgreSQL).[139]The cloud tier in most cloud-based IoT systems features an event queuing and messaging system that handles the communication that transpires in all tiers.[140]Some experts classify the three tiers in the IoT system as edge, platform, and enterprise, connected by the proximity network, access network, and service network, respectively.[141]
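A minimal sketch of the Tier 2 role described above (the function names, sensor range, and cloud upload are illustrative assumptions): an edge gateway pre-processes raw device readings and forwards only a compact summary toward the cloud tier.

```python
import json
import statistics

def pre_process(readings):
    """Edge-gateway aggregation: discard obvious outliers and summarize."""
    valid = [r for r in readings if -40.0 <= r <= 85.0]   # sensor's rated range
    return {
        "count": len(valid),
        "mean": round(statistics.mean(valid), 2),
        "max": max(valid),
        "min": min(valid),
    }

def send_to_cloud(summary):
    # Placeholder for the Tier 3 upload (e.g. an HTTPS POST to a cloud API).
    print("uploading:", json.dumps(summary))

raw = [21.2, 21.4, 150.0, 21.3, 21.5]   # one spurious reading from a device
send_to_cloud(pre_process(raw))
```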
Building on the Internet of things, theweb of thingsis an architecture for the application layer of the Internet of things looking at the convergence of data from IoT devices into Web applications to create innovative use-cases. In order to program and control the flow of information in the Internet of things, a predicted architectural direction is being calledBPM Everywherewhich is a blending of traditional process management with process mining and special capabilities to automate the control of large numbers of coordinated devices.[citation needed]
The Internet of things requires huge scalability in the network space to handle the surge of devices.[142]IETF 6LoWPAN can be used to connect devices to IP networks. With billions of devices[143]being added to the Internet space, IPv6 will play a major role in handling the network layer scalability. IETF's Constrained Application Protocol, ZeroMQ, and MQTT can provide lightweight data transport. In practice many groups of IoT devices are hidden behind gateway nodes and may not have unique addresses. Also the vision of everything-interconnected is not needed for most applications as it is mainly the data which need interconnecting at a higher layer.[citation needed]
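As an illustration of lightweight transport, the sketch below publishes a small sensor reading over MQTT using the third-party paho-mqtt package; the broker address and topic are hypothetical placeholders, and client API details vary between library versions.

```python
import json
import paho.mqtt.publish as publish   # third-party: pip install paho-mqtt

# Broker address and topic are hypothetical placeholders.
payload = json.dumps({"device": "sensor-42", "temp_c": 21.4})
publish.single("home/livingroom/temperature", payload, qos=1,
               hostname="broker.example.com", port=1883)
```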
Fog computing is a viable alternative to prevent such a large burst of data flow through the Internet.[144]The edge devices' computation power to analyze and process data is extremely limited. Limited processing power is a key attribute of IoT devices as their purpose is to supply data about physical objects while remaining autonomous. Heavy processing requirements use more battery power, harming IoT's ability to operate. Scalability is easy because IoT devices simply supply data through the Internet to a server with sufficient processing power.[145]
Decentralized Internet of things, or decentralized IoT, is a modified IoT which utilizes fog computing to handle and balance requests of connected IoT devices in order to reduce loading on the cloud servers and improve responsiveness for latency-sensitive IoT applications like vital signs monitoring of patients, vehicle-to-vehicle communication of autonomous driving, and critical failure detection of industrial devices.[146]Performance is improved, especially for huge IoT systems with millions of nodes.[147]
Conventional IoT is connected via a mesh network and led by a major head node (centralized controller).[148]The head node decides how data is created, stored, and transmitted.[149]In contrast, decentralized IoT attempts to divide IoT systems into smaller divisions.[150]The head node authorizes partial decision-making power to lower-level sub-nodes under a mutually agreed policy.[151]
Some approaches to decentralized IoT attempt to address the limited bandwidth and hashing capacity of battery-powered or wireless IoT devices via blockchain.[152][153][154]
In semi-open or closed loops (i.e., value chains, whenever a global finality can be settled) the IoT will often be considered and studied as acomplex system[155]due to the huge number of different links, interactions between autonomous actors, and its capacity to integrate new actors. At the overall stage (full open loop) it will likely be seen as achaoticenvironment (sincesystemsalways have finality).
As a practical approach, not all elements on the Internet of things run in a global, public space. Subsystems are often implemented to mitigate the risks of privacy, control and reliability. For example, domestic robotics (domotics) running inside a smart home might only share data within and be available via a local network.[156]Managing and controlling a highly dynamic ad hoc network of IoT things/devices is a tough task with the traditional network architecture; software-defined networking (SDN) provides an agile, dynamic solution that can cope with the special requirements of the diversity of innovative IoT applications.[157][158]
The exact scale of the Internet of things is unknown, with figures of billions or trillions often quoted at the beginning of IoT articles. In 2015 there were 83 million smart devices in people's homes. This number was expected to grow to 193 million devices by 2020.[43][159]In 2023, the number of connected IoT devices reached 16.6 billion.[160]
The figure of online capable devices grew 31% from 2016 to 2017 to reach 8.4 billion.[161]
In the Internet of things, the precise geographic location of a thing—and also the precise geographic dimensions of a thing—can be critical.[162]To date, however, facts about a thing, such as its location in time and space, have been less critical to track because the person processing the information can decide whether or not that information is important to the action being taken, and if so, add the missing information (or decide to not take the action). (Note that some things on the Internet of things will be sensors, and sensor location is usually important.[163]) The GeoWeb and Digital Earth are applications that become possible when things can become organized and connected by location. However, the challenges that remain include the constraints of variable spatial scales, the need to handle massive amounts of data, and indexing for fast search and neighbour operations. On the Internet of things, if things are able to take actions on their own initiative, this human-centric mediation role is eliminated. Thus, the time-space context that we as humans take for granted must be given a central role in this information ecosystem. Just as standards play a key role on the Internet and the Web, geo-spatial standards will play a key role on the Internet of things.[164][165]
Many IoT devices have the potential to take a piece of this market. Jean-Louis Gassée (a member of Apple's initial alumni team, and BeOS co-founder) has addressed this topic in an article on Monday Note,[166]where he predicts that the most likely problem will be what he calls the "basket of remotes" problem, where we'll have hundreds of applications to interface with hundreds of devices that don't share protocols for speaking with one another.[166]For improved user interaction, some technology leaders are joining forces to create standards for communication between devices to solve this problem. Others are turning to the concept of predictive interaction of devices, "where collected data is used to predict and trigger actions on the specific devices" while making them work together.[167]
Social Internet of things (SIoT) is a new kind of IoT that focuses on the importance of social interaction and relationships between IoT devices.[168]SIoT is a pattern of how cross-domain IoT devices enable application-to-application communication and collaboration without human intervention in order to serve their owners with autonomous services,[169]and this can only be realized with low-level architectural support from both IoT software and hardware engineering.[170]
IoT defines a device with an identity like a citizen in a community and connects it to the Internet to provide services to its users.[171]SIoT defines a social network for IoT devices only, in which they interact with each other for different goals that serve humans.[172]
SIoT is different from the original IoT in terms of its collaboration characteristics. IoT is passive; it was set to serve dedicated purposes with existing IoT devices in a predetermined system. SIoT is active; it is programmed and managed by AI to serve unplanned purposes with a mix and match of potential IoT devices from different systems that benefit its users.[173]
IoT devices built with sociability will broadcast their abilities or functionalities, and at the same time discover, share information with, monitor, navigate, and group with other IoT devices in the same or nearby networks, realizing SIoT[174]and facilitating useful service compositions in order to help their users proactively in everyday life, especially during emergencies.[175]
There are many technologies that enable the IoT. Crucial to the field is the network used to communicate between devices of an IoT installation, a role that several wireless or wired technologies may fulfill:[182][183][184]
The original idea of the Auto-ID Center is based on RFID tags and distinct identification through the Electronic Product Code. This has evolved into objects having an IP address or URI.[185]An alternative view, from the world of the Semantic Web,[186]focuses instead on making all things (not just those electronic, smart, or RFID-enabled) addressable by the existing naming protocols, such as URI. The objects themselves do not converse, but they may now be referred to by other agents, such as powerful centralised servers acting for their human owners.[187]Integration with the Internet implies that devices will use an IP address as a distinct identifier. Due to the limited address space of IPv4 (which allows for 4.3 billion different addresses), objects in the IoT will have to use the next generation of the Internet protocol (IPv6) to scale to the extremely large address space required.[188][189][190]Internet-of-things devices additionally will benefit from the stateless address auto-configuration present in IPv6,[191]as it reduces the configuration overhead on the hosts,[189]and the IETF 6LoWPAN header compression. To a large extent, the future of the Internet of things will not be possible without the support of IPv6; consequently, the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT in the future.[190]
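The scale difference between the two address spaces, and the idea of combining a routed prefix with an interface identifier, can be sketched with Python's standard ipaddress module (the prefix and interface identifier below are illustrative documentation values):

```python
import ipaddress

print(2 ** 32)    # IPv4: 4,294,967,296 addresses (~4.3 billion)
print(2 ** 128)   # IPv6: about 3.4 x 10^38 addresses

# A device can form a stateless-autoconfiguration-style address by combining
# a routed prefix with an interface identifier (illustrative values only).
prefix = ipaddress.ip_network("2001:db8::/64")        # documentation prefix
interface_id = 0x0212_34ff_fe56_789a                  # e.g. EUI-64 derived
address = ipaddress.ip_address(int(prefix.network_address) + interface_id)
print(address)    # 2001:db8::212:34ff:fe56:789a
```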
Different technologies have different roles in a protocol stack. Below is a simplified[notes 1]presentation of the roles of several popular communication technologies in IoT applications:
This is a list oftechnical standardsfor the IoT, most of which areopen standards, and thestandards organizationsthat aspire to successfully setting them.[206][207]
The GS1 Digital Link standard,[211]first released in August 2018, allows the use of QR codes, GS1 DataMatrix, RFID and NFC to enable various types of business-to-business as well as business-to-consumer interactions.
Some scholars and activists argue that the IoT can be used to create new models ofcivic engagementif device networks can be open to user control and inter-operable platforms.Philip N. Howard, a professor and author, writes that political life in both democracies and authoritarian regimes will be shaped by the way the IoT will be used for civic engagement. For that to happen, he argues that any connected device should be able to divulge a list of the "ultimate beneficiaries" of its sensor data and that individual citizens should be able to add new organisations to the beneficiary list. In addition, he argues that civil society groups need to start developing their IoT strategy for making use of data and engaging with the public.[213]
One of the key drivers of the IoT is data. The success of the idea of connecting devices to make them more efficient is dependent upon access to and storage & processing of data. For this purpose, companies working on the IoT collect data from multiple sources and store it in their cloud network for further processing. This leaves the door wide open for privacy and security dangers and single point vulnerability of multiple systems.[214]The other issues pertain to consumer choice and ownership of data[215]and how it is used. Though still in their infancy, regulations and governance regarding these issues of privacy, security, and data ownership continue to develop.[216][217][218]IoT regulation depends on the country. Some examples of legislation that is relevant to privacy and data collection are: the US Privacy Act of 1974, OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data of 1980, and the EU Directive 95/46/EC of 1995.[219]
Current regulatory environment:
A report published by theFederal Trade Commission(FTC) in January 2015 made the following three recommendations:[220]
However, the FTC stopped at just making recommendations for now. According to an FTC analysis, the existing framework, consisting of theFTC Act, theFair Credit Reporting Act, and theChildren's Online Privacy Protection Act, along with developing consumer education and business guidance, participation in multi-stakeholder efforts and advocacy to other agencies at the federal, state and local level, is sufficient to protect consumer rights.[222]
A resolution passed by the Senate in March 2015 is already being considered by Congress.[223]This resolution recognized the need for formulating a National Policy on IoT and the matter of privacy, security and spectrum. Furthermore, to provide an impetus to the IoT ecosystem, in March 2016, a bipartisan group of four Senators proposed a bill, the Developing Innovation and Growing the Internet of Things (DIGIT) Act, to direct the Federal Communications Commission to assess the need for more spectrum to connect IoT devices.
Approved on 28 September 2018, California Senate Bill No. 327[224]goes into effect on 1 January 2020. The bill requires "a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure,"
Several standards for the IoT industry are actually being established relating to automobiles because most concerns arising from use of connected cars apply to healthcare devices as well. In fact, theNational Highway Traffic Safety Administration(NHTSA) is preparing cybersecurity guidelines and a database of best practices to make automotive computer systems more secure.[225]
A recent report from the World Bank examines the challenges and opportunities in government adoption of IoT.[226]These include –
In early December 2021, the U.K. government introduced theProduct Security and Telecommunications Infrastructure bill(PST), an effort to legislate IoT distributors, manufacturers, and importers to meet certaincybersecurity standards. The bill also seeks to improve the security credentials of consumer IoT devices.[227]
The IoT suffers from platform fragmentation, lack of interoperability and common technical standards,[228][229][230][231][232][233][234][excessive citations]a situation where the variety of IoT devices, in terms of both hardware variations and differences in the software running on them, makes it hard to develop applications that work consistently between different, inconsistent technology ecosystems.[1]For example, wireless connectivity for IoT devices can be done using Bluetooth, Wi-Fi, Wi-Fi HaLow, Zigbee, Z-Wave, LoRa, NB-IoT, Cat M1 as well as completely custom proprietary radios – each with its own advantages and disadvantages and unique support ecosystem.[235]
The IoT'samorphous computingnature is also a problem for security, since patches to bugs found in the core operating system often do not reach users of older and lower-price devices.[236][237][238]One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active Android devices vulnerable.[239][240]
Philip N. Howard, a professor and author, writes that the Internet of things offers immense potential for empowering citizens, making government transparent, and broadeninginformation access. Howard cautions, however, that privacy threats are enormous, as is the potential for social control and political manipulation.[241]
Concerns about privacy have led many to consider the possibility thatbig datainfrastructures such as the Internet of things anddata miningare inherently incompatible with privacy.[242]Key challenges of increased digitalization in the water, transport or energy sector are related to privacy andcybersecuritywhich necessitate an adequate response from research and policymakers alike.[243]
WriterAdam Greenfieldclaims that IoT technologies are not only an invasion of public space but are also being used to perpetuate normative behavior, citing an instance of billboards with hidden cameras that tracked the demographics of passersby who stopped to read the advertisement.
The Internet of Things Council compared the increased prevalence of digital surveillance due to the Internet of things to the concept of the panopticon described by Jeremy Bentham in the 18th century.[244]The assertion is supported by the works of French philosophers Michel Foucault and Gilles Deleuze. In Discipline and Punish: The Birth of the Prison, Foucault asserts that the panopticon was a central element of the discipline society developed during the Industrial Era.[245]Foucault also argued that the discipline systems established in factories and schools reflected Bentham's vision of panopticism.[245]In his 1992 paper "Postscripts on the Societies of Control", Deleuze wrote that the discipline society had transitioned into a control society, with the computer replacing the panopticon as an instrument of discipline and control while still maintaining qualities similar to those of panopticism.[246]
Peter-Paul Verbeek, a professor of philosophy of technology at theUniversity of Twente, Netherlands, writes that technology already influences our moral decision making, which in turn affects human agency, privacy and autonomy. He cautions against viewing technology merely as a human tool and advocates instead to consider it as an active agent.[247]
Justin Brookman, of theCenter for Democracy and Technology, expressed concern regarding the impact of the IoT onconsumer privacy, saying that "There are some people in the commercial space who say, 'Oh, big data – well, let's collect everything, keep it around forever, we'll pay for somebody to think about security later.' The question is whether we want to have some sort of policy framework in place to limit that."[248]
Tim O'Reilly believes that the way companies sell IoT devices to consumers is misplaced, disputing the notion that the IoT is about gaining efficiency from putting all kinds of devices online and postulating that the "IoT is really about human augmentation. The applications are profoundly different when you have sensors and data driving the decision-making."[249]
Editorials atWIREDhave also expressed concern, one stating "What you're about to lose is your privacy. Actually, it's worse than that. You aren't just going to lose your privacy, you're going to have to watch the very concept of privacy be rewritten under your nose."[250]
TheAmerican Civil Liberties Union(ACLU) expressed concern regarding the ability of IoT to erode people's control over their own lives. The ACLU wrote that "There's simply no way to forecast how these immense powers – disproportionately accumulating in the hands of corporations seeking financial advantage and governments craving ever more control – will be used. Chances are big data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful corporations and government institutions that are becoming more opaque to us."[251]
In response to rising concerns about privacy and smart technology, in 2007 the British Government stated it would follow formal Privacy by Design principles when implementing its smart metering program. The program would lead to the replacement of traditional power meters with smart power meters, which could track and manage energy usage more accurately.[252]However, the British Computer Society is doubtful these principles were ever actually implemented.[253]In 2009 the Dutch Parliament rejected a similar smart metering program, basing their decision on privacy concerns. The Dutch program was later revised and passed in 2011.[253]
A challenge for producers of IoT applications is toclean, process and interpret the vast amount of data which is gathered by the sensors. There is a solution proposed for the analytics of the information referred to as Wireless Sensor Networks.[254]These networks share data among sensor nodes that are sent to a distributed system for the analytics of the sensory data.[255]
Another challenge is the storage of this bulk data. Depending on the application, there could be high data acquisition requirements, which in turn lead to high storage requirements. In 2013, the Internet was estimated to be responsible for consuming 5% of the total energy produced,[254]and a "daunting challenge to power" IoT devices to collect and even store data still remains.[256]
Data silos, although a common challenge of legacy systems, still commonly occur with the implementation of IoT devices, particularly within manufacturing. As there are many benefits to be gained from IoT and IIoT devices, the means by which the data is stored can present serious challenges without the principles of autonomy, transparency, and interoperability being considered.[257]The challenges do not arise from the device itself, but from the means by which databases and data warehouses are set up. These challenges were commonly identified in manufacturers and enterprises which have begun digital transformation, and are part of the digital foundation, indicating that in order to receive the optimal benefits from IoT devices and for decision making, enterprises will have to first re-align their data storage methods. These challenges were identified by Keller (2021) when investigating the IT and application landscape of I4.0 implementation within German M&E manufacturers.[257]
Security is the biggest concern in adopting Internet of things technology,[258]with concerns that rapid development is happening without appropriate consideration of the profound security challenges involved[259]and the regulatory changes that might be necessary.[260][261]The rapid development of the Internet of Things (IoT) has allowed billions of devices to connect to the network. Due to too many connected devices and the limitation of communication security technology, various security issues gradually appear in the IoT.[262]
Most of the technical security concerns are similar to those of conventional servers, workstations and smartphones.[263]These concerns include using weak authentication, forgetting to change default credentials, unencrypted messages sent between devices,SQL injections,man-in-the-middle attacks, and poor handling of security updates.[264][265]However, many IoT devices have severe operational limitations on the computational power available to them. These constraints often make them unable to directly use basic security measures such as implementing firewalls or using strong cryptosystems to encrypt their communications with other devices[266]- and the low price and consumer focus of many devices makes a robust security patching system uncommon.[267]
Rather than conventional security vulnerabilities, fault injection attacks are on the rise and targeting IoT devices. A fault injection attack is a physical attack on a device to purposefully introduce faults in the system to change the intended behavior. Faults might also happen unintentionally due to environmental noise and electromagnetic fields. There are ideas stemming from control-flow integrity (CFI) to prevent fault injection attacks and to recover the system to a healthy state before the fault.[268]
Internet of things devices also have access to new areas of data, and can often control physical devices,[269]so that even by 2014 it was possible to say that many Internet-connected appliances could already "spy on people in their own homes" including televisions, kitchen appliances,[270]cameras, and thermostats.[271]Computer-controlled devices in automobiles such as brakes, engine, locks, hood and trunk releases, horn, heat, and dashboard have been shown to be vulnerable to attackers who have access to the on-board network. In some cases, vehicle computer systems are Internet-connected, allowing them to be exploited remotely.[272]By 2008 security researchers had shown the ability to remotely control pacemakers without authority. Later hackers demonstrated remote control of insulin pumps[273]and implantable cardioverter defibrillators.[274]
Poorly secured Internet-accessible IoT devices can also be subverted to attack others. In 2016, a distributed denial of service attack powered by Internet of things devices running the Mirai malware took down a DNS provider and major web sites.[275]The Mirai botnet had infected roughly 65,000 IoT devices within the first 20 hours.[276]Eventually the infections increased to around 200,000 to 300,000.[276]Brazil, Colombia and Vietnam made up 41.5% of the infections.[276]The Mirai botnet had singled out specific IoT devices that consisted of DVRs, IP cameras, routers and printers.[276]Top vendors that contained the most infected devices were identified as Dahua, Huawei, ZTE, Cisco, ZyXEL and MikroTik.[276]In May 2017, Junade Ali, a computer scientist at Cloudflare, noted that native DDoS vulnerabilities exist in IoT devices due to a poor implementation of the publish–subscribe pattern.[277][278]These sorts of attacks have caused security experts to view IoT as a real threat to Internet services.[279]
The U.S. National Intelligence Council in an unclassified report maintains that it would be hard to deny "access to networks of sensors and remotely-controlled objects by enemies of the United States, criminals, and mischief makers... An open market for aggregated sensor data could serve the interests of commerce and security no less than it helps criminals and spies identify vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it proves to be fundamentally incompatible with Fourth-Amendment guarantees against unreasonable search."[280] In general, the intelligence community views the Internet of things as a rich source of data.[281]
On 31 January 2019, The Washington Post wrote an article regarding the security and ethical challenges that can occur with IoT doorbells and cameras: "Last month, Ring got caught allowing its team in Ukraine to view and annotate certain user videos; the company says it only looks at publicly shared videos and those from Ring owners who provide consent. Just last week, a California family's Nest camera let a hacker take over and broadcast fake audio warnings about a missile attack, not to mention peer in on them, when they used a weak password."[282]
There have been a range of responses to concerns over security. The Internet of Things Security Foundation (IoTSF) was launched on 23 September 2015 with a mission to secure the Internet of things by promoting knowledge and best practice. Its founding board is made up of technology providers and telecommunications companies. In addition, large IT companies continue to develop solutions to secure IoT devices. In 2017, Mozilla launched Project Things, which allows IoT devices to be routed through a secure Web of Things gateway.[283] As per estimates from KBV Research,[284] the overall IoT security market[285] would grow at a 27.9% rate during 2016–2022 as a result of growing infrastructural concerns and diversified usage of the Internet of things.[286][287]
Some argue that governmental regulation is necessary to secure IoT devices and the wider Internet, as market incentives to secure IoT devices are insufficient.[288][260][261] It has been found that, owing to the nature of most IoT development boards, they generate predictable and weak keys, which makes them easy targets for man-in-the-middle attacks. However, various hardening approaches have been proposed by researchers to resolve the problems of weak SSH implementations and weak keys.[289]
IoT security within the field of manufacturing presents different challenges and varying perspectives. Within the EU and Germany, data protection is constantly referenced throughout manufacturing and digital policy, particularly that of Industry 4.0. However, the attitude towards data security differs from the enterprise perspective: there is less emphasis on data protection in the sense of the GDPR, since the data collected from IoT devices in the manufacturing sector does not usually reveal personal details.[257] Yet research has indicated that manufacturing experts are concerned about "data security for protecting machine technology from international competitors with the ever-greater push for interconnectivity".[257]
IoT systems are typically controlled by event-driven smart apps that take as input either sensed data, user inputs, or other external triggers (from the Internet) and command one or more actuators towards providing different forms of automation.[290] Examples of sensors include smoke detectors, motion sensors, and contact sensors. Examples of actuators include smart locks, smart power outlets, and door controls. Popular control platforms on which third-party developers can build smart apps that interact wirelessly with these sensors and actuators include Samsung's SmartThings,[291] Apple's HomeKit,[292] and Amazon's Alexa,[293] among others.
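The minimal sketch below illustrates this event-driven structure: sensor events arrive as inputs and handler rules command actuators. It is not tied to any particular platform; the device names, the event format, and the on_event dispatcher are hypothetical illustrations only.

```python
# Minimal sketch of an event-driven smart-home rule engine.
# Device names and the event format are hypothetical, not a real platform API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    device: str      # e.g. "front_door_contact", "hallway_motion"
    attribute: str   # e.g. "contact", "motion"
    value: str       # e.g. "open", "closed", "active"

# Registry of rules: each rule inspects an event and may command actuators.
rules: list[Callable[[Event], None]] = []

def rule(fn: Callable[[Event], None]) -> Callable[[Event], None]:
    rules.append(fn)
    return fn

def command(actuator: str, action: str) -> None:
    # Stand-in for a platform API call (e.g. a SmartThings or HomeKit command).
    print(f"-> {actuator}: {action}")

@rule
def lock_when_door_closes(evt: Event) -> None:
    if evt.device == "front_door_contact" and evt.value == "closed":
        command("front_door_lock", "lock")

@rule
def light_on_motion(evt: Event) -> None:
    if evt.device == "hallway_motion" and evt.value == "active":
        command("hallway_light", "on")

def on_event(evt: Event) -> None:
    # Dispatch every incoming sensor event to all installed rules.
    for r in rules:
        r(evt)

# Example: a contact sensor reports that the front door just closed.
on_event(Event("front_door_contact", "contact", "closed"))
```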
A problem specific to IoT systems is that buggy apps, unforeseen bad app interactions, or device/communication failures can cause unsafe and dangerous physical states, e.g., "unlock the entrance door when no one is at home" or "turn off the heater when the temperature is below 0 degrees Celsius and people are sleeping at night".[290] Detecting flaws that lead to such states requires a holistic view of installed apps, component devices, their configurations, and, more importantly, how they interact. Researchers from the University of California, Riverside have proposed IotSan, a practical system that uses model checking as a building block to reveal "interaction-level" flaws by identifying events that can lead the system to unsafe states.[290] They evaluated IotSan on the Samsung SmartThings platform; from 76 manually configured systems, IotSan detected 147 vulnerabilities (i.e., violations of safe physical states/properties).
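As a much simpler illustration of the underlying idea, the toy sketch below enumerates device states reachable by applying a set of hypothetical automation rules and flags any state that violates a safety property such as "the entrance door must not be unlocked while nobody is home". This is not IotSan's algorithm, only a minimal reachability check under invented rules.

```python
# Toy safety check over hypothetical automation rules: enumerate states
# reachable by applying rules and flag violations of a safety property.
# Illustrates the idea of "interaction-level" checking, not IotSan itself.
from itertools import product

# A state is (door_locked, someone_home). Each rule maps a state to a new state.
def rule_unlock_on_arrival(state):
    locked, home = state
    return (False, home) if home else state

def rule_auto_unlock_for_delivery(state):   # intentionally buggy rule
    locked, home = state
    return (False, home)                    # unlocks regardless of presence

RULES = [rule_unlock_on_arrival, rule_auto_unlock_for_delivery]

def safe(state):
    locked, home = state
    return locked or home   # property: never unlocked while nobody is home

violations = set()
for initial in product([True, False], repeat=2):
    frontier, seen = {initial}, set()
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if not safe(s):
            violations.add(s)
        frontier.update(r(s) for r in RULES)

print("unsafe reachable states:", violations)
```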
Given widespread recognition of the evolving nature of the design and management of the Internet of things, sustainable and secure deployment of IoT solutions must be designed for "anarchic scalability".[294] Application of the concept of anarchic scalability can be extended to physical systems (i.e. controlled real-world objects), by virtue of those systems being designed to account for uncertain management futures. This hard anarchic scalability thus provides a pathway forward to fully realize the potential of Internet-of-things solutions by selectively constraining physical systems to allow for all management regimes without risking physical failure.[294]
Brown University computer scientist Michael Littman has argued that successful execution of the Internet of things requires consideration of the interface's usability as well as the technology itself. These interfaces need to be not only more user-friendly but also better integrated: "If users need to learn different interfaces for their vacuums, their locks, their sprinklers, their lights, and their coffeemakers, it's tough to say that their lives have been made any easier."[295]
A concern regarding Internet-of-things technologies pertains to the environmental impacts of the manufacture, use, and eventual disposal of all these semiconductor-rich devices.[296] Modern electronics are replete with a wide variety of heavy metals and rare-earth metals, as well as highly toxic synthetic chemicals. This makes them extremely difficult to properly recycle. Electronic components are often incinerated or placed in regular landfills. Furthermore, the human and environmental cost of mining the rare-earth metals that are integral to modern electronic components continues to grow. This leads to societal questions concerning the environmental impacts of IoT devices over their lifetime.[297]
The Electronic Frontier Foundation has raised concerns that companies can use the technologies necessary to support connected devices to intentionally disable or "brick" their customers' devices via a remote software update or by disabling a service necessary to the operation of the device. In one example, home automation devices sold with the promise of a "Lifetime Subscription" were rendered useless after Nest Labs acquired Revolv and made the decision to shut down the central servers the Revolv devices had used to operate.[298] As Nest is a company owned by Alphabet (Google's parent company), the EFF argues this sets a "terrible precedent for a company with ambitions to sell self-driving cars, medical devices, and other high-end gadgets that may be essential to a person's livelihood or physical safety."[299]
Owners should be free to point their devices to a different server or collaborate on improved software. But such action violates the United States DMCA section 1201, which only has an exemption for "local use". This forces tinkerers who want to keep using their own equipment into a legal grey area. The EFF argues that buyers should refuse electronics and software that prioritize the manufacturer's wishes above their own.[299]
Examples of post-sale manipulations include Google Nest Revolv, disabled privacy settings on Android, Sony disabling Linux on PlayStation 3, and an enforced EULA on the Wii U.[299]
Kevin Lonergan at Information Age, a business technology magazine, has referred to the terms surrounding the IoT as a "terminology zoo".[300] The lack of clear terminology is not "useful from a practical point of view" and a "source of confusion for the end user".[300] A company operating in the IoT space could be working in anything related to sensor technology, networking, embedded systems, or analytics.[300] According to Lonergan, the term IoT was coined before smart phones, tablets, and devices as we know them today existed, and there is a long list of terms with varying degrees of overlap and technological convergence: Internet of things, Internet of everything (IoE), Internet of goods (supply chain), industrial Internet, pervasive computing, pervasive sensing, ubiquitous computing, cyber-physical systems (CPS), wireless sensor networks (WSN), smart objects, digital twin, cyberobjects or avatars,[155] cooperating objects, machine to machine (M2M), ambient intelligence (AmI), operational technology (OT), and information technology (IT).[300] Regarding the IIoT, an industrial sub-field of IoT, the Industrial Internet Consortium's Vocabulary Task Group has created a "common and reusable vocabulary of terms"[301] to ensure "consistent terminology"[301][302] across publications issued by the Industrial Internet Consortium. IoT One has created an IoT Terms Database including a New Term Alert[303] to notify users when a new term is published. As of March 2020, this database aggregates 807 IoT-related terms, while keeping the material "transparent and comprehensive".[304][305]
Despite a shared belief in the potential of the IoT, industry leaders and consumers are facing barriers to adopting IoT technology more widely. Mike Farley argued in Forbes that while IoT solutions appeal to early adopters, they either lack interoperability or a clear use case for end-users.[306] A study by Ericsson regarding the adoption of IoT among Danish companies suggests that many struggle "to pinpoint exactly where the value of IoT lies for them".[307]
In consumer IoT in particular, information about a user's daily routine is collected so that the "things" around the user can cooperate to provide better services that fulfil personal preferences.[308] When the collected information describing a user in detail travels through multiple hops in a network, the diverse integration of services, devices and networks means that the information stored on a device is vulnerable to privacy violation by compromised nodes existing in the IoT network.[309]
For example, on 21 October 2016 multiple distributed denial of service (DDoS) attacks targeted systems operated by the domain name system provider Dyn, which caused the inaccessibility of several websites, such as GitHub, Twitter, and others. The attacks were executed through a botnet consisting of a large number of IoT devices including IP cameras, gateways, and even baby monitors.[310]
Fundamentally there are four security objectives that an IoT system requires: (1) data confidentiality: unauthorised parties cannot have access to the transmitted and stored data; (2) data integrity: intentional and unintentional corruption of transmitted and stored data must be detected; (3) non-repudiation: the sender cannot deny having sent a given message; (4) data availability: the transmitted and stored data should be available to authorised parties even in the presence of denial-of-service (DoS) attacks.[311]
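As an illustration of the first two objectives, the sketch below uses authenticated encryption (AES-GCM from the third-party Python cryptography package) so that a sensor payload is both confidential and integrity-protected: tampering with the ciphertext makes decryption fail. The payload and key-distribution assumptions are illustrative only, and non-repudiation and availability require additional mechanisms (e.g. digital signatures and redundancy) not shown here.

```python
# Confidentiality + integrity for an IoT payload via authenticated encryption.
# Requires the "cryptography" package; payload and device ID are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)   # assumed pre-shared with the hub
aesgcm = AESGCM(key)

payload = b'{"sensor":"thermostat-12","temp_c":21.5}'
nonce = os.urandom(12)                      # must be unique per message
aad = b"device-id:thermostat-12"            # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, aad)

# Receiver side: decryption verifies integrity; tampering raises InvalidTag.
try:
    print(aesgcm.decrypt(nonce, ciphertext, aad))
    tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])
    aesgcm.decrypt(nonce, tampered, aad)
except InvalidTag:
    print("integrity check failed: message rejected")
```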
Information privacy regulations also require organisations to practice "reasonable security". California's SB-327 Information privacy: connected devices "would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorised access, destruction, use, modification, or disclosure, as specified".[312] As each organisation's environment is unique, it can prove challenging to demonstrate what "reasonable security" is and what potential risks could be involved for the business. Oregon's HB 2395 also "requires [a person that manufactures, sells or offers to sell connected device] manufacturer to equip connected device with reasonable security features that protect connected device and information that connected device [collects, contains, stores or transmits] stores from access, destruction, modification, use or disclosure that consumer does not authorise."[313]
According to antivirus provider Kaspersky, there were 639 million data breaches of IoT devices in 2020 and 1.5 billion breaches in the first six months of 2021.[227]
One method of overcoming the barrier of safety issues is the introduction of standards and certification of devices. In 2024, two voluntary and non-competing programs were proposed and launched in the United States: the US Cyber Trust Mark from the Federal Communications Commission and the IoT Device Security Specification from the Connectivity Standards Alliance (CSA). The programs incorporate international expertise, with the CSA mark recognized by the Cyber Security Agency of Singapore. Compliance means that IoT devices can resist hacking, control hijacking and theft of confidential data.
A study issued by Ericsson regarding the adoption of the Internet of things among Danish companies identified a "clash between IoT and companies' traditional governance structures, as IoT still presents both uncertainties and a lack of historical precedence."[307] Among the respondents interviewed, 60 percent stated that they "do not believe they have the organizational capabilities, and three of four do not believe they have the processes needed, to capture the IoT opportunity."[307] This has led to a need to understand organizational culture in order to facilitate organizational design processes and to test new innovation management practices. A lack of digital leadership in the age of digital transformation has also stifled innovation and IoT adoption to a degree that many companies, in the face of uncertainty, "were waiting for the market dynamics to play out",[307] or further action in regards to IoT "was pending competitor moves, customer pull, or regulatory requirements".[307] Some of these companies risk being "kodaked" – "Kodak was a market leader until digital disruption eclipsed film photography with digital photos" – failing to "see the disruptive forces affecting their industry"[314] and "to truly embrace the new business models the disruptive change opens up".[314] Scott Anthony has written in Harvard Business Review that Kodak "created a digital camera, invested in the technology, and even understood that photos would be shared online"[314] but ultimately failed to realize that "online photo sharing was the new business, not just a way to expand the printing business."[314]
According to a 2018 study, 70–75% of IoT deployments were stuck in the pilot or prototype stage, unable to reach scale due in part to a lack of business planning.[315][316]
Even though scientists, engineers, and managers across the world are continuously working to create and exploit the benefits of IoT products, there are flaws in the governance, management and implementation of such projects. Despite tremendous forward momentum in the field of information and other underlying technologies, IoT remains a complex area, and the problem of how IoT projects are managed still needs to be addressed. IoT projects must be run differently from simple, traditional IT, manufacturing or construction projects. Because IoT projects have longer timelines, a lack of skilled resources and several security and legal issues, there is a need for new, specifically designed project processes. The following management techniques should improve the success rate of IoT projects:[317]
|
https://en.wikipedia.org/wiki/Internet_of_Things
|
Digital physics is a speculative idea suggesting that the universe can be conceived of as a vast, digital computation device, or as the output of a deterministic or probabilistic computer program.[1] The hypothesis that the universe is a digital computer was proposed by Konrad Zuse in his 1969 book Rechnender Raum[2] (Calculating Space).[3] The term "digital physics" was coined in 1978 by Edward Fredkin,[4] who later came to prefer the term "digital philosophy".[5] Fredkin taught a graduate course called "digital physics" at MIT in 1978, and collaborated with Tommaso Toffoli on "conservative logic" while Norman Margolus served as a graduate student in his research group.[6]
Digital physics posits that there exists, at least in principle, a program for a universal computer that computes the evolution of the universe. The computer could be, for example, a huge cellular automaton.[1][7] The idea is deeply connected to information theory, particularly the notion that the universe's fundamental building blocks might be bits of information rather than traditional particles or fields.
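As a toy illustration of the kind of computation involved, the sketch below runs the one-dimensional Rule 110 cellular automaton, in which a simple local update rule generates complex global behaviour (Rule 110 is known to be Turing-complete). The grid size, boundary condition, and initial state are arbitrary choices made only for the example.

```python
# One-dimensional cellular automaton (Rule 110): a toy example of the kind of
# simple universal computation that digital physics appeals to.
RULE = 110
WIDTH, STEPS = 64, 32

def step(cells, rule=RULE):
    # Apply the local rule to every cell, using periodic boundary conditions.
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood as 3 bits
        out.append((rule >> pattern) & 1)               # look up the rule table
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                                   # single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```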
However, extant models of digital physics face challenges, particularly in reconciling with several continuous symmetries[8] in physical laws, e.g., rotational symmetry, translational symmetry, Lorentz symmetry, and the Lie group gauge invariance of Yang–Mills theories, all of which are central to current physical theories. Moreover, existing models of digital physics violate various well-established features of quantum physics, as they belong to a class of theories involving local hidden variables. These models have so far been disqualified experimentally by physicists using Bell's theorem.[9][10]
|
https://en.wikipedia.org/wiki/Digital_physics
|
The Sortino ratio measures the risk-adjusted return of an investment asset, portfolio, or strategy.[1] It is a modification of the Sharpe ratio but penalizes only those returns falling below a user-specified target or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally. Though both ratios measure an investment's risk-adjusted return, they do so in significantly different ways that will frequently lead to differing conclusions as to the true nature of the investment's return-generating efficiency.
The Sortino ratio is used as a way to compare the risk-adjusted performance of programs with differing risk and return profiles. In general, risk-adjusted returns seek to normalize the risk across programs and then see which has the higher return per unit of risk.[2]
The ratio S is calculated as

$S = \frac{R - T}{DR},$

where R is the asset or portfolio average realized return, T is the target or required rate of return for the investment strategy under consideration (originally called the minimum acceptable return, MAR), and DR is the target semi-deviation (the square root of target semi-variance), termed downside deviation. DR is expressed in percentages and therefore allows for rankings in the same way as standard deviation.
An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures at a quadratic rate. This is consistent with observations made on the behavior of individual decision making under uncertainty.
In continuous form, the downside deviation is

$DR = \sqrt{\int_{-\infty}^{T} (T - r)^{2}\, f(r)\, dr}.$

Here
DR = downside deviation or (as commonly known in the financial community) "downside risk" (by extension, DR² = downside variance),
T = the annual target return, originally termed the minimum acceptable return MAR,
r = the random variable representing the return for the distribution of annual returns f(r), and
f(r) = the distribution for the annual returns, e.g., the log-normal distribution.
For the reasons provided below, this continuous formula is preferred over a simpler discrete version that determines the standard deviation of below-target periodic returns taken from the return series.
"Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been. To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story."
Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed) or the range within which two-thirds of all returns lie (even though the specific returns identifying this range have not necessarily occurred). Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties.
In post-modern portfolio theory an analogous process is followed.
As a caveat, some practitioners have fallen into the habit of using discrete periodic returns to compute downside risk. This method is conceptually and operationally incorrect and negates the foundational statistic of post-modern portfolio theory as developed by Brian M. Rom and Frank A. Sortino.
The Sortino ratio is used to score a portfolio's risk-adjusted returns relative to an investment target using downside risk. This is analogous to the Sharpe ratio, which scores risk-adjusted returns relative to the risk-free rate using standard deviation. When return distributions are near symmetrical and the target return is close to the distribution median, these two measures will produce similar results. As skewness increases and targets vary from the median, results can be expected to show dramatic differences.
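A minimal sketch of the continuous calculation follows, fitting a normal distribution to a small set of hypothetical annual returns (the choice of distribution, the returns, the target, and the risk-free rate are all assumptions made purely for illustration). It computes the downside deviation by numerically integrating the target semi-variance and contrasts the resulting Sortino ratio with the Sharpe ratio.

```python
# Sortino vs. Sharpe on hypothetical annual returns, using the continuous
# downside-deviation formula (numerical integration of the semi-variance).
import numpy as np
from scipy import stats, integrate

returns = np.array([0.12, 0.07, -0.03, 0.15, 0.02, -0.08, 0.10, 0.05])  # illustrative
target = 0.00       # minimum acceptable return T
risk_free = 0.01    # used only for the Sharpe comparison

# Fit a continuous distribution to the observed returns (a normal here,
# purely as an assumption; the article mentions e.g. the log-normal).
mu, sigma = returns.mean(), returns.std(ddof=1)
dist = stats.norm(mu, sigma)

# Downside (target) semi-variance: integral of (T - r)^2 f(r) dr over r < T.
downside_var, _ = integrate.quad(lambda r: (target - r) ** 2 * dist.pdf(r),
                                 -np.inf, target)
downside_dev = np.sqrt(downside_var)

sortino = (mu - target) / downside_dev
sharpe = (mu - risk_free) / sigma
print(f"Sortino: {sortino:.2f}  Sharpe: {sharpe:.2f}")
```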
The Sortino ratio can also be used in trading. For example, to obtain a performance metric for a trading strategy on an asset, one can compute the Sortino ratio of the strategy's returns and compare it with that of other strategies.[3]
Practitioners who use a lower partial standard deviation (LPSD) instead of a standard deviation also tend to use the Sortino ratio instead of the Sharpe ratio.[4]
|
https://en.wikipedia.org/wiki/Sortino_ratio
|
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction,[1][2][3] also called the duality principle.[4][5][6] It is the most widely known example of duality in logic.[1] The duality consists in these metalogical theorems:
The connectives may be defined in terms of each other as follows:
φ ∧ ψ := ¬(¬φ √ ¬ψ)
φ √ ψ := ¬(¬φ ∧ ¬ψ)
Since the Disjunctive Normal Form Theorem shows that the set of connectives {∧, √, ¬} is functionally complete, these results show that the sets of connectives {∧, ¬} and {√, ¬} are themselves functionally complete as well.
De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it.[1]
The dual of a sentence is what you get by swapping all occurrences of √ and ∧, while also negating all propositional constants. For example, the dual of (A ∧ B √ C) would be (¬A √ ¬B ∧ ¬C). The dual of a formula φ is notated as φ*. The Duality Principle states that in classical propositional logic, any sentence is equivalent to the negation of its dual.[4][7]
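A small brute-force check of this principle is sketched below, with formulas represented as nested tuples and the helper names being illustrative inventions: it computes the dual of a formula and verifies, over all valuations, that the negation of the formula is equivalent to its dual.

```python
# Brute-force check that a formula's negation is equivalent to its dual.
# Formulas are nested tuples: ("and", a, b), ("or", a, b), ("not", a),
# or a variable name such as "A". Helper names here are illustrative only.
from itertools import product

def evaluate(f, valuation):
    if isinstance(f, str):
        return valuation[f]
    op = f[0]
    if op == "not":
        return not evaluate(f[1], valuation)
    if op == "and":
        return evaluate(f[1], valuation) and evaluate(f[2], valuation)
    return evaluate(f[1], valuation) or evaluate(f[2], valuation)

def dual(f):
    # Swap "and"/"or" throughout and negate every propositional variable.
    if isinstance(f, str):
        return ("not", f)
    op = f[0]
    if op == "not":
        return ("not", dual(f[1]))
    swapped = "or" if op == "and" else "and"
    return (swapped, dual(f[1]), dual(f[2]))

def equivalent(f, g, variables):
    return all(evaluate(f, dict(zip(variables, vals))) ==
               evaluate(g, dict(zip(variables, vals)))
               for vals in product([True, False], repeat=len(variables)))

phi = ("or", ("and", "A", "B"), "C")                         # A ∧ B √ C
print(equivalent(("not", phi), dual(phi), ["A", "B", "C"]))  # True
```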
Assume φ ⊨ ψ. Then φ̄ ⊨ ψ̄ by uniform substitution of ¬Pᵢ for each Pᵢ. Hence ¬ψ̄ ⊨ ¬φ̄, by contraposition; so finally ψ* ⊨ φ*, by the property that φ* ⟚ ¬φ̄, which was just proved above.[7] And since φ** = φ, it is also true that φ ⊨ ψ if, and only if, ψ* ⊨ φ*.[7] It follows, as a corollary, that if φ ⊨ ¬ψ, then φ* ⊨ ¬ψ*.[7]
For a formula φ in disjunctive normal form, the formula φ̄* will be in conjunctive normal form, and given the result that the negation of a formula is semantically equivalent to its dual, it will be semantically equivalent to ¬φ.[8][9] This provides a procedure for converting between conjunctive normal form and disjunctive normal form.[10] Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual.[9]
|
https://en.wikipedia.org/wiki/Conjunction/disjunction_duality
|