Nanostrain was an EU-funded project (EMRP IND54) to characterise piezoelectric materials for future fast digital switch designs.[1][2][3] Such switches may need a much lower voltage and be faster, with lower power consumption than CMOS.[4] Calculations suggest that small PiezoElectronic Transistors (combining piezoelectric and piezoresistive materials) could need much less energy to switch and allow clock speeds of 30 GHz (ten times current CMOS) at a hundredth of the power of today's devices.[5] The consortium included many European national institutes and industrial partners, including IBM.[6][7] Nanostrain was initially funded for three years and comprised six work packages. Some results were reported in 2014.[6] A final report was published in July 2017; work continues in the EMPIR ADVENT project.[8]
https://en.wikipedia.org/wiki/Nanostrain
S-PULSE is the acronym of Shrink-Path of Ultra-Low Power Superconducting Electronics. S-PULSE is a support action of the European Seventh Framework Programme (FP7) that stimulates joint efforts of European academic and industrial groups in the field of superconducting technologies. The general goal is to prepare superconductor electronics (SE) technologies for the technology generation beyond the CMOS scaling limits (often called "beyond CMOS"). S-PULSE supports the superconducting electronics community in strengthening the vital link between research and development and industry; it also strengthens the exchange of knowledge and ideas and takes charge of education. The challenge in SE is to achieve superconducting electronic circuit performance beyond the possibilities of semiconductor circuit technologies, and to make SE technologies ready to benefit other technologies in world markets. This support action, carried out in the 2008-2010 period, focused on preparing a Technology Roadmap and a Strategic Research Agenda (SRA) to enable the transition from the present science-oriented network for SE towards an industrially guided European Technology Platform (ETP).
https://en.wikipedia.org/wiki/S-PULSE
Probabilistic complementary metal-oxide semiconductor (PCMOS) is a semiconductor manufacturing technology invented by Prof. Krishna Palem of Rice University, Director of NTU's Institute for Sustainable Nanoelectronics (ISNE). The technology is intended to compete with current CMOS technology. Proponents claim it uses one thirtieth as much electricity while running seven times faster than the current fastest technology.[1][2][3] PCMOS-based system-on-a-chip architectures were shown to achieve gains as high as a multiplicative factor of 560 compared to a competing energy-efficient CMOS-based realization, on applications based on probabilistic algorithms such as hyper-encryption, Bayesian networks, random neural networks and probabilistic cellular automata.[4]
https://en.wikipedia.org/wiki/PCMOS
In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes the Jevons effect) occurs when technological advancements make a resource more efficient to use (thereby reducing the amount needed for a single application); however, as the cost of using the resource drops, if the price is highly elastic, this results in overall demand increasing, causing total resource consumption to rise.[1][2][3][4] Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox.[5]

In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.[6][7]

The issue has been re-examined by modern economists studying consumption rebound effects from improved energy efficiency. In addition to reducing the amount needed for a given use, improved efficiency also lowers the relative cost of using a resource, which increases the quantity demanded. This may counteract (to some extent) the reduction in use from improved efficiency. Additionally, improved efficiency increases real incomes and accelerates economic growth, further increasing the demand for resources. The Jevons paradox occurs when the effect from increased demand predominates, and the improved efficiency results in a faster rate of resource utilization.[7]

Considerable debate exists about the size of the rebound in energy efficiency and the relevance of the Jevons paradox to energy conservation. Some dismiss the effect, while others worry that it may be self-defeating to pursue sustainability by increasing energy efficiency.[5] Some environmental economists have proposed that efficiency gains be coupled with conservation policies that keep the cost of use the same (or higher) to avoid the Jevons paradox.[8] Conservation policies that increase the cost of use (such as cap and trade or green taxes) can be used to control the rebound effect.[9]

The Jevons paradox was first described by the English economist William Stanley Jevons in his 1865 book The Coal Question. Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine compared with Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use, writing: "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth."[6]

At that time, many in Britain worried that coal reserves were rapidly dwindling, but some experts opined that improving technology would reduce coal consumption. Jevons argued that this view was incorrect, as further increases in efficiency would tend to increase the use of coal.
Hence, improving technology would tend to increase the rate at which England's coal deposits were being depleted, and could not be relied upon to solve the problem.[6][7]

Although Jevons originally focused on coal, the concept has since been extended to other resources, e.g. water usage.[10] The Jevons paradox is also found in socio-hydrology, in the safe development paradox called the reservoir effect, where construction of a reservoir to reduce the risk of water shortage can instead exacerbate that risk, as increased water availability leads to more development and hence more water consumption.[11]

Economists have observed that consumers tend to travel more when their cars are more fuel efficient, causing a 'rebound' in the demand for fuel.[12] An increase in the efficiency with which a resource (e.g. fuel) is used causes a decrease in the cost of using that resource when measured in terms of what it can achieve (e.g. travel). Generally speaking, a decrease in the cost (or price) of a good or service will increase the quantity demanded (the law of demand). With a lower cost for travel, consumers will travel more, increasing the demand for fuel. This increase in demand is known as the rebound effect, and it may or may not be large enough to offset the original drop in fuel use from the increased efficiency. The Jevons paradox occurs when the rebound effect is greater than 100%, exceeding the original efficiency gains.[7]

The size of the direct rebound effect depends on the price elasticity of demand for the good.[13] In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If, in response, the amount of travel purchased more than doubles (i.e., demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur. If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.[7]

As an example of where the paradox did not occur, large improvements in farming productivity (including the Third Agricultural Revolution) led to lower food prices but did not result in increased demand for food, since demand for food is inelastic. This instead led to lower employment in the farming sector, which declined from 40% of Americans in 1900 to less than 2% in 2024.[14] Certain conditions are necessary for a Jevons paradox to occur.[14]

In the 1980s, economists Daniel Khazzoom and Leonard Brookes revisited the Jevons paradox for the case of society's energy use. Brookes, then chief economist at the UK Atomic Energy Authority, argued that attempts to reduce energy consumption by increasing energy efficiency would simply raise demand for energy in the economy as a whole.
Khazzoom focused on the narrower point that the potential for rebound was ignored in mandatory performance standards for domestic appliances being set by the California Energy Commission.[15][16]

In 1992, the economist Harry Saunders dubbed the hypothesis that improvements in energy efficiency work to increase (rather than decrease) energy consumption the Khazzoom–Brookes postulate, and argued that the hypothesis is broadly supported by neoclassical growth theory (the mainstream economic theory of capital accumulation, technological progress and long-run economic growth). Saunders showed that the Khazzoom–Brookes postulate occurs in the neoclassical growth model under a wide range of assumptions.[15][17]

According to Saunders, increased energy efficiency tends to increase energy consumption by two means. First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption.[18] That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that, taking into account both microeconomic and macroeconomic effects, technological progress that improves energy efficiency will tend to increase overall energy use.[15]

Jevons warned that fuel efficiency gains tend to increase fuel use. However, this does not imply that improved fuel efficiency is worthless if the Jevons paradox occurs; higher fuel efficiency enables greater production and a higher material quality of life.[19] For example, a more efficient steam engine allowed the cheaper transport of goods and people that contributed to the Industrial Revolution. Nonetheless, if the Khazzoom–Brookes postulate is correct, increased fuel efficiency, by itself, will not reduce the rate of depletion of fossil fuels.[15]

There is considerable debate about whether the Khazzoom–Brookes postulate is correct, and about the relevance of the Jevons paradox to energy conservation policy. Most governments, environmentalists and NGOs pursue policies that improve efficiency, holding that these policies will lower resource consumption and reduce environmental problems. Others, including many environmental economists, doubt this 'efficiency strategy' towards sustainability, and worry that efficiency gains may in fact lead to higher production and consumption. They hold that for resource use to fall, efficiency gains should be coupled with other policies that limit resource use.[5][17][20] However, other environmental economists argue that, while the Jevons paradox may occur in some situations, the empirical evidence for its widespread applicability is limited.[21]

The Jevons paradox is sometimes used to argue that energy conservation efforts are futile, for example, that more efficient use of oil will lead to increased demand and will not slow the arrival or the effects of peak oil. This argument is usually presented as a reason not to enact environmental policies or pursue fuel efficiency (e.g., if cars are more efficient, it will simply lead to more driving).[22][23] Several points have been raised against this argument.
First, in the context of a mature market such as that for oil in developed countries, the direct rebound effect is usually small, and so increased fuel efficiency usually reduces resource use, other conditions remaining constant.[12][18][24] Second, even if increased efficiency does not reduce the total amount of fuel used, other benefits remain associated with improved efficiency. For example, increased fuel efficiency may mitigate the price increases, shortages and disruptions in the global economy associated with crude oil depletion.[25] Third, environmental economists have pointed out that fuel use will unambiguously decrease if increased efficiency is coupled with an intervention (e.g. a fuel tax) that keeps the cost of fuel use the same or higher.[8]

The Jevons paradox indicates that increased efficiency by itself may not reduce fuel use, and that sustainable energy policy must also rely on other types of government interventions.[9] As the imposition of conservation standards or other government interventions that increase the cost of use do not display the Jevons paradox, they can be used to control the rebound effect.[9] To ensure that efficiency-enhancing technological improvements reduce fuel use, efficiency gains can be paired with government interventions that reduce demand (e.g. green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation."[8] By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented.[26][27][28]

Increasing the yield of a crop, such as wheat, for a given area will reduce the area required to achieve the same total yield. However, increasing efficiency may make it more profitable to grow wheat and lead farmers to convert land to wheat production, thereby increasing land use instead.[29]

Microsoft CEO Satya Nadella has referenced the Jevons paradox when describing artificial intelligence.[30] Erik Brynjolfsson stated that he believes there will be some occupations for which the three conditions for the paradox will be met, thereby causing increased employment in those fields, such as radiologists, translators, and coders.[14]
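The elasticity condition described above lends itself to a simple numerical illustration. The sketch below is illustrative only; the fuel price, efficiency figures, travel demand, and elasticity values are assumptions chosen for the example, not figures from the article's sources. It computes fuel consumption before and after an efficiency improvement under a constant-elasticity demand curve, showing that the Jevons paradox requires the price elasticity of demand for travel to exceed 1.

```python
# Illustrative sketch: direct rebound under constant-elasticity demand.
# All numbers are assumptions chosen for the example, not empirical values.

def fuel_use(efficiency_km_per_l, fuel_price, base_travel_km, base_cost, elasticity):
    """Return litres of fuel consumed given constant-elasticity demand for travel."""
    cost_per_km = fuel_price / efficiency_km_per_l          # effective price of travel
    travel_km = base_travel_km * (cost_per_km / base_cost) ** (-elasticity)
    return travel_km / efficiency_km_per_l

fuel_price = 1.5        # currency units per litre (assumed)
base_eff = 10.0         # km per litre before the improvement (assumed)
base_cost = fuel_price / base_eff
base_travel = 10_000.0  # km demanded at the base cost (assumed)

for elasticity in (0.5, 1.0, 1.5):   # inelastic, unit-elastic, elastic demand
    before = fuel_use(base_eff, fuel_price, base_travel, base_cost, elasticity)
    after = fuel_use(2 * base_eff, fuel_price, base_travel, base_cost, elasticity)
    print(f"elasticity={elasticity}: fuel before={before:.0f} L, "
          f"after doubling efficiency={after:.0f} L")

# Only the elastic case (elasticity > 1) shows fuel use rising after the
# efficiency gain, i.e. a rebound effect greater than 100% (the Jevons paradox).
```

Under these assumed numbers, doubling efficiency cuts fuel use when demand is inelastic, leaves it unchanged at unit elasticity, and raises it when demand is elastic, matching the condition stated in the article.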
https://en.wikipedia.org/wiki/Jevons_paradox
"No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper onsoftware engineeringwritten byTuring AwardwinnerFred Brooksin 1986.[1]Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even oneorder of magnitude[tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there is in hardware development (Moore's law). Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. This is related toAristotle'sclassification. Accidental complexity relates to problems that engineers create and can fix. For example, modernprogramming languageshave abstracted away the details of writing and optimizingassembly languagesource codeand eliminated the delays caused bybatch processing, though other sources of accidental complexity remain. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things. Brooks claims that accidental complexity has decreased substantially, and today's programmers spend most of their time addressing essential complexity. Brooks argues that this means shrinking all the accidental activities to zero will not give the same order-of-magnitude improvement as attempting to decrease essential complexity. While Brooks insists that there is no onesilver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements. One technology that had made significant improvement in the area of accidental complexity was the invention ofhigh-level programming languages, such asAda.[1] Brooks advocates "growing" software organically through incremental development. He suggests devising and implementing the main and subprograms right at the beginning, filling in the working sub-sections later. He believes thatcomputer programmingthis way excites the engineers and provides a working system at every stage of development. Brooks goes on to argue that there is a difference between "good" designers and "great" designers. He postulates that as programming is a creative process, some designers are inherently better than others. He suggests that there is as much as a tenfold difference between an ordinary designer and a great one. He then advocates treating star designers equally well as star managers, providing them not just with equalremuneration, but also all the perks of higher status: large office, staff, travel funds, etc. The article, and Brooks's later reflections on it, "'No Silver Bullet' Refired", can be found in the anniversary edition ofThe Mythical Man-Month.[2] Brooks's paper has sometimes been cited in connection withWirth's law, to argue that "software systems grow faster in size and complexity than methods to handle complexity are invented."[3]
https://en.wikipedia.org/wiki/Accidental_complexity
The attention economy refers to the incentives of advertising-driven companies, in particular, to maximize the time and attention their users give to their product.[1][2]

Attention economics is an approach to the management of information that treats human attention as a scarce commodity and applies economic theory to solve various information management problems. According to Matthew Crawford, "Attention is a resource—a person has only so much of it."[3] Thomas H. Davenport and John C. Beck[4] add to that definition: attention is focused mental engagement on a particular item of information; items come into our awareness, we attend to a particular item, and then we decide whether to act.[5]

A strong driver of this effect is that human mental capability is limited, and receptiveness to information is likewise limited. Attention allows information to be filtered such that the most important information can be extracted from the environment while irrelevant details are left out.[6]

Software applications either explicitly or implicitly take the attention economy into consideration in their user interface design, based on the realization that if it takes the user too long to locate something, they will find it through another application. This is done, for instance, by creating filters to make sure viewers are presented with information that is most relevant, of interest, and personalized based on past web search history.[7]

The economic value of time can be quantified and compared to monetary expenditures. Erik Brynjolfsson, Seon Tae Kim and Joo Hee Oh show that this makes it possible to formally analyze the attention economy and put values on free goods.[8]

Research from a wide range of disciplines including psychology,[9] cognitive science,[10] neuroscience,[11] and economics[12] suggests that humans have limited cognitive resources that can be used at any given time; when resources are allocated to one task, the resources available for other tasks are limited. Given that attention is a cognitive process that involves the selective concentration of resources on a given item of information, to the exclusion of other perceivable information, attention can be considered in terms of limited processing resources.[13]

The concept of attention economics was first theorized by psychologist and economist Herbert A. Simon[14] when he wrote about the scarcity of attention in an information-rich world in 1971:

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.[15]

He noted that many designers of information systems incorrectly represented their design problem as information scarcity rather than attention scarcity, and as a result they built systems that excelled at providing more and more information to people, when what was really needed were systems that excelled at filtering out unimportant or irrelevant information.[16]

Simon's characterization of the problem of information overload as an economic one has become increasingly popular in analyzing information consumption since the mid-1990s,
when writers such as Thomas H. Davenport and Michael Goldhaber[17] adopted terms like "attention economy" and "economics of attention".[18]

Some writers have speculated that transactions based on attention will replace financial transactions as the focus of the economic system. For example, Goldhaber wrote in 1997: "...transactions in which money is involved may be growing in total number, but the total number of global attention transactions is growing even faster."[19] In a 1999 essay, Georg Franck argued that "income in attention ranks above financial success" for advertising-based media like magazines and television.[20] Information systems researchers have also adopted the idea, and are beginning to investigate mechanism designs which build on the idea of creating property rights in attention (see Applications).

In 2022, Rice University professor Adrian Lenardic and two co-authors wrote for Big Think that attention economics adversely affected scientific research: "The attention a scientist's work gains from the public now plays into its perceived value. Scientists list media exposure counts on résumés, and many PhD theses now include the number of times a candidate's work has appeared in the popular science press. Science has succumbed to the attention economy."[21] They add that study results are publicized without proper peer input or reproducibility.[21]

In economic theory, market exchanges may have unintended consequences, called externalities, that aren't reflected in the price consumers pay upfront. When these consequences have a negative effect on an uninvolved third party, they're called negative externalities, with pollution being a common example.[22] The attention economy generates negative externalities for society that impact both individuals and communities.[23]

One negative externality of the attention economy is social media addiction. Given the monetization of human attention, social media platforms are designed to maximize user engagement, namely by influencing the brain's reward system. When users receive positive feedback on social media or view novel content, their brain releases dopamine, leading them to stay on the platform for extended periods of time and come back to it repeatedly. Social media addiction has been linked to negative mental health outcomes such as depression, anxiety, and low self-esteem.[24]

The Netflix documentary The Social Dilemma illustrates how algorithms from search engines and social media platforms negatively affect users while maximizing online engagement.[25][26]

During the 2010s, social media in conjunction with online advertising technologies inspired significant growth in the business model of the attention economy.[27][28] A study conducted by researchers at Hanken School of Economics found that when the attention economy is paired with online advertising, the resulting financial arrangement can lead to the circulation of fake news and the amplification of disinformation for profit.[28]

Another negative externality of the attention economy is the rise of surveillance capitalism, which describes the practice of companies collecting personal data to buy and sell for profit. To capture user attention, companies collect data, such as demographics and behavioral patterns, and use it to create personalized user experiences that align with users' interests.
Companies also sell this data to third parties, often without the user's informed consent.[29] These practices raise ethical concerns about privacy, misuse of data, and misrepresentation of communities.[30]

Within the attention economy, engagement metrics influence the visibility of content and narratives. Algorithms in the attention economy are designed to maximize engagement, often prioritizing content that resonates with dominant cultural identities. As a result, marginalized groups may face challenges in having their perspectives and concerns represented. For example, Black creators on platforms such as TikTok have reported that their content saw significant reductions in engagement after they posted about the Black Lives Matter movement, suggesting that they were shadow banned.[31] Furthermore, limiting the visibility of marginalized creators reduces the amount of attention they receive. This, in turn, hinders their ability to engage in activism and spread awareness about issues affecting their communities to the broader public.[32][33]

According to digital culture expert Kevin Kelly, by 2008 the attention economy was increasingly one in which the consumer product costs virtually nothing to reproduce, and the problem facing the supplier of the product lies in adding valuable intangibles that cannot be reproduced at any cost. He identifies several such intangibles.[34]

Attention economics is also relevant to the social sphere. Specifically, long-term attention can be considered according to the attention that people dedicate to managing their interactions with others. Dedicating too much attention to these interactions can lead to "social interaction overload", i.e. when people are overwhelmed in managing their relationships with others, for instance in the context of social network services in which people are the subject of a high level of social solicitations.[35] Digital media and the internet facilitate participation in this economy by creating new channels for distributing attention. Ordinary people are now empowered to reach a wide audience by publishing their own content and commenting on the content of others.[36]

Social attention can also be associated with collective attention, i.e. how "attention to novel items propagates and eventually fades among large populations".[37]

"Attention economics" treats a potential consumer's attention as a resource.[38] Traditional media advertisers followed a model that suggested consumers went through a linear process called AIDA (attention, interest, desire and action).[39] Attention is therefore the first and a major stage in the process of converting non-consumers. Since the cost of transmitting advertising to consumers has become sufficiently low that more ads can be transmitted to a consumer (e.g. via online advertising) than the consumer can process, the consumer's attention becomes the scarce resource to be allocated. As such, a superabundance of information may hinder an individual's decision-making, as they keep searching and comparing products as long as doing so promises to provide more than it uses up.[40]

Advertisers that produce attention-grabbing content presented to unconsenting consumers without compensation have been criticized for perpetrating attention theft.[41][42]
One application treats various forms of information (e.g. spam, advertising) as a form of pollution or 'detrimental externality'.[43] In economics, an externality is a by-product of a production process that imposes burdens (or supplies benefits) on parties other than the intended consumer of a commodity.[44] For example, air and water pollution are 'negative' externalities that impose burdens on society and the environment.

A market-based approach to controlling externalities was outlined in Ronald Coase's The Problem of Social Cost (1960).[45] This evolved from an article on the Federal Communications Commission (1959),[46] in which Coase claimed that radio-frequency interference is a negative externality that could be controlled by the creation of property rights. Coase's approach to the management of externalities requires the careful specification of property rights and a set of rules for the initial allocation of the rights.[47] Once these rights are specified and allocated, a market mechanism can theoretically manage the externality problem.[48]

Sending huge numbers of e-mail messages costs spammers very little, since the costs of e-mail messages are spread out over the internet service providers that distribute them (and the recipients who must spend attention dealing with them).[49] Thus, sending out as much spam as possible is a rational strategy: even if only 0.001% of recipients (1 in 100,000) is converted into a sale, a spam campaign can be profitable. It is, of course, very difficult to trace where the revenue comes from, since these businesses are run through proxy servers; however, if they were not profitable, it is reasonable to conclude that they would not be sending spam.[50] Spammers are demanding valuable attention from potential customers, but avoid paying a fair price for this attention due to the current architecture of e-mail systems.[51]

One way this might be mitigated is through the implementation of a "sender bond", whereby senders are required to post a financial bond that is forfeited if enough recipients report an e-mail as spam.[52]

Closely related is the idea of selling "interrupt rights": small fees for the right to demand one's attention.[53] The cost of these rights could vary according to the person who is interrupted: interrupt rights for the CEO of a Fortune 500 company would presumably be extraordinarily expensive, while those of a high school student might be lower. Costs could also vary for an individual depending on context, perhaps rising during the busy holiday season and falling during the dog days of summer.
Those who are interrupted could decline to collect their fees from friends, family, and other welcome interrupters.[54] Another idea in this vein is the creation of "attention bonds": small warranties that some information will not be a waste of the recipient's time, placed into escrow at the time of sending.[55] Like the granters of interrupt rights, receivers could cash in their bonds to signal to the sender that a given communication was a waste of their time, or elect not to cash them in to signal that more communication would be welcome.[56]

As search engines have become a primary means of finding and accessing information on the web, high rankings in the results for certain queries have become valuable commodities, owing to the ability of search engines to focus searchers' attention.[57] Like other information systems, web search is vulnerable to pollution: "Because the Web environment contains profit seeking ventures, attention getting strategies evolve in response to search engine algorithms".[58]

Since most major search engines now rely on some form of PageRank (recursive counting of hyperlinks to a site) to determine search result rankings, a gray market in the creation and trading of hyperlinks has emerged.[59][60] Participants in this market engage in a variety of practices known as link spamming, link farming, and reciprocal linking.[61]

Another issue, similar to the question discussed above of whether or not to consider political e-mail campaigns as spam, is what to do about politically motivated link campaigns or Google bombs.[62] Currently, the major search engines do not treat these as web spam, but this is a decision made unilaterally by private companies.

The paid inclusion model, as well as more pervasive advertising networks like Yahoo! Publisher Network and Google's AdSense, work by treating consumer attention as the property of the search engine (in the case of paid inclusion) or the publisher (in the case of advertising networks).[63][64] This is somewhat different from the anti-spam uses of property rights in attention, which treat an individual's attention as his or her own property.

These advertising models significantly influence consumer behavior, often leveraging personal data to target ads more effectively. While this can enhance user experience by aligning advertisements with user interests, it raises privacy concerns and can lead to consumer manipulation. The phenomenon of "ad fatigue", where excessive exposure to ads leads to reduced attention and engagement with advertisements, is also noteworthy.[65]

Advancements in artificial intelligence and machine learning have transformed paid inclusion and advertising networks. These technologies allow for more sophisticated targeting and personalization of ads, improving effectiveness but also increasing concerns about surveillance and data privacy.[66]

The regulation of paid inclusion and advertising networks is complex, involving multiple stakeholders with diverse interests. There is an ongoing debate about the balance between encouraging innovation and protecting consumer privacy. Ethical considerations also include the transparency of these models and their impact on the informational ecosystem, potentially leading to biased or manipulated content.[67]
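The PageRank-style ranking mentioned above, recursive counting of hyperlinks, is what makes links valuable enough to trade, and a short sketch may make the idea concrete. The power-iteration code below is a generic illustration of the technique, not the ranking actually used by any particular search engine; the tiny link graph, damping factor, and iteration count are assumptions chosen for the example.

```python
# Minimal PageRank-style sketch via power iteration on a toy link graph.
# The graph, damping factor, and iteration count are illustrative assumptions.

links = {                      # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):            # iterate until the ranks stabilise
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)     # each page splits its rank among its links
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))

# Pages that attract many inbound links from highly ranked pages end up with
# the highest scores, which is why a gray market in link creation can emerge.
```

The sketch shows why practices such as link farming target inbound links from already well-ranked pages: rank flows recursively along hyperlinks rather than from raw link counts alone.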
https://en.wikipedia.org/wiki/Attention_economy
The global brain is a neuroscience-inspired and futurological vision of the planetary information and communications technology network that interconnects all humans and their technological artifacts.[1] As this network stores ever more information, takes over ever more functions of coordination and communication from traditional organizations, and becomes increasingly intelligent, it increasingly plays the role of a brain for the planet Earth. In the philosophy of mind, the global brain finds an analog in Averroes's theory of the unity of the intellect.

Proponents of the global brain hypothesis claim that the Internet increasingly ties its users together into a single information processing system that functions as part of the collective nervous system of the planet. The intelligence of this network is collective or distributed: it is not centralized or localized in any particular individual, organization or computer system. Therefore, no one can command or control it. Rather, it self-organizes or emerges from the dynamic networks of interactions between its components. This is a property typical of complex adaptive systems.[2]

The World Wide Web in particular resembles the organization of a brain, with its web pages (playing a role similar to neurons) connected by hyperlinks (playing a role similar to synapses), together forming an associative network along which information propagates.[3] This analogy becomes stronger with the rise of social media, such as Facebook, where links between personal pages represent relationships in a social network along which information propagates from person to person.[4] Such propagation is similar to the spreading activation that neural networks in the brain use to process information in a parallel, distributed manner.

Although some of the underlying ideas were already expressed by Nikola Tesla in the late 19th century and were written about by many others before him, the term "global brain" was coined in 1982 by Peter Russell in his book The Global Brain.[5] How the Internet might be developed to achieve this was set out in 1986.[6] The first peer-reviewed article on the subject was published by Gottfried Mayer-Kress in 1995,[7] while the first algorithms that could turn the world-wide web into a collectively intelligent network were proposed by Francis Heylighen and Johan Bollen in 1996.[3][8]

Reviewing the strands of intellectual history that contributed to the global brain hypothesis, Francis Heylighen distinguishes four perspectives: organicism, encyclopedism, emergentism and evolutionary cybernetics. He asserts that these developed in relative independence but are now converging in his own scientific re-formulation.[9]

In the 19th century, the sociologist Herbert Spencer saw society as a social organism and reflected on its need for a nervous system. Entomologist William Wheeler developed the concept of the ant colony as a spatially extended organism, and in the 1930s he coined the term superorganism to describe such an entity.[10] This concept was later adopted by thinkers such as Joël de Rosnay in the book Le Cerveau Planétaire (1986) and Gregory Stock in the book Metaman (1993) to describe planetary society as a superorganism.

The mental aspects of such an organic system at the planetary level were perhaps first broadly elaborated by the palaeontologist and Jesuit priest Pierre Teilhard de Chardin. In 1945, he described a coming "planetisation" of humanity, which he saw as the next phase of accelerating human "socialisation".
Teilhard described both socialization and planetization as irreversible, irresistible processes of macrobiological development culminating in the emergence of a noosphere, or global mind (see Emergentism below).[11]

The more recent living systems theory describes both organisms and social systems in terms of the "critical subsystems" ("organs") they need to contain in order to survive, such as an internal transport system, a resource reserve, and a decision-making system. This theory has inspired several thinkers, including Peter Russell and Francis Heylighen, to define the global brain as the network of information processing subsystems for the planetary social system.

In the perspective of encyclopedism, the emphasis is on developing a universal knowledge network. The first systematic attempt to create such an integrated system of the world's knowledge was the 18th-century Encyclopédie of Denis Diderot and Jean le Rond d'Alembert. However, by the end of the 19th century, the amount of knowledge had become too large to be published in a single synthetic volume. To tackle this problem, Paul Otlet founded the science of documentation, now called information science. In the 1930s he envisaged a World Wide Web-like system of associations between documents and telecommunication links that would make all the world's knowledge available immediately to anybody. H. G. Wells proposed a similar vision of a collaboratively developed world encyclopedia that would be constantly updated by a global university-like institution. He called this a World Brain,[12] as it would function as a continuously updated memory for the planet, although the image of humanity acting informally as a more organic global brain is a recurring motif in many of his other works.[13]

Tim Berners-Lee, the inventor of the World Wide Web, was likewise inspired by the free-associative possibilities of the brain for his invention. The brain can link different kinds of information without any apparent link otherwise; Berners-Lee thought that computers could become much more powerful if they could imitate this functioning, i.e. make links between any arbitrary pieces of information.[14] The most powerful implementation of encyclopedism to date is Wikipedia, which integrates the associative powers of the world wide web with the collective intelligence of its millions of contributors, approaching the ideal of a global memory.[9] The Semantic Web, also first proposed by Berners-Lee, is a system of protocols to make pieces of knowledge and their links readable by machines, so that they can be used to make automatic inferences, thus providing this brain-like network with some capacity for autonomous "thinking" or reflection.

The emergentist perspective focuses on the emergent aspects of the evolution and development of complexity, including the spiritual, psychological, and moral-ethical aspects of the global brain, and is at present the most speculative approach. The global brain is here seen as a natural and emergent process of planetary evolutionary development. Here again Pierre Teilhard de Chardin attempted a synthesis of science, social values, and religion in his The Phenomenon of Man, which argues that the telos (drive, purpose) of the universal evolutionary process is the development of greater levels of both complexity and consciousness.
Teilhard proposed that if life persists, then planetization, as a biological process producing a global brain, would necessarily also produce a global mind, a new level of planetary consciousness and a technologically supported network of thoughts which he called the noosphere. Teilhard's proposed technological layer for the noosphere can be interpreted as an early anticipation of the Internet and the Web.[15]

Systems theorists and cyberneticians commonly describe the emergence of a higher-order system in evolutionary development as a "metasystem transition" (a concept introduced by Valentin Turchin) or a "major evolutionary transition".[16] Such a metasystem consists of a group of subsystems that work together in a coordinated, goal-directed manner. It is as such much more powerful and intelligent than its constituent systems. Francis Heylighen has argued that the global brain is an emerging metasystem with respect to the level of individual human intelligence, and has investigated the specific evolutionary mechanisms that promote this transition.[17]

In this scenario, the Internet fulfils the role of the network of "nerves" that interconnect the subsystems and thus coordinate their activity. The cybernetic approach makes it possible to develop mathematical models and simulations of the processes of self-organization through which such coordination and collective intelligence emerge. In 1994 Kevin Kelly, in his popular book Out of Control, posited the emergence of a "hive mind" from a discussion of cybernetics and evolutionary biology.[18]

In 1996, Francis Heylighen and Ben Goertzel founded the Global Brain group, a discussion forum grouping most of the researchers who had been working on the subject of the global brain, to further investigate this phenomenon. The group organized the first international conference on the topic in 2001 at the Vrije Universiteit Brussel. After a period of relative neglect, the global brain idea has recently seen a resurgence in interest, in part due to talks given on the topic by Tim O'Reilly, the Internet forecaster who popularized the term Web 2.0,[19] and Yuri Milner, the social media investor.[20] In January 2012, the Global Brain Institute (GBI) was founded at the Vrije Universiteit Brussel to develop a mathematical theory of the "brainlike" propagation of information across the Internet. In the same year, Thomas W. Malone and collaborators from the MIT Center for Collective Intelligence started to explore how the global brain could be "programmed" to work more effectively,[21] using mechanisms of collective intelligence. The complexity scientist Dirk Helbing and his NervousNet group have recently started developing a "Planetary Nervous System", which includes a "Global Participatory Platform", as part of the large-scale FuturICT project, thus preparing some of the groundwork for a global brain.[22]

In July 2017, Elon Musk founded the company Neuralink, which aims to create a brain-computer interface (BCI) with significantly greater information bandwidth than traditional human interface devices. Musk predicts that artificial intelligence systems will rapidly outpace human abilities in most domains and views them as an existential threat. He believes an advanced BCI would enable human cognition to remain relevant for longer.
The firm raised $27 million from 12 investors in 2017.[23]

A common criticism of the idea that humanity would become directed by a global brain is that this would reduce individual diversity and freedom,[24] and lead to mass surveillance.[25] This criticism is inspired by totalitarian forms of government, as exemplified by George Orwell's character of "Big Brother". It is also inspired by the analogy between collective intelligence or swarm intelligence and insect societies, such as beehives and ant colonies, in which individuals are essentially interchangeable. In a more extreme view, the global brain has been compared with the Borg,[26] a race of collectively thinking cyborgs conceived by the Star Trek science fiction franchise. Global brain theorists reply that the emergence of distributed intelligence would lead to the exact opposite of this vision.[27][28] James Surowiecki, in his book The Wisdom of Crowds, argued that the reason is that effective collective intelligence requires diversity of opinion, decentralization and individual independence.
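The brain analogy described earlier in this article, with pages as neurons, hyperlinks as synapses, and information spreading by activation, can be illustrated with a short sketch. The code below is a generic spreading-activation toy model over an assumed link graph; it is not taken from the Heylighen and Bollen algorithms cited above, and the graph, weights, decay factor, and step count are all assumptions made for the example.

```python
# Toy spreading-activation model: activation injected at one node ("query")
# propagates along weighted links, decaying at each hop. The graph, weights
# and decay factor are illustrative assumptions.

links = {                       # node -> {neighbour: link weight}
    "query": {"pageA": 0.9, "pageB": 0.4},
    "pageA": {"pageC": 0.8},
    "pageB": {"pageC": 0.5, "pageD": 0.7},
    "pageC": {},
    "pageD": {},
}

def spread(source, decay=0.6, steps=3):
    """Return the activation of every node after a few propagation steps."""
    activation = {node: 0.0 for node in links}
    activation[source] = 1.0
    for _ in range(steps):
        incoming = {node: 0.0 for node in links}
        for node, neighbours in links.items():
            for neighbour, weight in neighbours.items():
                incoming[neighbour] += activation[node] * weight * decay
        for node in links:
            activation[node] += incoming[node]
    return activation

for node, value in sorted(spread("query").items(), key=lambda kv: -kv[1]):
    print(f"{node}: {value:.3f}")

# Nodes reachable through strong, short link paths accumulate the most
# activation, mimicking how associative relevance spreads through a network.
```

In this toy model, activation simply accumulates rather than being normalised; it is only meant to show how relevance can propagate associatively along links rather than being looked up at a single central location.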
https://en.wikipedia.org/wiki/Global_brain
Intelligence amplification (IA), also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence, is the use of information technology to augment human intelligence. The idea was first proposed in the 1950s and 1960s by cybernetics and early computer pioneers.

IA is sometimes contrasted with AI (artificial intelligence), that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot. AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as IA needs technology merely as extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information processing capabilities of the human mind (see extended mind and distributed cognition).

The term intelligence amplification (IA) has enjoyed wide currency since William Ross Ashby wrote of "amplifying intelligence" in his Introduction to Cybernetics (1956). Related ideas were explicitly proposed as an alternative to artificial intelligence by Hao Wang from the early days of automatic theorem provers. Ashby wrote:

... "problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. ... It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution. It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. ... Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters—so that, when given difficult problems, it persistently gave correct answers—we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence". If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately.

"Man-Computer Symbiosis" is a key speculative paper published in 1960 by psychologist and computer scientist J.C.R. Licklider, which envisions that mutually interdependent, "living together", tightly coupled human brains and computing machines would complement each other's strengths to a high degree:

Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed.
The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.

In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet.)

Licklider's research was similar in spirit to that of his DARPA contemporary and protégé Douglas Engelbart. Both men's work helped expand the utility of computers beyond mere computational machines by conceiving and demonstrating them as a primary interface for humans to process and manipulate information.[1]

Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that this fact in turn controls our ability to develop new, improved technologies. He thus set himself the revolutionary task of developing computer-based technologies for manipulating information directly, and also of improving individual and group processes for knowledge work. Engelbart's philosophy and research agenda are most clearly and directly expressed in the 1962 research report Augmenting Human Intellect: A Conceptual Framework.[2] The concept of network-augmented intelligence is attributed to Engelbart based on this pioneering work. The report describes its goal as:

Increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable. And by complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.

In the same research report he addresses the term "intelligence amplification" as coined by Ashby, and reflects on how his proposed research relates to it.[3]

Engelbart subsequently implemented these concepts in his Augmented Human Intellect Research Center at SRI International, developing essentially an intelligence-amplifying system of tools (NLS) and co-evolving organizational methods, in full operational use by the mid-1960s within the lab. As intended,[4] his R&D team experienced increasing degrees of intelligence amplification, as both rigorous users and rapid-prototype developers of the system. For a sampling of research results, see their 1968 Mother of All Demos.
Howard Rheingold worked at Xerox PARC in the 1980s and was introduced to both Bob Taylor and Douglas Engelbart; Rheingold wrote about "mind amplifiers" in his 1985 book Tools for Thought.[5] Andrews Samraj, in Skin-Close Computing and Wearable Technology (2021), described human augmentation by two varieties of cyborgs: hard cyborgs and soft cyborgs. A humanoid walking machine is an example of the soft cyborg, and a pacemaker is an example of augmenting a human as a hard cyborg.

Arnav Kapur, working at MIT, wrote about human-AI coalescence: how AI can be integrated into the human condition as part of the "human self", as a tertiary layer to the human brain to augment human cognition.[6] He demonstrates this using a peripheral nerve-computer interface, AlterEgo, which enables a human user to silently and internally converse with a personal AI.[7][8]

In 2014 the technology of artificial swarm intelligence was developed to amplify the intelligence of networked human groups using AI algorithms modeled on biological swarms. The technology enables small teams to make predictions, estimations and medical diagnoses at accuracy levels that significantly exceed natural human intelligence.[9][10][11][12]

Shan Carter and Michael Nielsen introduce the concept of artificial intelligence augmentation (AIA): the use of AI systems to help develop new methods for intelligence augmentation. They contrast cognitive outsourcing (AI as an oracle, able to solve some large class of problems with better-than-human performance) with cognitive transformation (changing the operations and representations we use to think).[13] A calculator is an example of the former; a spreadsheet of the latter.

Ron Fulbright describes human cognitive augmentation in human/cog ensembles, in which humans work in collaborative partnership with cognitive systems (called cogs). By working together, human/cog ensembles achieve results superior to those obtained by the humans working alone or the cognitive systems working alone. The human component of the ensemble is therefore cognitively augmented. The degree of augmentation depends on the proportion of the total amount of cognition done by the human and that done by the cog. Six levels of cognitive augmentation have been identified.[14][15]

Augmented intelligence has been a repeating theme in science fiction. A positive view of brain implants used to communicate with a computer as a form of augmented intelligence is seen in Algis Budrys' 1976 novel Michaelmas. Fear that the technology will be misused by the government and military is an early theme. In the 1981 BBC serial The Nightmare Man the pilot of a high-tech mini submarine is linked to his craft via a brain implant but becomes a savage killer after ripping out the implant.

Perhaps the best-known writer exploring themes of intelligence augmentation is William Gibson, in work such as his 1981 story "Johnny Mnemonic", in which the title character has computer-augmented memory, and his 1984 novel Neuromancer, in which computer hackers interface through brain-computer interfaces with computer systems. Vernor Vinge looked at intelligence augmentation as a possible route to the technological singularity, a theme which also appears in his fiction.

Flowers for Algernon is an early example of augmented intelligence in science fiction literature.[16] First published as a short story in 1959, the plot concerns an intellectually disabled man who undergoes an experiment to increase his intelligence to genius levels.
His rise and fall is detailed in his journal entries, which become more sophisticated as his intelligence increases.
https://en.wikipedia.org/wiki/Intelligence_amplification
Miniaturization (British English: miniaturisation) is the trend to manufacture ever-smaller mechanical, optical, and electronic products and devices. Examples include the miniaturization of mobile phones and computers, and vehicle engine downsizing.

In electronics, the exponential scaling and miniaturization of silicon MOSFETs (MOS transistors)[1][2][3] leads to the number of transistors on an integrated circuit chip doubling every two years,[4][5] an observation known as Moore's law.[6][7] This leads to MOS integrated circuits such as microprocessors and memory chips being built with increasing transistor density, faster performance, and lower power consumption, enabling the miniaturization of electronic devices.[8][3]

The history of miniaturization is associated with the history of information technology, based on a succession of switching devices, each smaller, faster, and cheaper than its predecessor.[9] During the period referred to as the Second Industrial Revolution (c. 1870-1914), miniaturization was confined to two-dimensional electronic circuits used for the manipulation of information.[10] This orientation is demonstrated in the use of vacuum tubes in the first general-purpose computers. The technology gave way to the development of transistors in the 1950s and then to the integrated circuit (IC) approach which followed.[9]

The MOSFET was invented at Bell Labs between 1955 and 1960.[11][12][13][14][15][16] It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses,[17] due to its high scalability[1] and low power consumption, leading to increasing transistor density.[5] This made it possible to build high-density IC chips,[18] with reduced cost per transistor as transistor density increased.[19]

In the early 1960s, Gordon Moore, who later founded Intel, recognized that the ideal electrical and scaling characteristics of MOSFET devices would lead to rapidly increasing integration levels and unparalleled growth in electronic applications.[20] Moore's law, which he described in 1965 and which was later named after him,[21] predicted that the number of transistors on an IC for minimum component cost would double every 18 months.[6][7] In 1974, Robert H. Dennard at IBM recognized the rapid MOSFET scaling technology and formulated the related Dennard scaling rule.[22][23] Moore described the development of miniaturization during the 1975 International Electron Devices Meeting, confirming his earlier predictions.[19]

By 2004, electronics companies were producing silicon IC chips with switching MOSFETs that had feature sizes as small as 130 nanometers (nm), and development was also underway for chips a few nanometers in size through the nanotechnology initiative.[24] The focus is to make components smaller to increase the number that can be integrated into a single wafer, and this has required critical innovations, including increasing wafer size, the development of sophisticated metal connections between the chip's circuits, and improvements in the polymers used for masks (photoresists) in the photolithography processes.[21] These last two are the areas where miniaturization has moved into the nanometer range.[21]

Miniaturization became a trend in the last fifty years and came to cover not just electronic but also mechanical devices.[25] The process of miniaturizing mechanical devices is more complex due to the way the structural properties of mechanical parts change as they are reduced in scale.[25]
2015) is based on economically viable technologies that can shrink three-dimensional objects.[10] Inmedical technology, engineers and designers have been exploring miniaturization to shrink components to the micro and nanometer range. Smaller devices can have lower cost, be made more portable (e.g.: for ambulances), and allow simpler and less invasive medical procedures.[26]
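As a rough illustration of the two-year doubling trend described above, the following minimal Python sketch projects transistor counts from an assumed 1971 baseline of roughly 2,300 transistors (the figure commonly quoted for the first commercial microprocessor); real devices only loosely track this idealized curve.

```python
# Minimal sketch of Moore's law treated as a strict two-year doubling.
# The 1971 baseline of ~2,300 transistors is an illustrative assumption.
def projected_transistors(year, base_year=1971, base_count=2_300):
    """Idealized transistor count assuming one doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for y in (1971, 1991, 2011, 2021):
    print(y, f"{projected_transistors(y):,.0f}")
```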
https://en.wikipedia.org/wiki/Miniaturization
Thetechnological singularity—or simply thesingularity[1]—is ahypotheticalpoint in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences forhuman civilization.[2][3]According to the most popular version of the singularity hypothesis,I. J. Good'sintelligence explosionmodel of 1965, an upgradableintelligent agentcould eventually enter a positive feedback loop of successiveself-improvementcycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence which would culminate in a powerfulsuperintelligence, far surpassing allhuman intelligence.[4] The Hungarian-American mathematicianJohn von Neumann(1903-1957) became the first known person to use the concept of a "singularity" in the technological context.[5][6] Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence", introduced the idea of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.[7] Stanislaw Ulamreported in 1958 an earlier discussion with von Neumann "centered on theaccelerating progressof technology and changes in human life, which gives the appearance of approaching some essentialsingularityin the history of the race beyond which human affairs, as we know them, could not continue".[8]Subsequent authors have echoed this viewpoint.[3][9] The concept and the term "singularity" were popularized byVernor Vinge: first in 1983, in an article that claimed that, once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole";[10]and later in his 1993 essay "The Coming Technological Singularity",[4][9]in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate, and he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contribution to wider circulation of the notion wasRay Kurzweil's 2005 bookThe Singularity Is Near, predicting singularity by 2045.[9] Some scientists, includingStephen Hawking, have expressed concerns thatartificial superintelligence(ASI) could result in human extinction.[11][12]The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated.[citation needed] Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, includingPaul Allen,[13]Jeff Hawkins,[14]John Holland,Jaron Lanier,Steven Pinker,[14]Theodore Modis,[15]Gordon Moore,[14]andRoger Penrose.[16]One claim made was that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.[citation needed] Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according toPaul R. 
Ehrlich, changed significantly for millennia.[17]However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.[18] If a superhuman intelligence were to be invented—either through theamplification of human intelligenceor through artificial intelligence—it would, in theory, vastly improve over human problem-solving and inventive skills. Such an AI is referred to asSeed AI[19][20]because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AIwould far surpass human cognitive abilities. I. J. Goodspeculated that superhuman intelligence might bring about an intelligence explosion:[21][22] Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. One version of intelligence explosion is where computing power approaches infinity in a finite amount of time. In this version, once AIs are performing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[23] A superintelligence, hyperintelligence, or superhuman intelligence is a hypotheticalagentthat possesses intelligence far surpassing that of the brightest and most gifted human minds.[24]"Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.John von Neumann,Vernor VingeandRay Kurzweildefine the concept in terms of the technological creation of super intelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][25] The related concept "speed superintelligence" describes an AI that can function like a human mind, only much faster.[26]For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.[27]Such a difference in information processing speed could drive the singularity.[28] Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances inartificial intelligence(AI) will probably result in general reasoning systems that bypass human cognitive limitations. 
Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[29][30]A number offutures studiesfocus on scenarios that combine these possibilities, suggesting that humans are likely tointerface with computers, orupload their minds to computers, in a way that enables substantial intelligence amplification. The 2016 bookThe Age of EmbyRobin Hansondescribes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence.[31] Some writers use "the singularity" in a broader way to refer to any radical changes in society brought about by new technology (such asmolecular nanotechnology),[32][33][34]although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4] There have been numerous dates predicted for the attainment of singularity. In 1965,Goodwrote that it was more probable than not that an ultra-intelligent machine would be built within the twentieth century.[21] That computing capabilities for human-level AI would be available in supercomputers before 2010 was predicted in 1988 byMoravec, assuming that the current rate of improvement continued.[36] The attainment of greater-than-human intelligence between 2005 and 2030 was predicted byVingein 1993.[4] A singularity in 2021 was predicted byYudkowskyin 1996.[23] Human-level AI around 2029 and the singularity in 2045 were predicted by Kurzweil in 2005.[37][38]He reaffirmed these predictions in 2024 inThe Singularity is Nearer.[39] Human-level AI by 2040, and intelligence far beyond human by 2050, were predicted in 1998 by Moravec, revising his earlier prediction.[40] A confidence of 50% thathuman-level AIwould be developed by 2040–2050 was the outcome of four polls of AI researchers, conducted in 2012 and 2013 byBostromandMüller.[41][42] Elon Muskin March 2025 predicted that AI would be smarter than any individual human "in the next year or two", that AI would be smarter than all humans combined by 2029 or 2030, and that there was an 80 percent chance of a "good outcome" and a 20 percent chance of "annihilation".[43] Prominent technologists and academics dispute the plausibility of a technological singularity, includingPaul Allen,[13]Jeff Hawkins,[14]John Holland,Jaron Lanier,Steven Pinker,[14]Theodore Modis,[15]andGordon Moore,[14]whoselawis often cited in support of the concept.[44] Most proposed methods for creating superhuman ortranshumanminds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence includebioengineering,genetic engineering,nootropicdrugs, AI assistants, directbrain–computer interfacesandmind uploading.
These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, makes a singularity more likely.[27] Robin Hansonexpressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.[45]Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.[citation needed] The possibility of an intelligence explosion depends on three factors.[46]The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. However, as the intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement. There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to thealgorithmsused.[9]The former is predicted byMoore's Lawand the forecasted improvements in hardware,[47]and is comparatively similar to previous technological advances. But Schulman and Sandberg[48]argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond. A 2017 email survey of authors with publications at the 2015NeurIPSandICMLmachine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".[49] Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy toMoore's Lawsuggests that if the first doubling of speed took 18 months, the second would take 18 subjective months; or 9 external months, whereafter, four months, two months, and so on towards a speed singularity.[50][23]Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."[14] It is difficult to directly comparesilicon-based hardware withneurons. ButBerglas (2008)notes that computerspeech recognitionis approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as thehuman brain, as well as taking up a lot less space. However, the costs of training systems withdeep learningmay be larger.[citation needed][a] The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. 
Computer scientist and futurist Hans Moravec proposed in a 1998 book[53]that the exponential growth curve could be extended back through earlier computing technologies prior to theintegrated circuit. Ray Kurzweilpostulates alaw of accelerating returnsin which the speed of technological change (and more generally, all evolutionary processes)[54]increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied tonanotechnology),medical technologyand others.[55]Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[56]On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized ashyperbolicrather than exponential.[57] Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[58]He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[59] Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress,Stanislaw Ulamtells of a conversation withJohn von Neumannabout accelerating change: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[8] Kurzweil claims that technological progress follows a pattern ofexponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predictsparadigm shiftswill become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[60]Kurzweil believes that the singularity will occur by approximately 2045.[55]His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence. Oft-cited dangers include those commonly associated with molecular nanotechnology andgenetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject ofBill Joy's April 2000Wiredmagazine article "Why The Future Doesn't Need Us".[9][61] Some intelligence technologies, like "seed AI",[19][20]may also have the potential to not just make themselves faster, but also more efficient, by modifying theirsource code. 
These improvements would make further improvements possible, which would make further improvements possible, and so on. The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.[citation needed]An AI rewriting its own source code could do so while contained in anAI box. Second, as withVernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different.Eliezer Yudkowskycompares it to the changes that human intelligence brought: humans changed the world thousands of times quicker than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[62] There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.[63][64] Secondly, AIs could compete for the same scarce resources humankind uses to survive.[65][66]While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans.[67][68][69] Carl ShulmanandAnders Sandbergsuggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[70]An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang".[71] Some critics, like philosophersHubert Dreyfus[72]andJohn Searle,[73]assert that computers or machines cannot achievehuman intelligence. Others, like physicistStephen Hawking,[74]object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same. PsychologistSteven Pinkerstated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. 
Sheer processing power is not a pixie dust that magically solves all your problems."[14] Martin Ford[75]postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine".[76] Theodore Modis[77]andJonathan Huebner[78]argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computerclock ratesis slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[79] Theodore Modisholds the singularity cannot happen.[80][15][81]He claims the "technological singularity" and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing.[82]In a 2021 article, Modis pointed out that no milestones – breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy – had been observed in the previous twenty years while five of them would have been expected according to the exponential trend advocated by the proponents of the technological singularity.[83] AI researcherJürgen Schmidhuberstated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[84] Microsoft co-founderPaul Allenargued the opposite of accelerating returns, the complexity brake:[13]the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested byJoseph Tainterin hisThe Collapse of Complex Societies,[85]a law ofdiminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[78]The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse". Hofstadter(2006) raises concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees".[86]Nonetheless, he did not rule out the singularity in principle in the distant future[14]and in the light ofChatGPTand other recent advancements has revised his opinion significantly towards dramatic technological change in the near future.[87] Jaron Lanierdenies that the singularity is inevitable: "I do not think the technology is creating itself. 
It's not an autonomous process."[88]Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society onnotemphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, andself-determination... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[88] EconomistRobert J. Gordonpoints out that measured economic growth slowed around 1970 and slowed even further since the2008 financial crisis, and argues that the economic data show no trace of a coming Singularity as imagined by mathematicianI. J. Good.[89] Philosopher and cognitive scientistDaniel Dennettsaid in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant."[90] In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that alog-logchart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologistPZ Myerspoints out that many of the early evolutionary "events" were picked arbitrarily.[91]Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line ona log-log chart.Kelly(2006) argues that the way the Kurzweil chart is constructed with x-axis having time before present, it always points to the singularity being "now", for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart.[92] Some critics suggest religious motivations or implications of singularity, especially Kurzweil's version of it. The buildup towards the singularity is compared with Christian end-of-time scenarios. Beam calls it "aBuck Rogersvision of the hypothetical Christian Rapture".[93]John Graysays "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event".[94] David StreitfeldinThe New York Timesquestioned whether "it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today’sSilicon Valley—as a tool to slash corporate America’s head count."[95] Astrophysicist andscientific philosopherAdam Beckerdebunks Kurzweil's concept of human mind uploads to computers on the grounds that they are too fundamentally different and incompatible.[96] Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from thePaleolithicera until theNeolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. 
If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[97] The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[98][99]It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even anexistential threat.[100][101]Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including theFuture of Humanity Institute(until 2024), theMachine Intelligence Research Institute,[98]theCenter for Human-Compatible Artificial Intelligence, and theFuture of Life Institute. PhysicistStephen Hawkingsaid in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."[102]Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."[102]Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:[102] So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Berglas (2008)claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.[103][104][105]Anders Sandberghas also elaborated on this scenario, addressing various common counter-arguments.[106]AI researcherHugo de Garissuggests that artificial intelligences may simply eliminate the human racefor access to scarce resources,[65][63]and humans would be powerless to stop them.[107]Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[69] Bostrom (2002)discusses human extinction scenarios, and lists superintelligence as a possible cause: When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. According toEliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. 
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[108]Bill Hibbard (2014)proposes an AI design that avoids several dangers including self-delusion,[109]unintended instrumental actions,[63][110]and corruption of the reward generator.[110]He also discusses social impacts of AI[111]and testing AI.[112]His 2001 bookSuper-Intelligent Machinesadvocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator. While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of amajor evolutionary transitionthat merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article inTrends in Ecology & Evolutionargues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trustartificial intelligencewith our lives throughantilock braking in carsandautopilotsin planes... With one in three courtships leading to marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article further argues that from the perspective of theevolution, several previousMajor Transitions in Evolutionhave transformed life through innovations in information storage and replication (RNA,DNA,multicellularity, andcultureandlanguage). In the current stage of life's evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparableevolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5zettabytesin 2014 (5×10²¹ bytes).[114] In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10¹⁹ bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10³⁷ base pairs, equivalent to 1.325×10³⁷ bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[56]it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years.
This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[113] In February 2009, under the auspices of theAssociation for the Advancement of Artificial Intelligence(AAAI),Eric Horvitzchaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquireautonomy, and to what degree they could use such abilities to pose threats or hazards.[115] Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, somecomputer virusescan evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[115] Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.[116]Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent's goals. In a soft takeoff scenario, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AI's development.[118][119] Ramez Naamargues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations.Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form ofMoore's law.[120]Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probablymorethan twice as hard as creating a mind of intelligence 1."[121] J. Storrs Hallbelieves that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at thestarting pointof the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. 
Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[122] Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff".[123] Max Moredisagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[124] Eric Drexler, one of the founders ofnanotechnology, theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypotheticalbiological machines.[125]According toRichard Feynman, it was his former graduate student and collaboratorAlbert Hibbswho originally suggested to him (circa 1959) the idea of amedicaluse for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essayThere's Plenty of Room at the Bottom.[126] Moravec predicted in 1988 the possibility of "uploading" human mind into a human-like robot, achieving quasi-immortality by extreme longevity via transfer of the human mind between successive new robots as the old ones wear out; beyond that, he predicts later exponential acceleration of subjective experience of time leading to a subjective sense of immortality.[36] Kurzweil suggested in 2005 that medical advances would allow people to protect their bodies from the effects of aging, making thelife expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[127]Kurzweil further buttresses his argument by discussing current bio-engineering advances. 
Kurzweil suggestssomatic gene therapy; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[128] Beyond merely extending the operational life of the physical body,Jaron Lanierargues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."[129] A paper by Mahendra Prasad, published inAI Magazine, asserts that the 18th-century mathematicianMarquis de Condorcetwas the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.[130] An early description of the idea was made inJohn W. Campbell's 1932 short story "The Last Evolution".[131] In his 1958 obituary forJohn von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[8] In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.[21][22] In 1977,Hans Moravecwrote an article with unclear publishing status where he envisioned a development of self-improving thinking machines, a creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind."[132][133]The article describes the human mind uploading later covered in Moravec (1988). The machines are expected to reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and their abilities will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. In this view, there is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 inAnalog Science Fiction and Fact.[134][133] In 1981,Stanisław Lempublished hisscience fictionnovelGolem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency. In 1983,Vernor Vingeaddressed Good's intelligence explosion in print in the January 1983 issue ofOmnimagazine. 
In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines:[10][133] We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcherRay Solomonoffarticulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[9][135] In 1986, Vernor Vinge publishedMarooned in Realtime, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)".[136][137] In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collectionThreats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72):Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, andsoon.When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological "black hole", a technological singularity.[138] In 1988,Hans MoravecpublishedMind Children,[36]in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later. Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[4]spread widely on the internet and helped to popularize the idea.[139]This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." 
Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4] Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are fine to pass the world to their biological children, they should be equally fine to pass it to robots, their "mind" children. As per Minsky, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of sudden intelligence explosion, self-improving thinking machines or unpredictability beyond any specific event and the word "singularity" is not used.[140] Tipler's 1994 bookThe Physics of Immortalitypredicts a future where super-intelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy and people will achieve immortality when they reachOmega Point.[141]There is no talk of Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality. In 1996,Yudkowskypredicted a singularity by 2021.[23]His version of singularity involves intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then after 1 year, then after 6 months, then after 3 months, then after 1.5 months, and after more iterations, the "singularity" is reached.[23]This construction implies that the speed reaches infinity in finite time, since the doubling periods form a convergent geometric series (see the worked sum below). In 2000,Bill Joy, a prominent technologist and a co-founder ofSun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.[61] In 2005, Kurzweil publishedThe Singularity Is Near. Kurzweil's publicity campaign included an appearance onThe Daily Show with Jon Stewart.[142] From 2006 to 2012, an annualSingularity Summitconference was organized byMachine Intelligence Research Institute, founded byEliezer Yudkowsky. In 2007, Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[33][143]For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[33] In 2009, Kurzweil andX-PrizefounderPeter Diamandisannounced the establishment ofSingularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[144]Funded byGoogle,Autodesk,ePlanet Ventures, and a group oftechnology industryleaders, Singularity University is based atNASA'sAmes Research CenterinMountain View,California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.
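The four-year figure in Yudkowsky's 1996 construction described above is simply the sum of a geometric series of doubling periods; under the stated schedule the arithmetic works out as

\[
2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \;=\; \sum_{k=0}^{\infty} 2\left(\tfrac{1}{2}\right)^{k} \;=\; \frac{2}{1-\tfrac{1}{2}} \;=\; 4\ \text{years}.
\]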
In 2007, the Joint Economic Committee of theUnited States Congressreleased a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity.[145][146][147] FormerPresident of the United StatesBarack Obamaspoke about singularity in his interview toWiredin 2016:[148] One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?"
https://en.wikipedia.org/wiki/Technological_singularity
Insemiconductor electronics,Dennard scaling, also known asMOSFET scaling, is ascaling lawwhich states roughly that, astransistorsget smaller, theirpower densitystays constant, so that the power use stays in proportion with area; bothvoltageandcurrentscale (downward) with length.[1][2]The law, originally formulated forMOSFETs, is based on a 1974 paper co-authored byRobert H. Dennard, after whom it is named.[3] For long MOS transistors (i.e. one side is significantly longer than the other two), with constant electric field inside the MOS, Dennard scaling gives[4] $L \propto S^{-1}$, $W \propto S^{-1}$, $t_{\text{ox}} \propto S^{-1}$, $V_{\text{DD}} \propto S^{-1}$, $V_{\text{T}} \propto S^{-1}$, $N_{\text{A}} \propto S$, where parameters are scaled by a factor of $S$. Explanation of symbols: $L$ is the channel length, $W$ the channel width, $t_{\text{ox}}$ the gate-oxide thickness, $V_{\text{DD}}$ the supply voltage, $V_{\text{T}}$ the threshold voltage, and $N_{\text{A}}$ the substrate doping concentration. In fixed voltage scaling, the supply voltage $V_{\text{DD}}$ is held constant (at ~5 V) instead of scaling like $V_{\text{DD}} \propto S^{-1}$. This results in different scaling exponents. The clock frequency grows faster, at $S^{2}$ instead of $S^{1}$, but at the price of rapidly increasing power density $PD \propto S^{3}$ (a simplified numerical comparison of the two regimes is sketched at the end of this section). Fixed voltage scaling was the common scaling regime which ended around 2005 at the "power wall", when it was too difficult to keep the chip cool. Furthermore, at constant supply voltage, the field grows like $S^{1}$, and the off-current growsexponentiallywith the field, resulting in high static power consumption since the 90 nm node. Dennard's model of MOSFET scaling implies that, with every technology generation, transistor dimensions shrink by about 30%, so transistor area roughly halves, circuit delay falls by about 30% (operating frequency rises by about 40%), and power consumption per transistor roughly halves, so that power density stays constant even as transistor density doubles. Moore's lawsays that the number of transistors on a microchip doubles approximately every two years. Combined with Dennard scaling, this means thatperformance per joulegrows even faster, doubling about every 18 months (1.5 years). This trend is sometimes referred to asKoomey's law. The rate of doubling was originally suggested by Koomey to be 1.57 years,[6]but more recent estimates suggest this is slowing.[7] The dynamic (switching) power consumption of CMOS circuits is proportional to frequency.[8]Historically, the transistor power reduction afforded by Dennard scaling allowed manufacturers to drastically raise clock frequencies from one generation to the next without significantly increasing overall circuit power consumption. However, leakage current and threshold voltage do not scale with size, and so the power density increases with scaling. This eventually led to a power density that is too high. This is the "power wall", which caused Intel to cancelTejas and Jayhawkin 2004.[9] Since around 2005–2007 Dennard scaling appears to have broken down. As of 2016, transistor counts in integrated circuits are still growing, but the resulting improvements in performance are more gradual than the speed-ups resulting from significant frequency increases.[1][10]The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat ofthermal runawayand therefore further increases energy costs.[1][10]Since 2005, the clock frequency has stagnated at around 4 GHz, and the power consumption per CPU at around 100 W TDP. The breakdown of Dennard scaling and resulting inability to increase clock frequencies significantly has caused most CPU manufacturers to focus onmulticore processorsas an alternative way to improve performance.
An increased core count benefits many (though by no means all – seeAmdahl's law) workloads, but the increase in active switching elements from having multiple cores still results in increased overall power consumption and thus worsensCPU power dissipationissues.[11][12]The end result is that only some fraction of an integrated circuit can actually be active at any given point in time without violating power constraints. The remaining (inactive) area is referred to asdark silicon.
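A compact way to see why constant-field (Dennard) scaling kept chips coolable while fixed-voltage scaling ran into the "power wall" is to track dynamic power density under each regime with the simplified model P = C·V²·f per transistor and a transistor footprint shrinking as 1/S². The Python sketch below does exactly that; it deliberately ignores leakage, which is the effect that ultimately broke the scaling at small geometries.

```python
# Simplified dynamic-power model: P = C * V^2 * f per transistor,
# transistor area ~ 1/S^2. Leakage (the real problem below ~90 nm) is ignored.
def power_density(S, fixed_voltage=False):
    C = 1 / S                            # capacitance shrinks with dimensions
    V = 1.0 if fixed_voltage else 1 / S  # voltage held constant vs. scaled down
    f = S**2 if fixed_voltage else S     # frequency gain in each regime
    area = 1 / S**2                      # transistor footprint
    return (C * V**2 * f) / area

for S in (1, 2, 4):
    print(f"S={S}: constant-field {power_density(S):.0f}x, "
          f"fixed-voltage {power_density(S, fixed_voltage=True):.0f}x")
# Constant-field scaling keeps power density flat (1x); fixed-voltage grows as S^3.
```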
https://en.wikipedia.org/wiki/Dennard_scaling
Incomputing,performance per wattis a measure of theenergy efficiencyof a particularcomputer architectureorcomputer hardware. Literally, it measures the rate of computation that can be delivered by a computer for everywattof power consumed. This rate is typically measured by performance on theLINPACKbenchmark when trying to compare between computing systems: an example using this is theGreen500list of supercomputers. Performance per watt has been suggested to be a more sustainable measure of computing thanMoore's Law.[1] System designers buildingparallel computers, such asGoogle's hardware, pick CPUs based on their performance per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.[2] Spaceflight computers have hard limits on the maximum power available and also have hard requirements on minimum real-time performance. A ratio of processing speed to required electrical power is more useful than raw processing speed.[3] The performance and power consumption metrics used depend on the definition; reasonable measures of performance areFLOPS,MIPS, or the score for anyperformance benchmark. Several measures of power usage may be employed, depending on the purposes of the metric; for example, a metric might only consider the electrical power delivered to a machine directly, while another might include all power necessary to run a computer, such as cooling and monitoring systems. The power measurement is often the average power used while running the benchmark, but other measures of power usage may be employed (e.g. peak power, idle power). For example, the earlyUNIVAC Icomputer performed approximately 0.015 operations per watt-second (performing 1,905 operations per second (OPS), while consuming 125 kW). TheFujitsuFR-VVLIW/vector processorsystem on a chipin the 4 FR550 core variant released 2005 performs 51 Giga-OPS with 3 watts of power consumption resulting in 17 billion operations per watt-second.[4][5]This is an improvement by over a trillion times in 54 years. Most of the power a computer uses is converted into heat, so a system that takes fewer watts to do a job will require less cooling to maintain a givenoperating temperature. Reduced cooling demands makes it easier toquiet a computer. Lower energy consumption can also make it less costly to run, and reduce the environmental impact of powering the computer (seegreen computing). If installed where there is limitedclimate control, a lower power computer will operate at a lower temperature, which may make it more reliable. In a climate controlled environment, reductions in direct power use may also create savings in climate control energy. Computing energy consumption is sometimes also measured by reporting the energy required to run a particular benchmark, for instanceEEMBCEnergyBench. Energy consumption figures for a standard workload may make it easier to judge the effect of an improvement inenergy efficiency. When performance is defined as⁠operations/second⁠, then performance per watt can be written as⁠operations/watt-second⁠. Since a watt is one⁠joule/second⁠, then performance per watt can also be written as⁠operations/joule⁠. FLOPS per wattis a common measure. Like theFLOPS(Floating PointOperations Per Second) metric it is based on, the metric is usually applied toscientific computingand simulations involving manyfloating pointcalculations. 
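The UNIVAC I and FR-V figures quoted above can be checked directly: operations per joule is operations per second divided by watts, and the ratio between the two machines reproduces the "over a trillion times" improvement. A short Python check:

```python
# Operations per joule = operations per second divided by watts.
univac_ops_per_joule = 1_905 / 125_000   # ~0.015 operations per watt-second
frv_ops_per_joule = 51e9 / 3             # ~17 billion operations per watt-second
print(f"UNIVAC I:   {univac_ops_per_joule:.4f} ops/J")
print(f"FR-V FR550: {frv_ops_per_joule:.2e} ops/J")
print(f"improvement: {frv_ops_per_joule / univac_ops_per_joule:.2e}x")  # ~1.1e12
```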
As of June 2016[update], the Green500 list rates the two most efficient supercomputers highest – both at RIKEN and both based on the same manycore accelerator, the Japanese PEZY-SCnp, in addition to Intel Xeon processors – with the top one at 6673.8 MFLOPS/watt; the third ranked is the Chinese-technology Sunway TaihuLight (a much bigger machine, ranked 2nd on TOP500; the others are not on that list) at 6051.3 MFLOPS/watt.[6] In June 2012, the Green500 list rated BlueGene/Q, Power BQC 16C as the most efficient supercomputer on the TOP500 in terms of FLOPS per watt, running at 2,100.88 MFLOPS/watt.[7] In November 2010, an IBM machine, Blue Gene/Q, achieved 1,684 MFLOPS/watt.[8][9] On 9 June 2008, CNN reported that IBM's Roadrunner supercomputer achieved 376 MFLOPS/watt.[10][11]

As part of the Intel Tera-Scale research project, the team produced an 80-core CPU that can achieve over 16,000 MFLOPS/watt.[12][13] The future of that CPU is not certain. Microwulf, a low-cost desktop Beowulf cluster of four dual-core Athlon 64 X2 3800+ computers, runs at 58 MFLOPS/watt.[14] Kalray has developed a 256-core VLIW CPU that achieves 25,000 MFLOPS/watt, with the next generation expected to achieve 75,000 MFLOPS/watt;[15] however, in 2019 their latest embedded chip is 80-core and claims up to 4 TFLOPS at 20 W.[16] Adapteva announced the Epiphany V, a 1024-core 64-bit RISC processor intended to achieve 75 GFLOPS/watt,[17][18] but later announced that the Epiphany V was "unlikely" to become available as a commercial product. US Patent 10,020,436 (July 2018) claims three intervals of 100, 300, and 600 GFLOPS/watt.

Graphics processing units (GPUs) have continued to increase in energy usage, while CPU designers have recently[when?] focused on improving performance per watt. High-performance GPUs may draw a large amount of power, so intelligent techniques are required to manage GPU power consumption. Measures like the 3DMark2006 score per watt can help identify more efficient GPUs.[19] However, that may not adequately capture efficiency in typical use, where much time is spent doing less demanding tasks.[20]

With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, the performance per watt of a GPU design translates directly into the peak performance of a system that uses that design. Since GPUs may also be used for some general-purpose computation, their performance is sometimes measured in terms also applied to CPUs, such as FLOPS per watt.

While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer-generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power.[22] Benchmarks that measure power under heavy load may not adequately reflect typical efficiency. For instance, 3DMark stresses the 3D performance of a GPU, but many computers spend most of their time doing less intense display tasks (idle, 2D tasks, displaying video).
So the 2D or idle efficiency of the graphics system may be at least as significant for overall energy efficiency. Likewise, systems that spend much of their time in standby or soft off are not adequately characterized by just efficiency under load. To help address this, some benchmarks, like SPECpower, include measurements at a series of load levels.[23]

The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are among the subsystems affected by this, so their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring.[24][25]

Performance per watt also typically does not include full life-cycle costs. Since computer manufacturing is energy-intensive, and computers often have a relatively short lifespan, the energy and materials involved in production, distribution, disposal and recycling often make up significant portions of their cost, energy use, and environmental impact.[26][27] Energy required for climate control of the computer's surroundings is often not counted in the wattage calculation, but it can be significant.[28]

SWaP (space, wattage and performance) is a Sun Microsystems metric for data centers, incorporating power and space:

SWaP = performance / (space × power)

where performance is measured by any appropriate benchmark, space is the size of the computer, and power is its power consumption.[29] Reduction of power, mass, and volume is also important for spaceflight computers.[3]
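A small Python sketch of the SWaP calculation follows; the benchmark scores, rack-unit sizes, and wattages are made-up illustrative values, not measurements of real servers.

def swap_metric(performance, space_rack_units, power_watts):
    # SWaP = performance / (space * power); higher values are better
    return performance / (space_rack_units * power_watts)

server_a = swap_metric(performance=1_000, space_rack_units=2, power_watts=500)
server_b = swap_metric(performance=1_600, space_rack_units=4, power_watts=800)
print(server_a, server_b)   # 1.0 vs 0.5: server A delivers more performance per unit of space and power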
https://en.wikipedia.org/wiki/Performance_per_watt
Swanson's law is the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down about 75% every 10 years.[3] It is named after Richard Swanson, the founder of SunPower Corporation, a solar panel manufacturer.[4] The term Swanson's Law appears to have originated with an article in The Economist published in late 2012;[5][6] Swanson had been presenting such curves at technical conferences for several years.[7]

Swanson's law has been compared to Moore's law, which predicts the growing computing power of processors. Swanson's Law is a solar-industry-specific application of the more general Wright's Law, which states that there will be a fixed cost reduction for each doubling of manufacturing volume. The method used by Swanson is more commonly referred to as learning curve or, more precisely, experience curve analysis. It was first developed and applied to the aeronautics industry in 1936 by Theodore Paul Wright.[8] There are reports of it first being applied to the photovoltaics industry in 1975, and it saw wider use starting in the early 1990s.[9]

Crystalline silicon photovoltaic cell prices have fallen from $76.67 per watt in 1977 to $0.36 per watt in 2014.[5][6][10] Plotting the module price (in $/Wp) versus time shows prices dropping by about 10% per year.[11]
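The experience-curve form of the law can be written down directly: if each doubling of cumulative volume cuts the module price by about 20 percent, then price scales with cumulative volume raised to the power log2(0.8). The Python sketch below encodes this and, using only the prices quoted above, backs out the number of volume doublings that a 20 percent learning rate would imply for the 1977 to 2014 price decline; it is an illustration of the relationship, not a fit to actual shipment data.

import math

learning_rate = 0.20                       # ~20% price drop per doubling of cumulative volume
exponent = math.log(1 - learning_rate, 2)  # about -0.32

def module_price(cumulative_volume, reference_volume, reference_price):
    # Wright's/Swanson's experience curve: price falls as a power law in cumulative volume
    return reference_price * (cumulative_volume / reference_volume) ** exponent

print(module_price(2.0, 1.0, 100.0))       # ~80.0: one doubling cuts the price by 20%

# Number of doublings implied by the fall from $76.67/W (1977) to $0.36/W (2014)
doublings = math.log(76.67 / 0.36) / math.log(1 / (1 - learning_rate))
print(f"~{doublings:.0f} doublings of cumulative shipped volume")   # ~24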
https://en.wikipedia.org/wiki/Swanson%27s_law
Etymology(/ˌɛtɪˈmɒlədʒi/ET-im-OL-ə-jee[1]) is the study of the origin and evolution of words—including their constituent units ofsoundandmeaning—across time.[2]In the 21st century a subfield withinlinguistics, etymology has become a more rigorously scientific study.[1]Most directly tied tohistorical linguistics,philology, andsemiotics, it additionally draws upon comparativesemantics,morphology,pragmatics, andphoneticsin order to attempt a comprehensive and chronological catalogue of all meanings and changes that a word (and its related parts) carries throughout its history. The origin of any particular word is also known as itsetymology. For languages with a longwritten history, etymologists make use of texts, particularly texts about the language itself, to gather knowledge about how words were used during earlier periods, how they developed in meaning andform, or when and how they entered the language. Etymologists also apply the methods ofcomparative linguisticsto reconstruct information about forms that are too old for any direct information to be available. By analyzing related languages with a technique known as thecomparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way,word rootsin many European languages, for example, can be traced back to the origin of theIndo-European language family. Even though etymological research originated from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such asUralicandAustronesian. The wordetymologyis derived from the Ancient Greek wordἐτυμολογία(etumologíā), itself fromἔτυμον(étumon), meaning'true sense or sense of a truth', and the suffix-logia, denoting'the study or logic of'.[3][4] Theetymonrefers to the predicate (i.e. stem[5]or root[6]) from which a later word or morpheme derives. For example, the Latin wordcandidus, which means'white', is the etymon of Englishcandid. Relationships are often less transparent, however. Englishplace namessuch asWinchester,Gloucester,Tadcastershare different forms of asuffixthat originated as the Latincastrum'fort'. Reflexis the name given to a descendant word in a daughter language, descended from an earlier language. For example, Modern English heat is the reflex of the Old Englishhǣtu. Rarely, this word is used in reverse, and the reflex is actually the root word rather than the descendant word. However, this usage is usually filled by the termetymoninstead. A reflex will sometimes be described simply as adescendant,derivativeorderivedfrom an etymon (but see below).[citation needed] Cognatesorlexical cognatesare sets of words that have been inherited in direct descent from an etymological ancestor in a common parent language.[7]Doubletsoretymological twinsortwinlings(or possibly triplets, and so forth) are specifically cognates within the same language. Although they have the same etymological root, they tend to have different phonological forms, and to have entered the language through different routes. Arootis the source of related words within a single language (no language barrier is crossed). Similar to the distinction betweenetymonandroot, a nuanced distinction can sometimes be made between adescendantand aderivative. 
Aderivativeis one of the words which have their source in a root word, and were at some time created from the root word using morphological constructs such as suffixes, prefixes, and slight changes to the vowels or to the consonants of the root word. For example:unhappy,happily, andunhappilyare all derivatives of the root wordhappy. The termsrootandderivativeare used in the analysis ofmorphologicalderivation within a language in studies that are not concerned with historical linguistics and that do not cross the language barrier. Etymologists apply a number of methods to study the origins of words, some of which are: Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which arelanguage change, borrowing (i.e., the adoption ofloanwordsfrom other languages);word formationsuch asderivationandcompounding; andonomatopoeiaandsound symbolism(i.e., the creation of imitative words such asclickorgrunt). While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due tosound change, it is not readily obvious that the English wordsetis related to the wordsit(the former is originally acausativeformation of the latter). It is even less obvious thatblessis related toblood(the former was originally a derivative term meaning 'to mark with blood'). Semantic change may also occur. For example, the English wordbeadoriginally meant 'prayer', and acquired its modern meaning through the practice of counting the recitation of prayers by using small objects strung together (beads). One type of semantic change involves the quotidianisation ofmetaphor.[8]Thus the word "trauma", the predecessors of which apparently referenced an "open hole" in the body, has passed through some metaphorical stage or stages and now often refers to some sort of psychological wound.[9] The search for meaningful origins for familiar or strange words is far older than the modern understanding of linguistic evolution and the relationships of languages, which began no earlier than the 18th century. Etymology has been a form of witty wordplay, in which the supposed origins of words were creatively imagined to satisfy contemporary requirements. For example, the Greek poetPindar(bornc.522 BCE) employed inventive etymologies to flatter his patrons.Plutarchemployed etymologies insecurely based on fancied resemblances in sounds.Isidore of Seville'sEtymologiaewas an encyclopedic tracing of "first things" that remained uncritically in use in Europe until the sixteenth century.Etymologicum Genuinumis a grammatical encyclopedia edited atConstantinopleduring the 9th century, one of several similarByzantineworks. The 13th-centuryGolden Legend, as written byJacobus de Voragine, begins eachhagiographyof a saint with a fancifulexcursusin the form of an etymology.[10] Inancient India,Sanskritlinguists and grammarians were the first to undertake comprehensive analyses of linguistics and etymology. The study of Sanskrit etymology has provided Western scholars with the basis ofhistorical linguisticsand modern etymology. Four of the most famous Sanskrit linguists are: These were not the earliest Sanskrit grammarians, but rather followed an earlier line of scholars who lived several centuries earlier, who includedŚākaṭāyana(814–760 BCE), and of whom very little is known. 
The earliest of attested etymologies can be found in theVedas, in the philosophical explanations of theBrahmanas,Aranyakas, andUpanishads. The analyses ofSanskrit grammardone by the previously mentioned linguists involved extensive studies on the etymology (calledNiruktaorVyutpattiin Sanskrit) of Sanskrit words, because the ancient Indians considered sound and speech itself to be sacred and, for them, the words of the Vedas contained deep encoding of the mysteries of the soul and God. One of the earliest philosophical texts of the Classical Greek period to address etymology was theSocratic dialogueCratylus(c.360 BCE) byPlato. During much of the dialogue,Socratesmakes guesses as to the origins of many words, including the names of the gods. In hisodes, Pindar spins complimentary etymologies to flatter his patrons.Plutarch(Life ofNuma Pompilius) spins an etymology forpontifex, while explicitly dismissing the obvious, and actual "bridge-builder": The priests, called Pontifices.... have the name of Pontifices frompotens, powerful because they attend the service of the gods, who have power and command overall. Others make the word refer to exceptions of impossible cases; the priests were to perform all the duties possible; if anything lays beyond their power, the exception was not to be cavilled. The most common opinion is the most absurd, which derives this word from pons, and assigns the priests the title of bridge-makers. The sacrifices performed on the bridge were amongst the most sacred and ancient, and the keeping and repairing of the bridge attached, like any other public sacred office, to the priesthood. Isidore of Sevillecompiled a volume of etymologies to illuminate the triumph of religion. Each saint's legend inJacobus de Voragine'sGolden Legendbegins with an etymological discourse on their name: Lucy is said of light, and light is beauty in beholding, after that S. Ambrose saith: The nature of light is such, she is gracious in beholding, she spreadeth over all without lying down, she passeth in going right without crooking by right long line; and it is without dilation of tarrying, and therefore it is showed the blessed Lucy hath beauty of virginity without any corruption; essence of charity without disordinate love; rightful going and devotion to God, without squaring out of the way; right long line by continual work without negligence of slothful tarrying. In Lucy is said, the way of light.[11] Etymology in the modern sense emerged in the late 18th-century European academia, in the context of theAge of Enlightenment, although preceded by 17th-century pioneers such asMarcus Zuerius van Boxhorn,Gerardus Vossius,Stephen Skinner,Elisha Coles, andWilliam Wotton. The first known systematic attempt to prove the relationship between two languages on the basis of similarity ofgrammarandlexiconwas made in 1770 by the Hungarian,János Sajnovics, when he attempted to demonstrate the relationship betweenSamiandHungarian.[12] The origin of modernhistorical linguisticsis often traced toWilliam Jones, a Welsh philologist living in India, who in 1782 observed the genetic relationship between Greek and Latin. Jones published hisThe Sanscrit Languagein 1786, laying the foundation for the field ofIndo-European studies. 
However, as early as 1727, a Jesuit missionary in India, père Gargam, theorized that Sanskrit could be a "mother tongue arrived from another country" forTeluguandKannadabecause they contained many of the same Sanskrit terms; and in a letter to Abbé Barthélemy of theAcadémie des Inscriptions et Belles Lettresin 1767, another Jesuit missionary in India, pèreGaston-Laurent Coeurdoux, posed the question of the origin of the Sanskrit language and systematically argued his hypothesis of a "commune origine" of Sanskrit, Latin, and Greek, even putting Sanskrit terms and their Latin equivalents in columns.[13]Although they sent many Sanskrit-related texts to theBibliothèque du roi, such as literary translations, grammars, dictionaries, and other works, theJesuit Missionariesin theCarnatic Regionbetween 1695–1762, includingJean Calmette, Coeurdoux, Gargam,Jean François Pons, and others, have only recently begun receiving more attention in modern scholarship for their early contributions to fields like Indo-European Studies, historical linguistics, and comparative philology.[13][14] The study of etymology inGermanic philologywas introduced byRasmus Raskin the early 19th century and elevated to a high standard with theDeutsches Wörterbuch(German Dictionary) compiled by theBrothers Grimm. The successes of the comparative approach culminated in theNeogrammarianschool of the late 19th century. Still,Friedrich Nietzscheused etymological strategies (principally and most famously inOn the Genealogy of Morality, but also elsewhere) to argue that moral values have definite historical origins, where the meaning of concepts such as good and evil are shown to have changed over time according to the value-system that appropriates them. This strategy gained popularity in the 20th century, and philosophers, such asJacques Derrida, have used etymologies to indicate former meanings of words to de-center the "violent hierarchies" of Western philosophy.
https://en.wikipedia.org/wiki/Etymology
Aneponymis a person (real or fictitious) from whom something is said to take its name. The word is back-formed from "eponymous", from the Greek "eponymos" meaning "giving name". Here is alist of eponyms:
https://en.wikipedia.org/wiki/List_of_eponyms
Most legal doctrines are named after the cases. This section only includes doctrines named after the judges who formulated them.
https://en.wikipedia.org/wiki/List_of_eponymous_doctrines
Aneponymous diseaseis adisease, disorder, condition, or syndromenamed after a person, usually thephysicianor other health care professional who first identified the disease; less commonly, a patient who had the disease; rarely, a literary character who exhibited signs of the disease or an actor or subject of an allusion, as characteristics associated with them were suggestive of symptoms observed in the disorder. Eponymsare a longstanding tradition in Western science and medicine. Being awarded an eponym is regarded as an honor: "Eponymity, not anonymity, is the standard."[1]The scientific and medical communities regard it as bad form to attempt to form eponyms after oneself.[2] Ideally, to discuss something, it should have a name. When medicine lacked diagnostic tools to investigate and definitively pinpoint the underlying causes of mostdiseases, assigning an eponym afforded physicians a concise label for a symptom cluster versus cataloguing the multiple systemic features that characterized a patient’s illness. Most commonly, diseases arenamed forthe person, usually a physician, but occasionally another health care professional, who first described the condition—typically by publishing an article in a respectedmedical journal. Less frequently, an eponymous disease is named after a patient, examples beingLou Gehrig diseaseandHartnup disease. In the instance ofMachado–Joseph disease, the eponym is derived from the surnames of two families in which the condition was initially described. Examples of eponyms named for persons who displayed characteristics attributed to a syndrome include:Lazarus syndrome, named for a biblical character; and Miss Havisham syndrome, named for aDickenscharacter, andPlyushkin syndrome, named for aGogolcharacter, both fictional persons (the latter two also happen to be alternative names for the same symptom complex). Two eponymous disorders that follow none of the foregoing conventions are:Fregoli delusion, which derives its name from an actor whose character shifts mimicked the delusion it describes; and,Munchausen syndromewhich derives from a literary allusion to Baron von Munchausen, whose personal habits were suggestive of the symptom cluster associated with it. Disease naming conventions which reference place names (such asBornholm disease,Lyme disease, andEbola virus disease) are properly termed toponymic, although an NLM/NIH online publication described them as eponymic.[3]Diseases named for animals with which they are associated, usually as a vector, are properly styled as zoonymic; cat scratch fever and monkeypox are examples. Those named for association with a particular occupation or trade, examples of which includenun's knee,tennis elbow, andmad hatter's disease, are properly described as occupational diseases. 
In May 2015, theWorld Health Organization, in collaboration with theWorld Organisation for Animal Health(OIE) and the Food and Agriculture Organization of the United Nations (FAO), released a statement on the Best Practices for the Naming of New Human Infectious Diseases "with the aim to minimize unnecessary negative impact of disease names on trade, travel, tourism or animal welfare, and avoid causing offence to any cultural, social, national, regional, professional or ethnic groups."[4]These guidelines emerged in response to backlash against people and places, based on the vernacular names of infectious diseases such asMiddle East respiratory syndrome, and the2009 swine flu pandemic.[5]These naming conventions are not intended to replace theInternational Classification of Diseases, but rather, are guidelines for scientists, national authorities, the national and international media and other stakeholders who may be the first to discuss a disease publicly. In 1975, the Canadian National Institutes of Health held a conference that discussed the naming of diseases and conditions. This was reported inThe Lancetwhere the conclusion was summarized as: "The possessive use of an eponym should be discontinued, since the author neither had nor owned the disorder."[6]Medical journals,dictionariesandstyle guidesremain divided on this issue. European journals tend towards continued use of the possessive, while US journals are largely discontinuing its use.[7]The trend in possessive usage varies between countries, journals, and diseases.[8] The problem is, in fact, that thepossessivecase was given its misleading name for historical reasons and that now even educated people, if they are not linguists, often make incorrect assumptions and decisions based on this misleading name. Nevertheless, no native speakers would accept the ungrammatical "men department" as a possible way of saying "men's department" nor claim that this "possessive" and obligatory apostrophe in any way implies that men possess the department. This case was called thegenitiveuntil the 18th century and (like the genitive case in other languages) in fact expresses much more thanpossession. For example, in the expressions "the school's headmaster", "the men's department", and "tomorrow's weather", the school does not own/possess the headmaster, men do not own/possess the department, and tomorrow does not/will not own the weather. Most disagreements about the use of possessive forms of nouns and of theapostropheare due to the erroneous opinion that a term should not use an apostrophe if it does not express possession.[9] In the words ofMerriam-Webster's Dictionary of English Usage:[10] The argument is a case of fooling oneself with one's own terminology. After the 18th-century grammarians began to refer to the genitive case as the possessive case, grammarians and other commentators got it into their heads that the only use of the case was to show possession.... Simply changing the name of the genitive does not change or eliminate any of its multiple functions. This dictionary also cites a study[11]which found that only 40% of the possessive forms were used to indicate actual possession.[12] Associating an individual's name with a disease merely based on describing it confers only an eponymic; the individual must have been either affected by the disease or have died from it for the name to be termed auto-eponymic. 
Thus, an 'auto-eponym' is a medical condition named in honor of: a physician or other health care professional who was affected by or died as a result of the disease which he had described or identified; or, a patient, who was not a health care professional, but suffered from or died as a result of the disease.[13]Auto-eponyms may use either the possessive or non-possessive form, with the preference to use the non-possessive form for a disease named for a physician or health care professional who first described it and the possessive form in cases of a disease named for a patient (commonly, but not always, the first patient) in whom the particular disease was identified.[14]Autoeponyms listed in this entry conform to those conventions with regard to the possessive and non-possessive forms. Examples of autoeponyms include: The current trend is away from the use of eponymous disease names and towards a medical name that describes either the cause or primary signs.[4]Reasons for this include: Arguments for maintaining eponyms include:[citation needed] The usage of the genitive apostrophe in disease eponyms has followed different trends. While it remains common for some diseases, it has dwindled for others.[17] As described above, multiple eponyms can exist for the same disease. In these instances, each is listed individually (except as described in item 1 below), followed by an in-line parenthetical entry beginning 'aka' ('also known as') that lists all alternative eponyms. This facilitates the use of the list for a reader who knows a particular disease only by one of its eponyms, without the necessity of cross-linking entries. It sometimes happens that an alternative eponym, if listed separately, would immediately alphabetically precede or succeed another eponymous entry for the same disease. Three conventions have been applied to these cases: Some eponyms have an alternative entry that includes the name(s) of additional individuals. An example is Adams-Stokes syndrome; one of its alternative eponyms is Gerbec–Morgagni–Adams–Stokes syndrome. The entry for Adams-Stokes only names the two individuals (Adams and Stokes) whose names are associated with the entry as listed; the later entry for the alternative Gerbec–Morgagni–Adams–Stokes syndrome names all four of the individuals (Gerbec, Morgani, Adams, and Stokes) who are associated with the longer named entry. Signs and symptomsSyndromeDisease Medical diagnosisDifferential diagnosisPrognosis AcuteChronicCure Eponymous diseaseAcronym or abbreviationRemission
https://en.wikipedia.org/wiki/List_of_eponymous_diseases
This is a list ofetymologicallists. See:Medical terminology African—Americas—Arabic—Austronesian—Basque/Iberian—Celtic—Chinese— Etruscan —French—Germanic—Greek—Indo-Aryan— Iranian —Italic—Latin—Semitic—Turkic—uncertain—various Dacian
https://en.wikipedia.org/wiki/Lists_of_etymologies
This list includes well known paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit into more than one category. This list collects only scenarios that have been called aparadoxby at least one source and have their own article in this encyclopedia. These paradoxes may be due to fallacious reasoning (falsidical), or an unintuitive solution (veridical). The termparadoxis often used to describe a counter-intuitive result. However, some of these paradoxes qualify to fit into the mainstream viewpoint of a paradox, which is a self-contradictory result gained even while properly applying accepted ways ofreasoning. These paradoxes, often calledantinomy,point out genuine problems in our understanding of the ideas oftruthanddescription. These paradoxes,insolubilia(insolubles), have in common a contradiction arising from eitherself-referenceorcircular reference, in which several statements refer to each other in a way that following some of the references leads back to the starting point. One class of paradoxes in economics are theparadoxes of competition, in which behavior that benefits a lone actor would leave everyone worse off if everyone did the same. These paradoxes are classified into circuit, classical and Marx paradoxes.
https://en.wikipedia.org/wiki/List_of_paradoxes
This is a list ofscientific laws named after people(eponymous laws). For other lists of eponyms, seeeponym.
https://en.wikipedia.org/wiki/List_of_scientific_laws_named_after_people
This is a list of notabletheorems. Lists of theorems and similar statements include: Most of the results below come frompure mathematics, but some are fromtheoretical physics,economics, and otherappliedfields.
https://en.wikipedia.org/wiki/List_of_theorems
This is a list ofscientific phenomenaand concepts named after people(eponymous phenomena). For other lists of eponyms, seeeponym.
https://en.wikipedia.org/wiki/Scientific_phenomena_named_after_people
In artificial intelligence (AI), a foundation model, also known as a large X model (LxM), is a machine learning or deep learning model trained on vast datasets so that it can be applied across a wide range of use cases.[1] Generative AI applications like large language models (LLMs) are common examples of foundation models.[1]

Building foundation models is often highly resource-intensive, with the most advanced models costing hundreds of millions of dollars to cover the expenses of acquiring, curating, and processing massive datasets, as well as the compute power required for training.[2] These costs stem from the need for sophisticated infrastructure, extended training times, and advanced hardware, such as GPUs. In contrast, adapting an existing foundation model for a specific task or using it directly is far less costly, as it leverages pre-trained capabilities and typically requires only fine-tuning on smaller, task-specific datasets.

Early examples of foundation models are language models (LMs) like OpenAI's GPT series and Google's BERT.[3][4] Beyond text, foundation models have been developed across a range of modalities—including DALL-E and Flamingo[5] for images, MusicGen[6] for music, and RT-2[7] for robotic control. Foundation models are also being developed for fields like astronomy,[8] radiology,[9] genomics,[10] music,[11] coding,[12] time-series forecasting,[13] mathematics,[14] and chemistry.[15]

The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021[16] to mean "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks".[17] This was based on their observation that preexisting terms, while overlapping, were not adequate, stating that "'(large) language model' was too narrow given [the] focus is not only language; 'self-supervised model' was too specific to the training objective; and 'pretrained model' suggested that the noteworthy action all happened after 'pretraining'."[18] The term "foundation model" was chosen over "foundational model"[19] because "foundational" implies that these models provide fundamental principles in a way that "foundation" does not.[20]

As governments regulate foundation models, new legal definitions have emerged. The United States's definitions are the only ones to make reference to the size of a foundation model, and they differ on magnitude. Beyer and Eshoo's definition also specifies that foundation models must achieve a level of performance high enough to be a potential danger. In contrast, the E.U. definition requires the model to be designed for generality of output. All definitions agree that foundation models must be trained on a broad range of data with potential applications in many domains.

Technologically, foundation models are built using established machine learning techniques like deep neural networks, transfer learning, and self-supervised learning. Foundation models differ from previous techniques in that they are general-purpose models functioning as reusable infrastructure, rather than bespoke, one-off task-specific models. Advances in computer parallelism (e.g., CUDA GPUs), new developments in neural network architecture (e.g., Transformers), and the increased use of training data with minimal supervision all contributed to the rise of foundation models.
Foundation models began to materialize as the latest wave of deep learning models in the late 2010s.[23]Relative to most prior work on deep learning, these language models demonstrated the potential of training on much larger web-sourced datasets using self-supervised objectives (e.g. predicting the next word in a large corpus of text). These approaches, which draw upon earlier works likeword2vecandGloVe, deviated from prior supervised approaches that required annotated data (e.g. crowd-sourced labels). The 2022 releases ofStable DiffusionandChatGPT(initially powered by the GPT-3.5 model) led to foundation models and generative AI entering widespread public discourse. Further, releases ofLLaMA, Llama 2, andMistralin 2023 contributed to a greater emphasis placed on how foundation models are released with open foundation models garnering a lot of support[24]and scrutiny.[25] Certain highly advanced foundation models are termed "frontier models", which have the potential to "possess dangerous capabilities sufficient to pose severe risks to public safety."[26]These "dangerous capabilities" stem from the accidental or intentional misuse of such models, which in conjunction with their powerful nature can lead to severe harms. As foundation models continue to improve, some AI researchers speculate that almost all next-generation foundation models will be considered frontier models. Since the concept of dangerous capabilities is inherently subjective, there is no strict designation for what foundation models qualify as frontier models. However, some generally held ideas for sufficiently dangerous capabilities include: Due to frontier models' unique capabilities, it is difficult to effectively regulate their development and deployment. Because of their emergent nature, new dangerous capabilities can appear on their own in frontier models, both in the development stage and after being deployed.[26]Additionally, since frontier models continue to adapt after deployment, it remains difficult to mitigate all harms that arise from already-deployed models. If a frontier model happens to be open-source or is released online, the model can also disseminate rapidly, further hampering regulators by creating a lack of accountability. Due to their adaptability to a wide range of use-cases, foundation models are sometimes considered to be examples of general-purpose AI. In designing the EU AI Act, the European Parliament has stated that a new wave of general-purpose AI technologies shapes the overall AI ecosystem.[31]The fuller structure of the ecosystem, in addition to the properties of specific general-purpose AI systems, influences the design of AI policy and research.[32]General-purpose AI systems also often appear in people's everyday lives through applications and tools likeChatGPTorDALL-E. Government agencies like EU Parliament have identified regulation of general-purpose AI, such as foundation models, to be a high priority. General-purpose AI systems are often characterized by large size, opacity, and potential for emergence, all of which can create unintended harms. Such systems also heavily influence downstream applications, which further exacerbates the need for regulation. In regards to prominent legislation, a number of stakeholders have pushed for theEU AI Actto include restrictions on general-purpose AI systems, all of which would also apply to foundation models. For a foundation model to effectively generalize, it must acquire rich representations of the training data. 
As a result, expressive model architectures that efficiently process large-scale data are often preferred in building foundation models.[17] Currently, the Transformer architecture is the de facto choice for building foundation models across a range of modalities.[33]

Foundation models are built by optimizing one or more training objectives, mathematical functions that determine how model parameters are updated based on the model's predictions on training data.[34] Language models are often trained with a next-token prediction objective, which measures the extent to which the model is able to predict the next token in a sequence (see the minimal sketch below). Image models are commonly trained with contrastive learning or diffusion training objectives. For contrastive learning, images are randomly augmented before being evaluated on the resulting similarity of the model's representations. For diffusion models, images are noised and the model learns to gradually de-noise them via the objective. Multimodal training objectives also exist, with some separating images and text during training, while others examine them concurrently.[35] In general, the training objectives for foundation models promote the learning of broadly useful representations of data.

With the rise of foundation models and the larger datasets that power them, a training objective must be able to parse through internet-scale data for meaningful data points. Additionally, since foundation models are designed to solve a general range of tasks, training objectives ought to be domain complete, i.e. able to support a broad set of downstream capabilities within the given domain. Lastly, foundation model training objectives should scale well and be computationally efficient; with model size and compute power both being relevant constraints, a training objective must be able to overcome such bottlenecks.

Foundation models are trained on a large quantity of data, working under the maxim "the more data, the better."[36] Performance evaluation does show that more data generally leads to better performance, but other issues arise as data quantity grows. Tasks like managing the dataset, integrating data across new applications, ensuring adherence to data licenses, and maintaining data quality all become more difficult as data size grows. The specific demands of foundation models have only exacerbated such issues, as it remains the norm for large foundation models to use public web-scraped data. Foundation models also draw on search engine data and SEO meta tag data. Public web data remains a plentiful resource, but it also demands stringent moderation and data processing from foundation model developers before it can be successfully integrated into the training pipeline.[37]

Training foundation models often runs the risk of violating user privacy, as private data can be disclosed, collected, or used in ways beyond the stated scope. Even if no private data is leaked, models can still inadvertently compromise security through learned behavior in the resulting foundation model.[38] Data quality is another key point, as web-scraped data frequently contains biased, duplicate, and toxic material. Once foundation models are deployed, ensuring high-quality data is still an issue, as undesirable behavior can still emerge from small subsets of data.
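As a minimal illustration of the next-token prediction objective described above, the NumPy sketch below scores a toy sequence against random stand-in logits; a real foundation model would produce the logits with a large transformer and average this same cross-entropy over an internet-scale corpus.

import numpy as np

# Toy next-token prediction loss: average cross-entropy between the model's
# predicted distribution over the vocabulary at each position and the token
# that actually comes next in the sequence.
vocab = ["<bos>", "the", "cat", "sat", "."]
token_ids = np.array([0, 1, 2, 3, 4])                        # "<bos> the cat sat ."

rng = np.random.default_rng(0)
logits = rng.normal(size=(len(token_ids) - 1, len(vocab)))   # stand-in for model outputs

def next_token_loss(logits, token_ids):
    targets = token_ids[1:]                                  # predict token t+1 from the prefix up to t
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

print(f"cross-entropy: {next_token_loss(logits, token_ids):.3f}")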
The size of foundation models also brings about issues with the computer systems they run on. The average foundation model is too large to be run within a single accelerator's memory, and the initial training process requires an expensive amount of resources.[39] Such issues are predicted to worsen further as foundation models grow to new heights. Due to this constraint, researchers have begun looking into compressing model size through tight model inference.

GPUs are the most common choice of compute hardware for machine learning, due to high memory storage and strong power. Typical foundation model training requires many GPUs, all connected in parallel with fast interconnects. Acquiring a sufficient number of GPUs of requisite compute efficiency is a challenge for many foundation model developers, one that has led to an increasing dilemma in the field: larger models require greater compute power, but often at the cost of improved compute efficiency. Since training remains time-consuming and expensive, the tradeoff between compute power and compute efficiency means that only a few select companies can afford the production costs of large, state-of-the-art foundation models. Some techniques like compression and distillation can make inference more affordable, but they fail to completely shore up this weakness.

The accuracy and capabilities of foundation models often scale predictably with the size of the model and the amount of training data. Specifically, scaling laws have been discovered, which are data-based empirical trends that relate resources (data, model size, compute usage) to model capabilities. In particular, a model's scale is defined by compute, dataset size, and the number of parameters, all of which exhibit a power-law relationship with end performance (see the illustrative sketch below). However, broken scaling laws[40] have been discovered, in which this relationship smoothly transitions (at points referred to as break(s)) from a power law with one exponent to a power law with another (different) exponent. When one does not collect any points near (or after) the break(s), it can be difficult to obtain an accurate extrapolation.

Foundation models are inherently multi-purpose: using these models for a specific use case requires some form of adaptation. At a minimum, models need to be adapted to perform the task of interest (task specification), but often better performance can be achieved by more extensive adaptation to the domain of interest (domain specialization). A variety of methods (e.g. prompting, in-context learning, fine-tuning, LoRA) provide different tradeoffs between the costs of adaptation and the extent to which models are specialized. Some major facets to consider when adapting a foundation model are compute budget and data availability. Foundation models can be very large, up to trillions of parameters in size, so adapting the entirety of a foundation model can be computationally expensive. Therefore, developers sometimes adapt only the last neural layer or only the bias vectors to save time and space.[41] For particularly niche applications, specific data may also not be available to adapt the foundation model sufficiently. In such circumstances, data must be manually labeled, which is costly and can demand expert knowledge.

Evaluation is a key part of developing foundation models. Not only does evaluation allow for tracking the progress of high-performance models, it also creates benchmarks for future model development. Stakeholders rely on evaluations to understand model behaviors and gain insight into their various attributes.
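The following Python sketch illustrates the power-law scaling and the notion of a break point mentioned above. The constants and the break location are hypothetical; real scaling-law coefficients are estimated empirically for each model family.

# Illustrative power-law scaling: loss(n) = a * n ** (-alpha). A "broken" scaling
# law switches to a different exponent beyond some break point n_break while
# keeping the curve continuous at the break. All constants here are hypothetical.
a, alpha = 10.0, 0.10
alpha_after_break, n_break = 0.03, 1e9

def loss(n_params):
    if n_params <= n_break:
        return a * n_params ** (-alpha)
    loss_at_break = a * n_break ** (-alpha)
    return loss_at_break * (n_params / n_break) ** (-alpha_after_break)

for n in [1e6, 1e8, 1e9, 1e11]:
    print(f"{n:.0e} parameters -> loss {loss(n):.3f}")
# Extrapolating the pre-break exponent past n_break would overestimate the improvement.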
Traditionally, foundation models are evaluated relative to each other through standardized task benchmarks likeMMLU,[42]MMMU,[43]HumanEval,[44]and GSM8K.[45]Given that foundation models are multi-purpose, increasingly meta-benchmarks are developed that aggregate different underlying benchmarks. Examples include LM-Harness,[46]BIG-Bench,[47]HELM,[48]OpenLLM Leaderboard,[49]DecodingTrust,[50]and HEIM.[51] Since foundation models' utility depends on their own general capabilities and the performance of fine-tuned applications, evaluation must cover both metrics. Proper evaluation examines both a foundation model's downstream applications in aggregate and the direct properties the foundation model holds. To ensure further equity in evaluation, certain existing evaluation frameworks account for all adaptation resources, which leads to more informed analyses for the benefit of all stakeholders.[52] Foundation models' general capabilities allow them to fulfill a unique role in the AI ecosystem,[53]fueled by many upstream and downstream technologies.[1]Training a foundation model requires several resources (e.g. data, compute, labor, hardware, code), with foundation models often involving immense amounts of data and compute (also referred to as computational power). Due to foundation models' large development costs and inexpensive adaptation requirements, the AI landscape has shifted to a small subset of AI companies making foundation models for downstream adaptation.[54]Thus, most foundation model companies outsource this step to specialized data providers (e.g. Scale AI,[55]Surge[56]) and compute providers (e.g.Amazon Web Services,Google Cloud,Microsoft Azure). The foundation model developer itself will then take the data and use the supplied compute to actually train the foundation model. After the foundation model is completely built, much of the data and labor requirements abate. In this development process, hardware and compute are the most necessary, and also the most exclusive resources. To train larger and more complex AI, a sufficient amount of compute is key. However, compute is consolidated in the hands of a few, select entities, which most foundation model developers depend on. As such, the foundation model pipeline is concentrated heavily around these providers. Compute is also costly; in 2023, AI companies spent more than 80% of total capital on compute resources.[58] Foundation models require a large amount of general data to power their capabilities. Early foundation models scraped from subsets of the internet to provide this data information. As the size and scope of foundation models grows, larger quantities of internet scraping becomes necessary, resulting in higher likelihoods of biased or toxic data. This toxic or biased data can disproportionately harm marginalized groups and exacerbate existing prejudices.[59] To address this issue of low-quality data that arose with unsupervised training, some foundation model developers have turned to manual filtering. This practice, known as data labor, comes with its own host of issues.[60]Such manual data detoxification is often outsourced to reduce labor costs, with some workers making less than $2 per hour.[61] The foundation model will then be hosted online either via the developer or via an external organization. Once released, other parties can create applications based on the foundation model, whether through fine-tuning or wholly new purposes. 
People can then access these applications to serve their various means, allowing one foundation model to power and reach a wide audience. After a foundation model is built, it can be released in one of many ways. There are many facets to a release: the asset itself, who has access, how access changes over time, and the conditions on use.[62]All these factors contribute to how a foundation model will affect downstream applications.[63]In particular, the two most common forms of foundation model release are through APIs and direct model downloads. When a model is released via anAPI, users can query the model and receive responses, but cannot directly access the model itself. Comparatively, the model could be directly downloadable for users to access and modify. Both release strategies are often classified as an open release. The exact definition of an open release is disputed, but widely accepted requirements are provided by theOpen Source Initiative. Some open foundation models are:PaLM 2,Llama 2,Granite, andMistral. While open foundation models can further research and development more easily, they are also more susceptible to misuse. Open foundation models can be downloaded by anyone, and particularly powerful models can be fine-tuned to intentionally or unintentionally cause harm. During a closed release, the foundation model cannot be accessed by the public, but is used internally by an organization. Such releases are considered safer, but offer no additional value to the research community or the public at large. Some foundation models likeGoogle DeepMind's Flamingo[64]are fully closed, meaning they are available only to the model developer; others, such asOpenAI'sGPT-4, are limited access, available to the public but only as ablack box; and still others, such asMeta's Llama 2 are open, with broadly available model weights enabling downstream modification and scrutiny.
https://en.wikipedia.org/wiki/Foundation_model
Artificial general intelligence(AGI)—sometimes calledhuman‑level intelligence AI—is a type ofartificial intelligencethat would match or surpass human capabilities across virtually all cognitive tasks.[1][2] Some researchers argue that state‑of‑the‑artlarge language modelsalready exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved.[3]AGI is conceptually distinct fromartificial superintelligence(ASI), which would outperform the best human abilities across every domain by a wide margin.[4]AGI is considered one of the definitions ofstrong AI. Unlikeartificial narrow intelligence(ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved.[5] Creating AGI is a primary goal of AI research and of companies such asOpenAI,[6]Google,[7]andMeta.[8]A 2020 survey identified 72 active AGIresearch and developmentprojects across 37 countries.[9] The timeline for achieving human‑level intelligence AI remains deeply contested. Recent surveys of AI researchers give median forecasts ranging from the early 2030s to mid‑century, while still recording significant numbers who expect arrival much sooner—or never at all.[10][11][12]There is debate on the exact definition of AGI and regarding whether modernlarge language models(LLMs) such asGPT-4are early forms of AGI.[3]AGI is a common topic inscience fictionandfutures studies.[13][14] Contention exists over whether AGI represents anexistential risk.[15][16][17]Many AI expertshave statedthat mitigating the risk of human extinction posed by AGI should be a global priority.[18][19]Others find the development of AGI to be in too remote a stage to present such a risk.[20][21] AGI is also known as strong AI,[22][23]full AI,[24]human-level AI,[25]human-level intelligent AI, or general intelligent action.[26] Some academic sources reserve the term "strong AI" for computer programs that will experiencesentienceorconsciousness.[a]In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities.[27][23]Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.[a] Related concepts include artificialsuperintelligenceand transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans,[28]while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.[29] A framework for classifying AGI by performance and autonomy was proposed in 2023 byGoogle DeepMindresearchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models likeChatGPTorLLaMA 2to be instances of emerging AGI (comparable to unskilled humans). 
Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).[30] Various popular definitions ofintelligencehave been proposed. One of the leading proposals is theTuring test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.[b] Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:[32] Manyinterdisciplinaryapproaches (e.g.cognitive science,computational intelligence, anddecision making) consider additional traits such asimagination(the ability to form novel mental images and concepts)[33]andautonomy.[34] Computer-based systems that exhibit many of these capabilities exist (e.g. seecomputational creativity,automated reasoning,decision support system,robot,evolutionary computation,intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.[35] Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:[36] This includes the ability to detect and respond tohazard.[37] Although the ability to sense (e.g.see, hear, etc.) and the ability to act (e.g.move and manipulate objects, change location to explore, etc.) can be desirable for some intelligent systems,[36]these physical capabilities are not strictly required for an entity to qualify as AGI—particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been proscribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional "eyes and ears".[37]It can be regarded as sufficient for an intelligent computer tointeract with other systems, to invoke or regulate them, to achieve specific goals, including altering a physical environment, asHALin2001: A Space Odysseywas both programmed and tasked to.[38] Several tests meant to confirm human-level AGI have been considered, including:[39][40] The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be expert about machines, must be taken in by the pretence.[43] A problem is informally called "AI-complete" or "AI-hard" if it is believed that in order to solve it, one would need to implement AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.[56] There are many problems that have been conjectured to require general intelligence to solve as well as humans. Examples includecomputer vision,natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.[57]Even a specific task liketranslationrequires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. 
However, many of these tasks can now be performed by modern large language models. According toStanford University's 2024 AI index, AI has reached human-level performance on manybenchmarksfor reading comprehension and visual reasoning.[58] Modern AI research began in the mid-1950s.[59]The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades.[60]AI pioneerHerbert A. Simonwrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[61] Their predictions were the inspiration forStanley KubrickandArthur C. Clarke's characterHAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneerMarvin Minskywas a consultant[62]on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".[63] Severalclassical AI projects, such asDoug Lenat'sCycproject (that began in 1984), andAllen Newell'sSoarproject, were directed at AGI. However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".[c]In the early 1980s, Japan'sFifth Generation ComputerProject revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".[67]In response to this and the success ofexpert systems, both industry and government pumped money into the field.[65][68]However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.[69]For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all[d]and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".[71] In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such asspeech recognitionandrecommendation algorithms.[72]These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. As of 2018[update], development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.[73] At the turn of the century, many mainstream AI researchers[74]hoped that strong AI could be developed by combining programs that solve various sub-problems.Hans Moravecwrote in 1988: I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and thecommonsense knowledgethat has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphoricalgolden spikeis driven uniting the two efforts.[74] However, even at the time, this was disputed. 
For example, Stevan Harnad of Princeton University concluded his 1990 paper on thesymbol grounding hypothesisby stating: The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).[75] The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud[76]in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed byMarcus Hutterin 2000. NamedAIXI, the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments".[77]This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour,[78]was also called universal artificial intelligence.[79] The term AGI was re-introduced and popularized byShane LeggandBen Goertzelaround 2002.[80]AGI research activity in 2006 was described by Pei Wang and Ben Goertzel[81]as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009[82]by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010[83]and 2011[84]at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized byLex Fridmanand featuring a number of guest lecturers. As of 2023[update], a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, increasingly more researchers are interested in open-ended learning,[85][3]which is the idea of allowing AI to continuously learn and innovate like humans do. As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist.[86]AI pioneerHerbert A. Simonspeculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founderPaul Allenbelieved that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".[87]Writing inThe Guardian, roboticistAlan Winfieldclaimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.[88] A further challenge is the lack of clarity in defining whatintelligenceentails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? 
Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?[89] Most AI researchers believe strong AI can be achieved in the future, but some thinkers, likeHubert DreyfusandRoger Penrose, deny the possibility of achieving strong AI.[90][91]John McCarthyis among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.[92]AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.[93][94]Further current AGI progress considerations can be found aboveTests for confirming human-level AGI. A report by Stuart Armstrong and Kaj Sotala of theMachine Intelligence Research Institutefound that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.[95] In 2023,Microsoftresearchers published a detailed evaluation ofGPT-4. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."[96]Another study in 2023 reported that GPT-4 outperforms 99% of humans on theTorrance tests of creative thinking.[97][98] Blaise Agüera y ArcasandPeter Norvigwrote in 2023 that a significant level of general intelligence has already been achieved withfrontier models. They wrote that reluctance to this view comes from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".[99] 2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiplemodalitiessuch as text, audio, and images).[100] In 2024, OpenAI releasedo1-preview, the first of a series of models that "spend more time thinking before they respond". According toMira Murati, this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power.[101][102] AnOpenAIemployee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it's even more clear withO1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. 
Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership withMicrosoft, prompting speculation about the company's strategic intentions.[103] Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop.[90]Ending each hiatus were fundamental advances in hardware, software or both to create space for further progress.[90][106][107]For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers ofGPU-enabledCPUs.[108] In the introduction to his 2006 book,[109]Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. As of 2007[update], the consensus in the AGI research community seemed to be that the timeline discussed byRay Kurzweilin 2005 inThe Singularity is Near[110](i.e. between 2015 and 2045) was plausible.[111]Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.[112] In 2012,Alex Krizhevsky,Ilya Sutskever, andGeoffrey Hintondeveloped a neural network calledAlexNet, which won theImageNetcompetition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers).[113]AlexNet was regarded as the initial ground-breaker of the currentdeep learningwave.[113] In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27.[114][115] In 2020,OpenAIdevelopedGPT-3, a language model capable of performing many diverse tasks without specific training. According toGary Grossmanin aVentureBeatarticle, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.[116] In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.[117] In 2022,DeepMinddevelopedGato, a "general-purpose" system capable of performing more than 600 different tasks.[118] In 2023,Microsoft Researchpublished a study on an early version of OpenAI'sGPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. 
This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.[3] In 2023, AI researcher Geoffrey Hinton stated that:[119] The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that. He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks.[120] In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.[121] In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.[122] In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".[123] While the development of transformer models like in ChatGPT is considered the most promising path to AGI,[124][125] whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain.[126] Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research[111] as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near[110] predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it. For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion).[128] An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).[129] In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps).[e] (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
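As a rough back-of-the-envelope check on the figures quoted above, the sketch below multiplies the neuron count by the average synapse count to reproduce the order-of-magnitude synapse total (about 10^15, consistent with the figure cited for a young child) and compares the result with Kurzweil's 10^16 cps estimate. The one-update-per-synapse-per-second rate is an illustrative assumption for the simple switch model, not a figure taken from the text.

```python
# Back-of-the-envelope reproduction of the brain-capacity figures quoted above.
# UPDATES_PER_SYNAPSE_PER_SEC is an illustrative assumption, not a value from the text.

NEURONS = 1e11                      # ~10^11 neurons in the human brain
SYNAPSES_PER_NEURON = 7_000         # average synaptic connections per neuron
UPDATES_PER_SYNAPSE_PER_SEC = 1.0   # assumed rate for the simple switch model

total_synapses = NEURONS * SYNAPSES_PER_NEURON        # ~7 x 10^14, near the quoted range
sups = total_synapses * UPDATES_PER_SYNAPSE_PER_SEC   # synaptic updates per second

KURZWEIL_CPS = 1e16                 # Kurzweil's 1997 figure: computations per second
PETAFLOP = 1e15

print(f"Estimated synapses:      {total_synapses:.1e}")
print(f"Estimated SUPS:          {sups:.1e}")
print(f"Kurzweil estimate (cps): {KURZWEIL_CPS:.1e} (= {KURZWEIL_CPS / PETAFLOP:.0f} petaFLOPS)")
```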
TheHuman Brain Project, anEU-funded initiative active from 2013 to 2023, has developed a particularly detailed and publicly accessibleatlasof the human brain.[132]In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain. Theartificial neuronmodel assumed by Kurzweil and used in many currentartificial neural networkimplementations is simple compared withbiological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biologicalneurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account forglial cells, which are known to play a role in cognitive processes.[133] A fundamental criticism of the simulated brain approach derives fromembodied cognitiontheory which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning.[134][135]If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel[111]proposes virtual embodiment (like inmetaverseslikeSecond Life) as an option, but it is unknown whether this would be sufficient. In 1980, philosopherJohn Searlecoined the term "strong AI" as part of hisChinese roomargument.[136]He proposed a distinction between two hypotheses about artificial intelligence:[f] The first one he called "strong" because it makes astrongerstatement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.[137] In contrast to Searle and mainstream AI, some futurists such asRay Kurzweiluse the term "strong AI" to mean "human level artificial general intelligence".[110]This is not the same as Searle'sstrong AI, unless it is assumed thatconsciousnessis necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out-of-scope.[138] Mainstream AI is most interested in how a programbehaves.[139]According toRussellandNorvig, "as long as the program works, they don't care if you call it real or a simulation."[138]If the program can behaveas ifit has a mind, then there is no need to know if itactuallyhas mind – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[138]Thus, for academic AI research, "Strong AI" and "AGI" are two different things. Consciousness can have various meanings, and some aspects play significant roles in science fiction and theethics of artificial intelligence: These traits have a moral dimension. 
AI sentience would give rise to concerns of welfare and legal protection, similarly to animals.[144] Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights.[145] Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.[146] AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.[147] AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer.[148] It could take care of the elderly,[149] and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education.[149] The need to work to subsist could become obsolete if the wealth produced is properly redistributed.[149][150] This also raises the question of the place of humans in a radically automated society. AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks.[151] If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true),[152] it could take measures to drastically reduce the risks[151] while minimizing the impact of these measures on our quality of life.

Advancements in medicine and healthcare: AGI would improve healthcare by making medical diagnostics faster, cheaper, and more accurate. AI-driven systems can analyse patient data and detect diseases at an early stage.[153] This means patients will get diagnosed quicker and be able to seek medical attention before their medical condition gets worse. AGI systems could also recommend personalised treatment plans based on genetics and medical history.[154] Additionally, AGI could accelerate drug discovery by simulating molecular interactions, reducing the time it takes to develop new medicines for conditions like cancer and Alzheimer's.[155] In hospitals, AGI-powered robotic assistants could assist in surgeries, monitor patients, and provide real-time medical support. It could also be used in elderly care, helping aging populations maintain independence through AI-powered caregivers and health-monitoring systems. By evaluating large datasets, AGI can assist in developing personalised treatment plans tailored to individual patient needs. This approach ensures that therapies are optimised based on a patient's unique medical history and genetic profile, improving outcomes and reducing adverse effects.[156]

Advancements in science and technology: AGI can become a tool for scientific research and innovation. In fields such as physics and mathematics, AGI could help solve complex problems that require massive computational power, such as modeling quantum systems, understanding dark matter, or proving mathematical theorems.[157] Problems that have remained unsolved for decades may be solved with AGI. AGI could also drive technological breakthroughs that could reshape society. It can do this by optimising engineering designs, discovering new materials, and improving automation.
For example, AI is already playing a role in developing more efficient renewable energy sources and optimising supply chains in manufacturing.[158] Future AGI systems could push these innovations even further.

Enhancing education and productivity: AGI can personalize education by creating learning programs that are specific to each student's strengths, weaknesses, and interests. Unlike traditional teaching methods, AI-driven tutoring systems could adapt lessons in real-time, ensuring students understand difficult concepts before moving on.[159] In the workplace, AGI could automate repetitive tasks, freeing up workers for more creative and strategic roles.[158] It could also improve efficiency across industries by optimising logistics, enhancing cybersecurity, and streamlining business operations. If properly managed, the wealth generated by AGI-driven automation could reduce the need for people to work for a living. Working may become optional.[160]

Mitigating global crises: AGI could play a crucial role in preventing and managing global threats. It could help governments and organizations predict and respond to natural disasters more effectively, using real-time data analysis to forecast hurricanes, earthquakes, and pandemics.[161] By analyzing vast datasets from satellites, sensors, and historical records, AGI could improve early warning systems, enabling faster disaster response and minimising casualties. In climate science, AGI could develop new models for reducing carbon emissions, optimising energy resources, and mitigating climate change effects. It could also enhance weather prediction accuracy, allowing policymakers to implement more effective environmental regulations. Additionally, AGI could help regulate emerging technologies that carry significant risks, such as nanotechnology and bioengineering, by analysing complex systems and predicting unintended consequences.[157] Furthermore, AGI could assist in cybersecurity by detecting and mitigating large-scale cyber threats, protecting critical infrastructure, and preventing digital warfare.

Revitalising environmental conservation and biodiversity: AGI could significantly contribute to preserving the environment and protecting endangered species. By analyzing satellite imagery, climate data, and wildlife patterns, AGI systems could identify environmental threats earlier and recommend targeted conservation strategies.[162] AGI could help optimize land use, monitor illegal activities like poaching or deforestation in real-time, and support global efforts to restore ecosystems. Advanced predictive models developed by AGI could also assist in reversing biodiversity loss, ensuring the survival of critical species and maintaining ecological balance.[163] AGI could revolutionize humanity's ability to explore and settle beyond Earth. With its advanced problem-solving skills, AGI could autonomously manage complex space missions, including navigation, resource management, and emergency response. It could accelerate the design of life support systems, habitats, and spacecraft optimized for extraterrestrial environments.
Furthermore, AGI could support efforts to colonize planets like Mars by simulating survival scenarios and helping humans adapt to new worlds, dramatically expanding the possibilities for interplanetary civilization.[164] AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".[165] The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing moral progress.[166] Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.[167][168] There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe.[169][170] Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".[167] The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as Elon Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis and Sam Altman.[171][172] In 2014, Stephen Hawking criticized widespread indifference: So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.[173] The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.[174] The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards".[175] On the other side, the concept of instrumental convergence suggests that almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals.
And that this does not require having emotions.[176] Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?[177][178] Solving the control problem is complicated by the AI arms race (which could lead to a race to the bottom of safety precautions in order to release products before competitors),[179] and the use of AI in weapon systems.[180] The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short-term, or that concerns about AGI distract from other issues related to current AI.[181] Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.[182] Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God.[183] Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and to inflate interest in their products.[184][185] In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[172] Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted".[186][187] They consider office workers to be the most exposed, for example mathematicians, accountants or web designers.[187] AGI could have greater autonomy, and a better ability to make decisions, to interface with other computer tools, and to control robotized bodies. According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed:[150] Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality. Elon Musk believes that the automation of society will require governments to adopt a universal basic income.[188]
https://en.wikipedia.org/wiki/Artificial_general_intelligence
Andy and Bill's law, occasionally known as The Great Moore's Law Compensator,[1] is the assertion that new software will tend to consume any increase in computing power that new hardware can provide. The law originates from a humorous one-liner told in the 1990s during computing conferences: "what Andy giveth, Bill taketh away." The phrase is a riff upon the business strategies of former Intel CEO Andy Grove and former Microsoft CEO Bill Gates.[2] Intel and Microsoft had entered into a lucrative partnership in the 1980s through to the 1990s, and Intel chipsets became the de facto standard for PCs running Microsoft Windows, giving rise to the term "Wintel". Despite this profitable arrangement, Grove felt that Gates was not making full use of the powerful capabilities of Intel chips and that Gates was in fact refusing to upgrade his software to achieve optimum hardware performance.[3] Grove's frustration with the dominance of Microsoft software over Intel hardware became public, spawning the humorous catchphrase, and later, the law. In later years, the law has also been stated as "what Intel giveth, Microsoft taketh away," foregoing the metonymy of the original.[1] This technology-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
Incomputer programming,code bloatis the production ofprogram code(source codeormachine code) that is unnecessarily long, slow, or otherwise wasteful of resources. Code bloat can be caused by inadequacies in theprogramming languagein which the code is written, thecompilerused to compile it, or theprogrammerwriting it. Thus, while code bloat generally refers to source code size (as produced by the programmer), it can be used to refer instead to thegeneratedcode size or even thebinary filesize. The following JavaScript algorithm has a large number ofredundantvariables, unnecessary logic and inefficient string concatenation. The same logic can be stated more efficiently as follows: The difference incode densitybetween variouscomputer languagesis so great that often lessmemoryis needed to hold both a program written in a "compact" language (such as adomain-specific programming language,Microsoft P-Code, orthreaded code), plus aninterpreterfor that compact language (written in native code), than to hold that program written directly innative code. Some techniques for reducing code bloat include:[1]
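The JavaScript example referred to above did not survive extraction. As a stand-in, here is a hedged Python analogue of the same contrast rather than the article's original code: a deliberately bloated function with redundant variables, unnecessary branching, and repeated string concatenation, followed by a compact equivalent. The function names and data are invented for illustration.

```python
# Deliberately bloated: redundant variables, unnecessary branching,
# and inefficient string concatenation inside a loop.
def join_names_bloated(names):
    result = ""
    counter = 0
    total = len(names)
    for i in range(total):
        current_name = names[i]
        if counter == 0:
            result = result + current_name
        else:
            separator = ", "
            result = result + separator
            result = result + current_name
        counter = counter + 1
    return result

# The same logic stated more compactly and efficiently.
def join_names(names):
    return ", ".join(names)

assert join_names_bloated(["Ada", "Bill", "Cory"]) == join_names(["Ada", "Bill", "Cory"])
```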
https://en.wikipedia.org/wiki/Code_bloat
Feature creepis the excessive ongoing expansion or addition of newfeaturesin a product,[1]especially incomputer software,video games(where it should not be confused withpower creep) andconsumer and business electronics. These extra features go beyond the basic function of the product and can result insoftware bloatand over-complication, rather than simple design. The definition of what qualifies as "feature creep" varies amongend users, where what is perceived as such by some users may be considered practical functionality by others.[2]Feature creep is one of the most common sources ofcostand schedule overruns.[3][verification needed]It thus endangers and can even kill products and projects. Feature creep may arise from the desire to provide the consumer with a more useful or desirable product in order to increase sales or distribution. Once a product does everything that it is designed to do, the manufacturer may add functions some users might consider unneeded (sometimes at the cost of efficiency) or continue with the original version (at the cost of a perceived lack of improvement). Feature creep may also arise as a result ofcompromise from a committeeimplementing several different viewpoints oruse casesin the same product, even for opportunistic reasons.[4]As more features are added to support each approach, cross-conversion features between the multiple paradigms may further complicate the total features. There are several methods to control feature creep, including: strict limits for allowable features, multiple variations, and pruning excess features. Later feature creep may be avoided by basing initial design on strong software fundamentals, such as logical separation of functionality and data access, e.g. using submenus that are optionally accessible bypower userswho desire more functionality and a higher verbosity of information. It can be actively controlled with rigorouschange managementand by delaying changes to later delivery phases of a project.[5] Another method of controlling feature creep is maintaining multiple variations of products, where features are limited and reduced in the more basic variations, e.g.Microsoft Windowseditions. For softwareuser interfaces, viewing modes or operation modes can be used (e.g. basic mode or expert mode), between which the users can select to match their own needs. Both in manygraphical user interfacesandcommand-line interfaces, users are able to opt in for a higher verbosity manually. In the latter case, in many command-line programs, adding a-vor--verboseoption manually, does show more detailed information that might be less relevant to minimal users, but useful to power users or for debugging and troubleshooting purposes. Because the ever-growing, ever-expanding addition of new features might exceed available resources, a minimal core "basic" version of a product can be maintained separately, to ensure operation in smaller operating environments. Using the "80/20 rule", the more basic product variations might fulfill the needs of the majority (e.g. ~80%) of the users, so they would not be subjected to the complexity (or extra expense) of features requested by the advanced 20% of users. The extra features are still available, but optional and ready to be utilized for those who solicit them, but they have not been implemented into the basic versions of the products. Another solution for feature creep ismodularity. 
Power users who require more functionality can retrofit needed features by downloading software modules, plug-ins, add-ons (also known as add-ins) and custom themes to match their personal requirements. At some point, the cost of maintaining a particular subset of features might become prohibitive, and pruning can be used. A new product version can omit the extra features, or perhaps a transition period would be used, where old features were deprecated before eventual removal from the system. If there are multiple variations of products, then some of them might be phased out of use. One major example is the Samsung Galaxy S6, released March 2015, from which many software/menu features and some hardware features were pruned. A "more functional" variation of it has not been released.[citation needed] Occasionally, uncontrolled feature creep can lead to products that surpass the scope of what was originally intended; this is known as scope creep. A common consequence of feature creep is the delay or cancellation of a product, which may become more expensive than was originally intended.[citation needed] Often, a reasonably feature-complete software project, or one with moderate amounts of feature creep, can survive and even thrive through many iterations, but its successor release may suffer substantial delays once a decision is taken to rewrite the whole code base in addition to introducing new technologies. For example, Microsoft's Windows Vista was planned to be a minor release between Windows XP and its successor codenamed Windows "Blackcomb" (released as Windows 7), but after adapting more and more features from Blackcomb (many of which were eventually cancelled), Vista turned out to become a major release which took five years of development. A similar fate was suffered by Netscape 6, which was originally supposed to be Netscape 5. The 1998 decision by Netscape Communications to open-source its Netscape Navigator browser and Communicator Internet suite (both code-named Mozilla) soon made it obvious that the underlying code was too difficult, and required a complete rewrite of Mozilla, which fostered the creation of the Mozilla application framework. This caused significant delays; Netscape 5 was skipped, and the company was purchased by AOL. The subsequent release of Netscape 6.00 in 2000 was widely criticized as alpha-level code, and the project reached stability by Netscape 6.1 in 2001, three years after the decision to rework the Internet suite. By that time, Microsoft's Internet Explorer browser had long eclipsed Netscape in usage share, which had diminished to single digits. Even after reaching stability and attaining some necessary new features, the open-source Mozilla Application Suite (then named just Mozilla), on which AOL built Netscape, was viewed as "bloated". Just a year later, a group of Mozilla developers decided to separate the browser component, which eventually became Firefox. Double Fine Adventures' Kickstarter project Broken Age is another example of a project being delayed by feature creep. Originally supposed to have a release date of October 2012, the first half of the game was released in January 2014 while the second half followed in late April 2015, and required two separate funding rounds to complete.[6] Feature creep combined with short deadlines will often lead to a "hacky solution".
The desired change may be large enough to warrant a redesign of the existing project foundation, but deadline pressure instead requires developers to rush and put out a less-refined product. Thespoonerism"feeping creaturism" was coined to emphasize a developer's dislike of this situation,[7]personifying the scope-crept product as "a misshapen creature of hacks ... prowling about in the dark",[8]and the harbinger of more creep to come.[9]("Feeping" is a jargon synonym of "beeping".)[10]
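As a small illustration of the opt-in verbosity idea mentioned earlier in this section (a quiet basic mode by default, with extra detail only for power users via a -v/--verbose flag), here is a hedged Python sketch using argparse. The tool itself and its messages are invented for illustration; only the flag convention comes from the text.

```python
import argparse

# A minimal command-line tool that stays quiet by default and only emits
# extra detail when the user explicitly opts in with -v/--verbose.
parser = argparse.ArgumentParser(description="Copy a file (illustrative example).")
parser.add_argument("source")
parser.add_argument("destination")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="show detailed progress information")
args = parser.parse_args()

if args.verbose:
    print(f"Reading from {args.source}")
data = open(args.source, "rb").read()

if args.verbose:
    print(f"Read {len(data)} bytes; writing to {args.destination}")
open(args.destination, "wb").write(data)

if args.verbose:
    print("Done.")
```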
https://en.wikipedia.org/wiki/Feature_creep
Incomputing,minimalismrefers to the application ofminimalistphilosophies and principles in the design and use ofhardwareandsoftware. Minimalism, in this sense, means designing systems that use the least hardware and software resources possible. In the late 1970s and early 1980s, programmers worked within the confines of relatively expensive and limitedresourcesof common platforms. Eight or sixteenkilobytesofRAMwas common; 64 kilobytes was considered a vast amount and was the entireaddress spaceaccessible to the8-bitCPUs predominant during the earliest generations ofpersonal computers. The most common storage medium was the 5.25 inchfloppy diskholding from 88 to 170 kilobytes. Hard drives with capacities from five to tenmegabytescost thousands of dollars. Over time, personal-computer memory capacities expanded by orders of magnitude and mainstream programmers took advantage of the added storage to increase their software's capabilities and to make development easier by usinghigher-level languages. By contrast,system requirementsforlegacy softwareremained the same. As a result, even the most elaborate, feature-rich programs of yesteryear seem minimalist in comparison with current software. One example of a program whose system requirements once gave it a heavyweight reputation is theGNU Emacstext editor, which gained thebackronym"Eight Megabytes And Constantly Swapping" in an era when 8 megabytes was a lot of RAM.[1]Today, Emacs' mainly textualbuffer-based paradigm uses far fewer resources thandesktop metaphorGUIIDEswith comparable features such asEclipseorNetbeans.[citation needed]In a speech at the 2002 International Lisp Conference,Richard Stallmanindicated that minimalism was a concern in his development ofGNUand Emacs, based on his experiences withLispand system specifications of low-endminicomputersat the time.[2] As the capabilities and system requirements of common desktop software and operating systems grew throughout the 1980s and 1990s, and as software development became dominated by teams espousing conflicting, faddishsoftware development methodologies, some developers adopted minimalism as a philosophy and chose to limit their programs to a predetermined size or scope.[3]A focus onsoftware optimizationcan result in minimalist software, as programmers reduce the number of operations their program carries out in order to speed execution.[4] In the early 21st century, new developments in computing have brought minimalism to the forefront. In what has been termed thepost-PC erait is no longer necessary to buy a high-end personal computer merely to perform common computing tasks.[5]Mobile computingdevices, such assmartphones,tablet computers,netbooksandplug computers, often have smaller memory capacities, less-capable graphics subsystems, and slower processors when compared to the personal computer they are expected to replace. In addition, heavy use of graphics effects likealpha blendingdrains the battery faster than a "flat ui".[6]The growing popularity of these devices has made minimalism an important design concern. Google'sChrome browserandChromeOSare often cited as examples of minimalist design.[7][8] Another example isWindows 8, whereMicrosoftimplemented the "simple, squared-off"Metroappearance, which was less graphics-intensive than the previousAerointerface used inWindows 7andWindows Vista. 
This change was made in part because of the rise of smaller, battery-powered devices and the need to conserve power.[9][10][11]Version 7 ofApple'siOSmade similar changes foruser experiencereasons.[12] Developers may createuser interfacesto be as simple as possible by eliminatingbuttonsanddialog boxesthat may potentially confuse the user. Minimalism is sometimes used in itsvisual arts meaning, particularly in theindustrial designof the hardware device orsoftware theme. Some developers have attempted to create programs to perform a particular function in the fewest lines of code, or smallest compiled executable size possible on a given platform.[13][14]SomeLinuxdistributions mention minimalism as a goal.Alpine,Arch,Puppy,Bodhi,CrunchBang,dynebolic[15]andTiny Coreare examples. The early development of theUnixsystem occurred on low-powered hardware, andDennis RitchieandKen Thompsonhave stated their opinion that this constraint contributed to the system's "elegance of design".[16] Programming languagedesigners can create minimal programming languages by eschewingsyntactic sugarand extensivelibrary functions. Such languages may beTuring tarpitsdue to not offering standard support for common programming tasks. Creating a minimal Lispinterpreteris a common learning task set beforecomputer sciencestudents.[17]TheLambda calculus, developed byAlonzo Churchis a minimal programming language that uses only function definitions and function applications.[18][19]Scheme,[20][21]Forth,[22]andGo[23][24]are cited as examples of practical, minimal programming languages. The programming hobby ofcode golfresults in minimalist software,[25]but these are typically exercises orcode poetry, not usable applications software. John Millar Carroll, in his bookMinimalism Beyond theNürnberg Funnelpointed out that the use of minimalism results in "instant-use" devices such as video games,ATMs,voting machines, andmall kioskswith little-or-nolearning curvethat do not require the user to read manuals.[26]User Interface researchers have performed experiments suggesting that minimalism, as illustrated by the design principles ofparsimonyandtransparency, bolsters efficiency and learnability.[27]Minimalism is implicit in theUnix philosophiesof "everything is a text stream" and "do one thing and do it well", although modern Unix/Linux distributions do not hold so rigorously to this philosophy.[28]
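To make the idea of a minimal language concrete, below is a hedged Python sketch of the kind of tiny Lisp-style evaluator the text describes as a common student exercise. It is a toy under simplifying assumptions (expressions arrive pre-parsed as nested lists, and only numbers, a few arithmetic primitives, lambda, and function application are supported), not any particular course's implementation.

```python
import operator

# A toy Lisp-style evaluator: expressions are nested Python lists (already "parsed"),
# e.g. ["+", 1, ["*", 2, 3]] or [["lambda", ["x"], ["+", "x", 1]], 41].
PRIMITIVES = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def evaluate(expr, env=None):
    env = env or {}
    if isinstance(expr, (int, float)):       # a number evaluates to itself
        return expr
    if isinstance(expr, str):                # a symbol: look up binding, else a primitive
        return env[expr] if expr in env else PRIMITIVES[expr]
    if expr[0] == "lambda":                  # ["lambda", [params...], body]
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(expr[0], env)              # application: evaluate operator, then operands
    args = [evaluate(arg, env) for arg in expr[1:]]
    return fn(*args)

print(evaluate(["+", 1, ["*", 2, 3]]))                    # 7
print(evaluate([["lambda", ["x"], ["+", "x", 1]], 41]))   # 42
```

The point of the exercise is that a handful of rules (self-evaluating atoms, symbol lookup, abstraction, application) already gives a usable language, which is the same economy the lambda calculus achieves with only function definition and application.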
https://en.wikipedia.org/wiki/Minimalism_(computing)
"No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper onsoftware engineeringwritten byTuring AwardwinnerFred Brooksin 1986.[1]Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even oneorder of magnitude[tenfold] improvement within a decade in productivity, in reliability, in simplicity." He also states that "we cannot expect ever to see two-fold gains every two years" in software development, as there is in hardware development (Moore's law). Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. This is related toAristotle'sclassification. Accidental complexity relates to problems that engineers create and can fix. For example, modernprogramming languageshave abstracted away the details of writing and optimizingassembly languagesource codeand eliminated the delays caused bybatch processing, though other sources of accidental complexity remain. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things. Brooks claims that accidental complexity has decreased substantially, and today's programmers spend most of their time addressing essential complexity. Brooks argues that this means shrinking all the accidental activities to zero will not give the same order-of-magnitude improvement as attempting to decrease essential complexity. While Brooks insists that there is no onesilver bullet, he believes that a series of innovations attacking essential complexity could lead to significant improvements. One technology that had made significant improvement in the area of accidental complexity was the invention ofhigh-level programming languages, such asAda.[1] Brooks advocates "growing" software organically through incremental development. He suggests devising and implementing the main and subprograms right at the beginning, filling in the working sub-sections later. He believes thatcomputer programmingthis way excites the engineers and provides a working system at every stage of development. Brooks goes on to argue that there is a difference between "good" designers and "great" designers. He postulates that as programming is a creative process, some designers are inherently better than others. He suggests that there is as much as a tenfold difference between an ordinary designer and a great one. He then advocates treating star designers equally well as star managers, providing them not just with equalremuneration, but also all the perks of higher status: large office, staff, travel funds, etc. The article, and Brooks's later reflections on it, "'No Silver Bullet' Refired", can be found in the anniversary edition ofThe Mythical Man-Month.[2] Brooks's paper has sometimes been cited in connection withWirth's law, to argue that "software systems grow faster in size and complexity than methods to handle complexity are invented."[3]
https://en.wikipedia.org/wiki/No_Silver_Bullet
Parkinson's law can refer to either of two observations, published in 1955 by the naval historian C. Northcote Parkinson as an essay in The Economist:[1] The first paragraph of the essay mentioned the first meaning above as a "commonplace observation", and the rest of the essay was devoted to the latter observation, terming it "Parkinson's Law". The first-referenced meaning of the law – "Work expands to fill the available time" – has sprouted several corollaries, the best known being the Stock-Sanford corollary to Parkinson's law ("If you wait until the last minute, it only takes a minute to do"),[2] the Asimov corollary to Parkinson's law ("In ten hours a day you have time to fall twice as far behind your commitments as in five hours a day"),[3] as well as corollaries relating to computers, such as "Data expands to fill the space available for storage".[4] This was the main focus of the essay by Cyril Northcote Parkinson, published in The Economist in 1955,[1][5] and reprinted with other similar essays in the successful 1958 book Parkinson's Law: The Pursuit of Progress.[6] The book was translated into many languages. It was highly popular in the Soviet Union and its sphere of influence.[7] In 1986, Alessandro Natta complained about the swelling bureaucracy in Italy. Mikhail Gorbachev responded that "Parkinson's law works everywhere."[8] Parkinson derived the dictum from his extensive experience in the British Civil Service. He gave, as examples, the growth in the size of the British Admiralty and Colonial Office even though the numbers of their ships and colonies were declining. Much of the essay is dedicated to a summary of purportedly scientific observations supporting the law, such as the increase in the number of employees at the Colonial Office while the British Empire declined (he showed that it had its greatest number of staff when it was folded into the Foreign Office due to a lack of colonies to administer). He explained this growth using two forces: (1) "An official wants to multiply subordinates, not rivals", and (2) "Officials make work for each other." He noted that the number employed in a bureaucracy rose by 5–7% per year "irrespective of any variation in the amount of work (if any) to be done". Parkinson presented the growth as a mathematical equation describing the rate at which bureaucracies expand over time, with the formula x = (2k^m + P)/n, in which k was the number of officials wanting subordinates and m was the hours they spent writing minutes to each other. Observing that the promotion of employees necessitated the hiring of subordinates, and that time used answering minutes requires more work, Parkinson states: "In any public administrative department not actually at war the staff increase may be expected to follow this formula" (for a given year):[1]

x = (2k^m + P)/n

In a different essay included in the book, Parkinson proposed a rule about the efficiency of administrative councils. He defined a "coefficient of inefficiency" with the number of members as the main determining variable. This is a semi-humorous attempt to define the size at which a committee or other decision-making body becomes completely inefficient. In Parkinson's Law: The Pursuit of Progress, London: John Murray, 1958, a chapter is devoted to the basic question of what he called comitology: how committees, government cabinets, and other such bodies are created and eventually grow irrelevant (or are initially designed as such).
(The wordcomitologyhas recently been independently invented by the European Union for a different non-humorous meaning.)[9][10] Empirical evidence is drawn from historical and contemporary government cabinets. Most often, the minimal size of a state's most powerful and prestigious body is five members. From English history, Parkinson notes a number of bodies that lost power as they grew: A detailed mathematical expression is proposed by Parkinson for the coefficient of inefficiency, featuring many possible influences. In 2008, an attempt was made to empirically verify the proposed model.[11]Parkinson's conjecture that membership exceeding a number "between 19.9 and 22.4" makes a committee manifestly inefficient seems well justified by the evidence proposed[citation needed]. Less certain is the optimal number of members, which must lie between three (a logical minimum) and 20. (Within a group of 20,individual discussionsmay occur,diluting the power of the leader.) That it may be eight seems arguable but is not supported by observation: no contemporary government in Parkinson's data set had eight members, and only kingCharles I of Englandhad a Committee of State of that size.
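As a purely illustrative reading of the staffing formula quoted above, the snippet below evaluates x = (2k^m + P)/n for invented inputs and expresses the result as a share of the existing staff. The text above only spells out the meaning of k and m, so the values of P and n here are assumptions chosen so that the result lands in the 5–7% annual growth range Parkinson reported, not figures from his essay.

```python
# Evaluate Parkinson's staffing-growth formula x = (2*k**m + P) / n for one year.
# All input values below are invented purely for illustration.
def parkinson_new_staff(k, m, P, n):
    """k: officials wanting subordinates; m: hours spent writing minutes;
    P and n: remaining parameters of the formula (not defined in the text above)."""
    return (2 * k ** m + P) / n

staff_now = 1000
x = parkinson_new_staff(k=7, m=2, P=12, n=2)   # hypothetical values -> 55 new staff
print(f"New staff required this year: {x:.1f} "
      f"({100 * x / staff_now:.1f}% of {staff_now})")   # ~5.5%, within the 5-7% range
```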
https://en.wikipedia.org/wiki/Parkinson%27s_law
Software bloatis a process whereby successive versions of acomputer programbecome perceptibly slower, use more memory,disk spaceor processing power, or have higher hardware requirements than the previous version, while making only dubious user-perceptible improvements or suffering fromfeature creep. The term is not applied consistently; it is often used as a pejorative byend users, including to describe undesireduser interfacechanges even if those changes had little or no effect on the hardware requirements. In long-lived software, bloat can occur from the software servicing a large, diverse marketplace with many differing requirements. Most end users will feel they only need some limited subset of the available functions, and will regard the others as unnecessary bloat, even if end users with different requirements require those functions. Actual (measurable) bloat can occur due to de-emphasisingalgorithmic efficiencyin favour of other concerns like developer productivity, or possibly through the introduction of new layers of abstraction like avirtual machineor otherscripting enginefor the purposes of convenience when developer constraints are reduced. The perception of improved developer productivity, in the case of practising development within virtual machine environments, comes from the developers no longer taking resource constraints and usage into consideration during design and development; this allows the product to be completed faster but it results in increases to the end user's hardware requirements and/or compromised performance as a result. The term "bloatware" is also used to describe unwantedpre-installed softwareorbundled programs.[1] In computer programming, code bloat refers to the presence of program code (source code or machine code) that is unnecessarily long, slow, or otherwise wasteful of resources. Feature creep is the tendency for a product, project, or system to gradually expand beyond its original scope by adding more and more features over time. In the context of computer programs, this can result in excessive disk space usage, loss of performance and the program being harder to use and understand by most end users. Software developersinvolved in the industry during the 1970s had severe limitations on processing power, disk space and memory. Everybyteandclock cyclewas taken into account, and much work went into fitting the programs into available resources. Achieving this efficiency was one of the highest values of computer programmers, and the best programs were often called "elegant", a term used by mathematicians to describe a proof which is tidy, parsimonious and powerful. By the 21st century, the situation had reversed. Resources were perceived as cheap, and rapidity of coding and headline features for marketing seen as priorities.[2]In part, this is because technological advances have since increased processing capacity and storage density by orders of magnitude, while reducing the relative costs by similar orders of magnitude (seeMoore's law). Additionally, the spread of computers through all levels of business and home life has produced a software industry many times larger than it was in the 1970s. 
Programs are now usually churned out by teams, directed by committees in software development studios (also known as software houses or software factories) where each programmer works on only a part of the whole, on one or moresubroutines.[citation needed] Finally, software development tools and approaches often result in changes throughout a program to accommodate each feature, leading to a large-scale inclusion of code which affects the main operation of the software, and is required in order to support functions that themselves may be only rarely used. In particular, the advances in resources available have led to tools which allow easier development of code, again with less priority given to end efficiency. Another cause of bloat is independently competing standards and products, which can create a demand for integration. There are now more operating systems, browsers, protocols, and storage formats than there were before, causing bloat in programs due to interoperability issues. For example, a program that once could only save in text format is now required to save in HTML, XML, XLS, CSV, PDF, DOC, and other formats. Niklaus Wirthhas summed up the situation inWirth's law, which states that software speed is decreasing more quickly than hardware speed is increasing. In his 2001 essayStrategy Letter IV: Bloatware and the 80/20 Myth,[3]Joel Spolskyargues that while 80% of the users only use 20% of the features (a variant on thePareto principle), each one uses different features. Thus, "lite" software editions turn out to be useless for most, as they miss the one or two special features that are present in the "bloated" version. Spolsky sums the article with a quote byJamie Zawinskireferring to theMozilla Application Suite(ultimately debloated as separate apps, of which theFirefoxweb browser is the only significant survivor): "Convenient though it would be if it were true, Mozilla is not big because it's full of useless crap. Mozilla is big because your needs are big. Your needs are big because the Internet is big. There are lots of small, lean web browsers out there that, incidentally, do almost nothing useful. [...] But being a shining jewel of perfection was not a goal when we wrote Mozilla."[4] Software bloat may also be a symptom of thesecond-system effect, described byFred BrooksinThe Mythical Man-Month. “Bloatware" is software that has become bloated through inefficiency or accretion of features as outlined above.[3]The term is also commonly used forpreinstalled softwarebundled on a device, usually by the hardware manufacturer, that is mostly unwanted by the purchaser. The term may also be applied to the accumulation of unwanted and unused software elements that remain after partial and incompleteuninstallation. These elements may include whole programs, libraries, associated configuration information, or other data. Performance may deteriorate overall as a result of such remnants, as the unwanted software or software components can occupy both hard disk memory and RAM, waste processing time, add diskI/O, and cause delays at system startup and shutdown. 
In the worst cases, the leftover software may interfere with the correct operation of wanted software.[5] On Android devices, some of the bloatware can be hidden from a user account with ADB; this does not remove the application, which still takes up disk space, but the app no longer runs or slows down the system.[6][7] By unlocking the bootloader, users can remove the bloatware's files, install a custom firmware or gain root privileges, which allows the app to be fully uninstalled.[8] Apple's iTunes has been accused of being bloated by efforts to turn it from a simple media player into an e-commerce and advertising platform,[18][19] with former PC World editor Ed Bott accusing the company of hypocrisy in its advertising attacks on Windows for similar practices.[20] In 2019, Apple announced the impending closure of the program, a move described by a commentator from The Guardian as being "long overdue", stating that the program had "become baroquely bloated, a striking anomaly for a company that prides itself on elegant and functional design."[21] Microsoft Windows has also been criticized as being bloated – with reference to Windows Vista and discussing the new, greatly slimmed down Windows 7 core components, Microsoft engineer Eric Traut commented that "This is the core of Windows 7. This is a collection of components that we've taken out. A lot of people think of Windows as this really large, bloated operating system, and that may be a fair characterization, I have to admit. It is large. It contains a lot of stuff in it. But at its core, the kernel and the components that make up the very core of the operating system actually is pretty streamlined."[22][23] Ed Bott also expressed skepticism, noting that nearly every operating system that Microsoft has ever sold has been criticized as "bloated" on first release, even those now regarded as the exact opposite, such as MS-DOS.[24] Quoting Paul Thurrott, Bott agreed that the bloat stems from numerous enterprise-level features included in the operating system that were largely irrelevant to the average home user. CD- and DVD-burning applications such as Nero Burning ROM have been criticized for being bloated.[25] Superfluous features not specifically tailored to the end user are sometimes installed by default through express setups. A number of technology blogs have also covered the issue of increased bloatware on cell phones. However, they refer to a different issue, specifically that of wireless carriers loading phones with software that, in many cases, cannot be easily deleted, if at all.
This has been most frequently cited with respect to Android devices, although this phenomenon exists on phones running many other operating systems.[26][27] Some of the most popular current messaging apps, which were previously only focused on instant messaging, have been criticized for being bloated due to feature creep.[28][29][30][31] WeChat, during its transformation into a super-app, added features such as games, subscription services, the WeChat Pay e-wallet,[28] a news aggregator, an e-commerce hub, an e-government[29] feature, a cinema booking system, a restaurant finder and a ridesharing service,[31] which increased the size of the app from 2 MB in 2011 to 58 MB in 2018.[citation needed] Facebook Messenger, which was separated from the Facebook app, is similarly criticized for adding features such as games, bots and features copied from Snapchat, such as Messenger Day (Stories), face filters, a camera with the ability to edit photos, doodle drawing, and added emojis and stickers.[32][33] In January 2018, the head of Facebook Messaging, David A. Marcus, admitted that the app itself was extremely bloated and promised to redesign the whole app to remove unnecessary features and streamline it.[30] The redesigned and streamlined Facebook Messenger app was announced in October 2018, with its features reduced to messaging, stories, a discover tab and the camera.[34] Some applications, such as GIMP, and software with additional functionality from plug-ins, use extensions or add-ons which are downloaded separately from the main application. These can be created either by the software developer or by third-party developers. Plug-ins, extensions, and add-ons add extra functionality which might otherwise have been packaged in the main program. Allowing these plug-ins, extensions, and/or add-ons reduces the space used on any one machine, because even though the application, the "plug-in interface", and all the plug-ins combined are larger than the same functionality compiled into one monolithic application, each user can install only the particular add-on features they require, rather than being forced to install a much larger monolithic application that includes all of the available features. This results in a "stripped-down" or "out-of-the-box" application that is delivered in a compact package yet is ready for users to add any missing functionality. Open source software may use a similar technique, using preprocessor directives to selectively include features at compile time; a minimal sketch of this is shown below. This is easier to implement and more secure than a plugin system, but has the disadvantage that a user who wants a specific set of features must compile the program from source. Sometimes software becomes bloated because of "creeping featurism"[35] (Zawinski's law of software envelopment). One way to reduce that kind of bloat is described by the Unix philosophy of "writing programs that do one thing and do it well," breaking what would be a single, complicated piece of software into numerous simpler components which can be chained together using pipes, shell scripts, or other forms of interapplication communication.
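The compile-time feature selection just described might look like the following hypothetical C sketch; the ENABLE_PDF_EXPORT macro and the export functions are illustrative names, not taken from any real project.

```c
/* feature_demo.c -- hypothetical sketch of compile-time feature selection.
 * Building with  cc -DENABLE_PDF_EXPORT feature_demo.c  compiles in the PDF
 * code path; building without the flag leaves it out of the binary entirely. */
#include <stdio.h>

static void export_plain_text(void) {
    puts("exporting as plain text");
}

#ifdef ENABLE_PDF_EXPORT
static void export_pdf(void) {
    puts("exporting as PDF");      /* only compiled in when the feature is enabled */
}
#endif

int main(void) {
    export_plain_text();
#ifdef ENABLE_PDF_EXPORT
    export_pdf();
#endif
    return 0;
}
```

Features compiled out this way cost nothing in the shipped binary, which is the main appeal over runtime plug-ins; the trade-off, as noted above, is that changing the feature set requires rebuilding from source.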
Software bloat may also introduce more vulnerabilities, because a larger code base and a larger set of dependencies are more difficult to manage. Furthermore, it can make it harder for developers to understand the code they ship, increasing the difficulty of spotting and fixing vulnerabilities.[36][37] Although bloatware is not a form of malware and is not designed for malicious purposes, bloatware may introduce some vulnerabilities unintentionally and may put the user's computer at a higher risk of infection by computer viruses or ransomware.[38][39]
https://en.wikipedia.org/wiki/Software_bloat
Waste comprises unwanted or unusable materials. Waste is any substance discarded after primary use, or that is worthless, defective and of no use. A by-product, by contrast, is a joint product of relatively minor economic value. A waste product may become a by-product, joint product or resource through an invention that raises the waste product's value above zero. Examples include municipal solid waste (household trash/refuse), hazardous waste, wastewater (such as sewage, which contains bodily wastes (feces and urine) and surface runoff), radioactive waste, and others. What constitutes waste depends on the eye of the beholder; one person's waste can be a resource for another person.[1] Though waste is a physical object, its generation is a physical and psychological process.[1] The definitions used by various agencies are as below. According to the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal of 1989, Art. 2(1), "'Wastes' are substances or objects which are disposed of or are intended to be disposed of or are required to be disposed of by the provisions of national law".[2] The UNSD Glossary of Environment Statistics[3] describes waste as "materials that are not prime products (that is, products produced for the market) for which the generator has no further use in terms of his/her own purposes of production, transformation or consumption, and of which he/she wants to dispose. Wastes may be generated during the extraction of raw materials, the processing of raw materials into intermediate and final products, the consumption of final products, and other human activities. Residuals recycled or reused at the place of generation are excluded." Under the Waste Framework Directive 2008/98/EC, Art. 3(1), the European Union defines waste as "an object the holder discards, intends to discard or is required to discard."[4] For a more structural description of the Waste Directive, see the European Commission's summary. Metabolic wastes or excrements are substances left over from metabolic processes (such as cellular respiration) which cannot be used by the organism (they are surplus or toxic), and must therefore be excreted. This includes nitrogen compounds, water, CO2, phosphates, sulphates, etc. Animals treat these compounds as excreta. Plants have metabolic pathways which transform some of them (primarily the oxygen compounds) into useful substances. The Organization for Economic Co-operation and Development, also known as the OECD, defines municipal solid waste (MSW) as "waste collected and treated by or for municipalities".[6] Typically this type of waste includes household waste, commercial waste, and demolition or construction waste. In 2018, the Environmental Protection Agency concluded that 292.4 million tons of municipal waste was generated, which equated to about 4.9 pounds per person per day. Of the 292.4 million tons, approximately 69 million tons were recycled and 25 million tons were composted.[7] Household waste, more commonly known as trash or garbage, consists of items that are typically thrown away daily from ordinary households. Items often included in this category are product packaging, yard waste, clothing, food scraps, appliances, paints, and batteries.[8] Most of the items that are collected by municipalities end up in landfills across the world. In the United States, it is estimated that 11.3 million tons of textile waste is generated.
On an individual level, it is estimated that the average American throws away 81.5 pounds of clothes each year.[9] As online shopping becomes more prevalent, items such as cardboard, bubble wrap and shipping envelopes are ending up in landfills across the United States. The EPA has estimated that approximately 10.1 million tons of plastic containers and packaging ended up in landfills in 2018, and noted that only 30.5% of plastic containers and packaging was recycled or combusted as an energy source. Additionally, approximately 940,000 pounds of cardboard end up in landfills each year.[10] Commercial waste is very similar to household waste. To be considered commercial waste, it must come from a business or commercial occupancy, such as restaurants, retail occupants, manufacturing occupants or similar businesses. Typically, commercial waste contains similar items such as food scraps, cardboard, paper, and shipping materials.[11] Generally speaking, commercial occupancies generate more waste than households on a per-location basis. The EPA defines this type of waste as follows: "Construction and Demolition (C&D) debris is a type of waste that is not included in municipal solid waste (MSW)."[12] Items typically found in C&D include, but are not limited to, steel, wood products, drywall and plaster, brick and clay tile, asphalt shingles, concrete, and asphalt. Generally speaking, construction and demolition waste can be categorized as any of the components needed to build infrastructure. In 2018, the EPA estimated that the US generated approximately 600 million tons of C&D waste.[12] The waste generated by construction and demolition is often intended to be reused, or else it is sent to the landfill. Examples of reuse include milled asphalt, which can be used again in an asphalt mixture, and fill dirt, which can be used to level a grade. The EPA defines hazardous waste as "a waste with properties that make it dangerous or capable of having a harmful effect on human health or the environment."[13] Hazardous waste falls under the Resource Conservation and Recovery Act (RCRA). Under the RCRA, the EPA has the authority to control hazardous waste during its entire lifecycle,[14] from the point of creation to the point where it has been properly disposed of. The life cycle of hazardous waste includes generation, transportation, treatment, storage and disposal, all of which are covered by the RCRA. Some forms of hazardous waste include radioactive waste, explosive waste, and electronic waste. Radioactive waste, often referred to as nuclear waste, is produced by various industries such as nuclear power plants, nuclear reactors, hospitals, research centers, and mining facilities. Any activity that involves radioactive material can generate radioactive waste.[15] Such waste emits radioactive particles which, if not handled correctly, can be both an environmental hazard and a human health hazard.[15] When dealing with radioactive waste, it is extremely important to understand the necessary protocols and follow the correct precautions. Failure to handle and recycle these materials properly can have catastrophic consequences and potentially damage the site's ecosystems for years to come.[15] Radioactive waste is monitored and regulated by multiple governmental agencies such as the Nuclear Regulatory Commission (NRC), Department of Energy (DOE), Environmental Protection Agency (EPA), Department of Transportation (DOT), and Department of the Interior (DOI).
Each agency plays an important role in regulating the creation, handling, and proper disposal of radioactive waste. A brief description of each agency's role can be found below. NRC: "Licenses and regulates the receipt and possession of high-level waste at privately owned facilities and at certain DOE facilities."[16] DOE: "Plans and carries out programs for safe handling of DOE-generated radioactive wastes, develops waste disposal technologies, and will design, construct and operate disposal facilities for DOE-generated and commercial high-level wastes."[16] EPA: "Develops environmental standards and federal radiation protection guidance for offsite radiation due to the disposal of spent nuclear fuel and high-level and transuranic radioactive wastes."[16] DOT: "Regulates both the packaging and carriage of all hazardous materials including radioactive waste."[16] DOI: "Through the U.S. Geological Survey, conducts laboratory and field geologic investigations in support of DOE's waste disposal programs and collaborates with DOE on earth science technical activities."[16] The US currently defines five types of radioactive waste, as described below. High-level waste: radioactive waste generated from nuclear reactors or from reprocessing spent nuclear fuel.[15] Transuranic waste: man-made radioactive waste containing elements with an atomic number greater than 92.[15] Uranium or thorium mill tailings: the residue left after the mining or milling of uranium or thorium ore.[15] Low-level waste: radioactively contaminated waste, typically generated from industrial processes or research; examples include paper, protective clothing, bags, and cardboard.[15] Technologically enhanced naturally occurring radioactive material (TENORM): radioactive waste created through human activity such as mining, oil and gas drilling, and water treatment, where naturally occurring radioactive material (NORM) becomes concentrated.[15] The EPA defines energetic hazardous waste as "wastes that have the potential to detonate and bulk military propellants which cannot safely be disposed of through other modes of treatment."[17] The items which typically fall under this category include munitions, fireworks, flares, hobby rockets, and automobile propellants. Munitions were brought under hazardous waste regulation in 1997, when the EPA finalized a special rule under RCRA to address munitions in waste, commonly referred to as the Military Munitions Rule.[17] The EPA defines military munitions as "all types of both conventional and chemical ammunition products and their components, produced by or for the military for national defense and security (including munitions produced by other parties under contract to or acting as an agent for DOD—in the case of Government Owned/Contractor Operated [GOCO] operations)."[17] While a large percentage of munitions waste is generated by the government or government contractors, residents also throw away expired or faulty ammunition with their household waste. Every year, the US generates this type of waste from both commercial and consumer sources, often in the form of fireworks, signal flares and hobby rockets that have been damaged, have failed to operate, or are discarded for other reasons. Due to their chemical properties, these types of devices are extremely dangerous.
While automobile airbag propellants are not as common as munitions and fireworks, they share similar properties which make them extremely hazardous. The reactivity and ignitability of airbag propellants are the characteristics that qualify them as hazardous waste, and disposing of an airbag undeployed leaves both of these hazardous characteristics intact. To properly dispose of these items, they must first be safely deployed, which removes the hazardous characteristics.[18] The EPA includes the waste of automobile airbag propellants under the RCRA. In 2018, the EPA issued a final rule on the handling of automobile airbag propellants. The "interim final rule" provides an exemption for entities which install and remove airbags, including automobile dealerships, salvage yards, automobile repair facilities and collision centers. The handler and transporter are exempt from RCRA, but the airbag waste collection facility is not exempt. Once the airbags have reached the collection facility, they are classified as RCRA hazardous waste and must be disposed of or recycled at a RCRA disposal facility.[18] Electronic waste, often referred to as "E-Waste" or "E-Scrap", is often thrown away or sent to a recycler, and it continues to end up in landfills across the world. The EPA estimates that in 2009, 2.37 million tons of televisions, computers, cell phones, printers, scanners, and fax machines were discarded by US consumers. Only 25% of these devices were recycled; the remainder ended up in landfills across the US. E-waste contains many elements that can be recycled or re-used. Typically, electronics are encased in a plastic or light metal enclosure. Items such as computer boards, wiring, capacitors, and small motors are common types of e-waste. The internal components of these items include iron, gold, palladium, platinum, and copper, all of which are mined from the earth. Operating the equipment used to mine these metals requires energy and emits greenhouse gases into the atmosphere. Donating e-waste to recycling centers or refurbishing this equipment can reduce the greenhouse gases emitted through the mining process, as well as decrease the use of natural resources, helping to ensure future generations will have sufficient access to them. As this issue continued to grow, President Obama established the Interagency Task Force on Electronics Stewardship in November 2010. The overall goal of this task force was to develop a national strategy for the handling and proper disposal of electronic waste. The task force worked with the White House Council on Environmental Quality (CEQ), the EPA, and the US General Services Administration (GSA), and released its final product, the National Strategy for Electronics Stewardship report. The report focuses on four goals of the federal government's plan to enhance the management of electronics:[19] 1. Incentivizing greener design of electronics; 2. Leading by example; 3. Increasing domestic recycling; and 4. Reducing harmful exports of e-waste and building capacity in developing countries.[19] E-waste is not only a problem in the US, but also a global issue. Tackling this issue requires collaboration from multiple agencies across the world, including the U.S. EPA, the Taiwan Environmental Protection Administration (Taiwan EPA), the International E-Waste Management Network (IEMN), and environmental offices from Asia, Latin America, the Caribbean, Africa, and North America.[20] Mixed waste is a term that has different definitions based on its context.
Most commonly, mixed waste refers to hazardous waste which contains radioactive material. In this context, the management of mixed waste is regulated by the EPA under RCRA and the Atomic Energy Act: the hazardous materials content is regulated under RCRA, while the radiological component is regulated by the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). Mixed waste can also be defined as a type of waste which combines recyclable materials and organic materials.[21] Some examples of mixed waste in this context include a combination of broken glassware, floor sweepings, non-repairable household goods, non-recyclable plastic and metal, clothing, and furnishings. Additionally, ashes, soot, and residential renovation waste materials are also included under this definition.[21] Medical waste is typically generated from hospitals, physicians' offices, dental practices, blood banks, veterinary offices, and research facilities. This waste has often been contaminated with bodily fluids from humans or animals, such as blood, vomit, urine, and other bodily fluids. Concerns began to mount when medical waste was appearing on east coast beaches in the 1980s, which forced Congress to pass the Medical Waste Tracking Act. This act was only in effect for approximately 3 years, after the EPA concluded that "disease-causing medical waste was greatest at the point of generation and naturally tapers off after that point."[22] Prior to the Hospital Medical Infectious Waste Incinerator (HMIWI) standard of 1997, approximately 90% of infectious waste was incinerated. Because incineration can negatively affect air quality, alternative treatment and disposal technologies for medical waste were developed. There are many issues that surround reporting waste. It is most commonly measured by size or weight, and there is a stark difference between the two. For example, organic waste is much heavier when it is wet, and plastic or glass bottles can have different weights but be the same size.[23] On a global scale it is difficult to report waste because countries have different definitions of waste and of what falls into waste categories, as well as different ways of reporting. Based on incomplete reports from its parties, the Basel Convention estimated 338 million tonnes of waste was generated in 2001.[24] For the same year, the OECD estimated 4 billion tonnes from its member countries.[25] Despite these inconsistencies, waste reporting is still useful on a small and large scale to determine key causes and locations, and to find ways of preventing, minimizing, recovering, treating, and disposing of waste. Inappropriately managed waste can attract rodents and insects, which can harbor gastrointestinal parasites, yellow fever, worms, various diseases, and other conditions for humans, and exposure to hazardous wastes, particularly when they are burned, can cause various other diseases including cancers.[26] Toxic waste materials can contaminate surface water, groundwater, soil, and air, which causes more problems for humans, other species, and ecosystems.[27] Waste disposal methods involving combustion create a significant amount of greenhouse gases. When the burned waste contains metals, it can create toxic gases.
On the other hand, when the waste contains plastics, the gases produced contain CO2.[28] As global warming and CO2 emissions increase, soil becomes a larger carbon sink and will become increasingly valuable for plant life.[29] Waste management is a significant environmental justice issue. Many of the environmental burdens cited above are more often borne by marginalized groups, such as racial minorities, women, and residents of developing nations. NIMBY (not in my back yard) is the opposition of residents to a proposal for a new development because it is close to them.[30] However, the need for expansion and siting of waste treatment and disposal facilities is increasing worldwide. There is now a growing market in the transboundary movement of waste, and although most waste that flows between countries goes between developed nations, a significant amount of waste is moved from developed to developing nations.[31] The economic costs of managing waste are high, and are often paid for by municipal governments;[32] money can often be saved with more efficiently designed collection routes, modified vehicles, and public education. Environmental policies such as pay as you throw can reduce the cost of management and reduce waste quantities. Waste recovery (that is, recycling and reuse) can curb economic costs because it avoids extracting raw materials and often cuts transportation costs. Economic assessments of municipal waste management systems have been carried out in case studies using a combination of life-cycle assessment (LCA) and life-cycle costing (LCC).[33] The location of waste treatment and disposal facilities often reduces property values due to noise, dust, pollution, unsightliness, and negative stigma. The informal waste sector consists mostly of waste pickers who scavenge for metals, glass, plastic, textiles, and other materials and then trade them for a profit. This sector can significantly alter or reduce waste in a particular system, but other negative economic effects come with the disease, poverty, exploitation, and abuse of its workers.[34] People in developing countries suffer from contaminated water and landfills caused by unlawful government policies that allow first-world countries and companies to transport their trash to those countries, oftentimes near bodies of water. Those same governments do not use any waste trade profits to create ways to manage landfills or clean water sources. Photographer Kevin McElvaney[35] documents the world's biggest e-waste dump, Agbogbloshie in Accra, Ghana, which used to be a wetland. The young men and children who work in Agbogbloshie smash devices to get to the metals, and suffer burns, eye damage, lung and back problems, chronic nausea, debilitating headaches, and respiratory problems; most workers die from cancer in their 20s.[35] McElvaney's photos show children in fields burning refrigerators and computers, with blackened hands and ruined clothes, and animals, such as cows with open wounds, in the dumpsite. There are piles of waste used as makeshift bridges over lakes, with metals and chemicals seeping into the water and groundwater that could be linked to homes' water systems. The same unfortunate situation and similar dumps and landfills can be seen in other countries considered part of the third world, such as other West African countries and China. Many are advocating for waste management, a stop to the waste trade, the creation of wastewater treatment facilities, and the provision of a clean and accessible water source.
For the people working and living around these dumpsites and waterways, health and clean water are basic necessities and rights that are being taken away.[35] Waste management or waste disposal includes the processes and actions required to manage waste from its inception to its final disposal.[36] This includes the collection, transport, treatment, and disposal of waste, together with monitoring and regulation of the waste management process and waste-related laws, technologies, and economic mechanisms. Waste can be solid, liquid, or gaseous, and each type has different methods of disposal and management. Waste management deals with all types of waste, including industrial, chemical, municipal, organic, biomedical, and radioactive wastes. In some cases, waste can pose a threat to human health.[37] Health issues are associated with the entire process of waste management, and can arise directly or indirectly: directly through the handling of solid waste, and indirectly through the consumption of water, soil, and food.[37] Waste is produced by human activity, for example the extraction and processing of raw materials.[38] Waste management is intended to reduce the adverse effects of waste on human health, the environment, planetary resources, and aesthetics. The aim of waste management is to reduce the dangerous effects of such waste on the environment and human health. A big part of waste management deals with municipal solid waste, which is created by industrial, commercial, and household activity.[39] Waste management practices are not the same across countries (developed and developing nations) or regions (urban and rural areas), and the residential and industrial sectors can take different approaches.[40] Proper management of waste is important for building sustainable and liveable cities, but it remains a challenge for many developing countries and cities. A report found that effective waste management is relatively expensive, usually comprising 20%–50% of municipal budgets. Operating this essential municipal service requires integrated systems that are efficient, sustainable, and socially supported.[41] A large portion of waste management practices deal with municipal solid waste (MSW), which is the bulk of the waste created by household, industrial, and commercial activity.[42] According to the Intergovernmental Panel on Climate Change (IPCC), municipal solid waste is expected to reach approximately 3.4 Gt by 2050; however, policies and lawmaking can reduce the amount of waste produced in different areas and cities of the world.[43] Measures of waste management include measures for integrated techno-economic mechanisms[44] of a circular economy, effective disposal facilities, export and import control[45][46] and optimal sustainable design of the products that are produced. In the first systematic review of the scientific evidence around global waste, its management, and its impact on human health and life, the authors concluded that about a fourth of all municipal solid terrestrial waste is not collected and an additional fourth is mismanaged after collection, often being burned in open and uncontrolled fires – close to one billion tons per year when combined. They also found that broad priority areas each lack a "high-quality research base", partly due to the absence of "substantial research funding", which motivated scientists often require.[47][48] Electronic waste (e-waste) includes discarded computer monitors, motherboards, mobile phones and chargers, compact discs (CDs), headphones, television sets, air conditioners and refrigerators.
According to the Global E-waste Monitor 2017, India generates about 2 million tonnes (Mt) of e-waste annually and ranks fifth among e-waste producing countries, after the United States, the People's Republic of China, Japan and Germany.[49] Wastewater treatment facilities remove pollutants and contaminants physically and chemically, cleaning water so that it can be returned to society. The South Gippsland Water Organization breaks waste-water treatment down into three steps. Primary treatment sifts the water to remove large solids, leaving oils and small particles in the water. Secondary treatment dissolves or removes oils, particles, and micro-organisms from the water, preparing it for tertiary treatment, which chemically disinfects the water with chlorine or with UV light. "For most industrial applications, a 150,000 GPD capacity WWTS would cost an estimated $500,000 to $1.5 million inclusive of all necessary design, engineering, equipment, installation, and startup".[51] With such a simple solution that has been proven to clean water for reuse and is relatively inexpensive, there is no excuse why there should not be a waste-water treatment facility in every country, every state, and every town. "Right now, according to a NASA-led study, many of the world's freshwater sources are being drained faster than they are being replenished. The water table is dropping all over the world. There's not an infinite supply of water".[52] There is a need to preserve every resource and every finite water source we have left in order to maintain our lives and lifestyles. When countries that are able to do so help under-developed countries build wastewater treatment, society benefits. Another cost of not adding wastewater treatment in these countries is that people have no choice but to clean with, cook with, or drink contaminated water, which has caused millions of cases of disease and death. "Between 400,000 and 1 million people die each year in developing countries because of diseases caused by mismanaged waste, estimates poverty charity Tearfund".[53] Society has the means to decrease or even eliminate this cause of death and save millions of lives by providing the simple human necessity of clean water. Resource recovery is using wastes as an input material to create valuable products as new outputs. The aim is to reduce the amount of waste generated, thereby reducing the need for landfill space and optimising the value created from waste.[54] Resource recovery delays the need to use raw materials in the manufacturing process. Materials found in municipal solid waste, construction and demolition waste,[55] commercial waste and industrial wastes can be used to recover resources for the manufacturing of new materials and products. Plastic, paper, aluminium, glass and metal are examples of where value can be found in waste.[citation needed] Resource recovery goes further than just the management of waste. Resource recovery is part of a circular economy, in which the extraction of natural resources and generation of wastes are minimised, and in which materials and products are designed more sustainably for durability, reuse, repairability, remanufacturing and recycling.[56] Life-cycle analysis (LCA) can be used to compare the resource recovery potential of different treatment technologies. Energy recovery from waste is the use of non-recyclable waste materials to extract heat, electricity, or energy through a variety of processes, including combustion, gasification, pyrolyzation, and anaerobic digestion.[58] This process is referred to as waste-to-energy.
There are several ways to recover energy from waste.Anaerobic digestionis a naturally occurring process ofdecompositionwhereorganic matteris reduced to a simpler chemical component in the absence ofoxygen.[58]Incinerationor direct controlled burning of municipalsolid wastereduces waste and makesenergy. Secondary recovered fuel is the energy recovery from waste that cannot be reused or recycled from mechanical and biological treatment activities.[58]Pyrolysisinvolves heating of waste, with the absence of oxygen, to high temperatures to break down anycarboncontent into a mixture of gaseous and liquid fuels and solid residue.[58]Gasificationis the conversion of carbon rich material through high temperature with partial oxidation into a gas stream.[58]Plasma archeating is the very high heating of municipal solid waste to temperatures ranging from 3,000 to 10,000 °C, where energy is released by an electrical discharge in aninert atmosphere.[58] Using waste as fuel can offer important environmental benefits. It can provide a safe andcost-effectiveoption for wastes that would normally have to be dealt with through disposal.[58]It can help reducecarbon dioxideemissions by diverting energy use from fossil fuels, while also generating energy and using waste as fuel can reduce themethane emissionsgenerated in landfills by averting waste from landfills.[58] There is some debate in the classification of certain biomass feedstock as wastes. Crude Tall Oil (CTO), a co-product of thepulp and papermakingprocess, is defined as a waste or residue in some European countries when in fact it is produced “on purpose” and has significant value add potential in industrial applications. Several companies use CTO to produce fuel,[59]while the pine chemicals industry maximizes it as a feedstock “producing low-carbon, bio-based chemicals” through cascading use.[60] Educationandawarenessin the area of waste andwaste managementis increasingly important from a global perspective ofresource management. TheTalloires Declarationis a declaration forsustainabilityconcerned about the unprecedented scale and speed of environmentalpollutionanddegradation, and thedepletionofnatural resources. Local, regional, and globalairpollution; accumulation and distribution of toxic wastes; destruction and depletion of forests,soil, andwater; depletion of theozone layerand emission of "green house" gases threaten the survival of humans and thousands of other living species, the integrity of the earth and itsbiodiversity, the security of nations, and the heritage of future generations. Several universities have implemented the Talloires Declaration by establishingenvironmental managementand waste management programs, e.g. the waste management university project.Universityandvocationaleducation are promoted by various organizations, e.g.WAMITABandChartered Institution of Wastes Management.
https://en.wikipedia.org/wiki/Waste
Electronic design automation(EDA), also referred to aselectronic computer-aided design(ECAD),[1]is a category ofsoftware toolsfor designingelectronic systemssuch asintegrated circuitsandprinted circuit boards. The tools work together in adesign flowthat chip designers use to design and analyze entiresemiconductorchips. Since a modernsemiconductorchip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect tointegrated circuits(ICs). The earliest electronic design automation is attributed toIBMwith the documentation of its700 seriescomputers in the 1950s.[2] Prior to the development of EDA,integrated circuitswere designed by hand and manually laid out.[3]Some advanced shops used geometric software to generate tapes for aGerberphotoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era wasCalma, whoseGDSIIformat is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the firstplacement and routingtools were developed; as this occurred, the proceedings of theDesign Automation Conferencecatalogued the large majority of the developments of the time.[3] The next era began following the publication of "Introduction toVLSISystems" byCarver MeadandLynn Conwayin 1980,[4]and is considered the standard textbook for chip design.[5]The result was an increase in the complexity of the chips that could be designed, with improved access todesign verificationtools that usedlogic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools have evolved, this general approach of specifying the desired behavior in a textual programming language and letting the tools derive the detailed physical design remains the basis of digital IC design today. The earliest EDA tools were produced academically. One of the most famous was the "Berkeley VLSI Tools Tarball", a set ofUNIXutilities used to design early VLSI systems. Widely used were theEspresso heuristic logic minimizer,[6]responsible for circuit complexity reductions andMagic,[7]a computer-aided design platform. Another crucial development was the formation ofMOSIS,[8]a consortium of universities and fabricators that developed an inexpensive way to train student chip designers by producing real integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC processes and pack a large number of projects perwafer, with several copies of chips from each project remaining preserved. Cooperating fabricators either donated the processed wafers or sold them at cost, as they saw the program as helpful to their own long-term growth. 1981 marked the beginning of EDA as an industry. For many years, the larger electronic companies, such asHewlett-Packard,TektronixandIntel, had pursued EDA internally, with managers and developers beginning to spin out of these companies to concentrate on EDA as a business.Daisy Systems,Mentor GraphicsandValid Logic Systemswere all founded around this time and collectively referred to as DMV. In 1981, theU.S. Department of Defenseadditionally began funding ofVHDLas a hardware description language. 
Within a few years, there were many companies specializing in EDA, each with a slightly different emphasis. The first trade show for EDA was held at the Design Automation Conference in 1984, and in 1986 Verilog, another popular high-level design language, was first introduced as a hardware description language by Gateway Design Automation. Simulators quickly followed these introductions, permitting direct simulation of chip designs and executable specifications. Within several years, back-ends were developed to perform logic synthesis. Current digital flows are extremely modular, with front ends producing standardized design descriptions that compile into invocations of units similar to cells without regard to their individual technology. Cells implement logic or other electronic functions using a particular integrated circuit technology. Fabricators generally provide libraries of components for their production processes, with simulation models that fit standard simulation tools. Most analog circuits are still designed in a manual fashion, requiring specialist knowledge that is unique to analog design (such as matching concepts).[9] Hence, analog EDA tools are far less modular, since many more functions are required, they interact more strongly, and the components are, in general, less ideal. EDA for electronics has rapidly increased in importance with the continuous scaling of semiconductor technology.[10] Some users are foundry operators, who operate the semiconductor fabrication facilities ("fabs"); others are design-service companies who use EDA software to evaluate an incoming design for manufacturing readiness. EDA tools are also used for programming design functionality into FPGAs (field-programmable gate arrays), customisable integrated circuit designs. A design flow is characterised by several primary components. Leading vendors have been ranked by market capitalization and company name as of March 2023 and, earlier, as of December 2011.[19] Many EDA companies acquire small companies with software or other technology that can be adapted to their core business.[24] Most of the market leaders are amalgamations of many smaller companies, and this trend is helped by the tendency of software companies to design tools as accessories that fit naturally into a larger vendor's suite of programs for digital circuitry; many new tools incorporate analog design and mixed systems.[25] This is happening due to a trend to place entire electronic systems on a single chip.
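As a loose illustration of the cell-based abstraction described above, the following hypothetical C sketch represents a tiny design as instances of two library cells and evaluates it by propagating net values. The structures and names are invented for illustration; real flows use standard formats such as gate-level netlists and vendor cell libraries.

```c
/* Minimal, hypothetical sketch of the "cell library" idea from digital EDA
 * flows: the design is a list of instances of library cells (here just NAND2
 * and INV), and a trivial evaluator propagates logic values through the nets. */
#include <stdio.h>

typedef enum { CELL_INV, CELL_NAND2 } CellType;

typedef struct {
    CellType type;
    int in0, in1;   /* indices of driving nets (in1 ignored for INV) */
    int out;        /* index of the driven net */
} Instance;

int main(void) {
    int net[4] = {1, 0, 0, 0};                /* nets 0 and 1 are primary inputs */
    Instance design[] = {
        { CELL_NAND2, 0, 1, 2 },              /* net2 = !(net0 & net1)           */
        { CELL_INV,   2, 0, 3 },              /* net3 = !net2  -> AND of inputs  */
    };
    for (size_t i = 0; i < sizeof design / sizeof design[0]; i++) {
        Instance g = design[i];
        net[g.out] = (g.type == CELL_INV) ? !net[g.in0]
                                          : !(net[g.in0] & net[g.in1]);
    }
    printf("net3 = %d\n", net[3]);            /* prints 0 for inputs 1, 0 */
    return 0;
}
```

The point of the abstraction is that the front end only decides which cells are instantiated and how their nets connect; what a NAND2 cell physically is belongs to the fabricator's library.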
https://en.wikipedia.org/wiki/Electronic_design_automation
Integrated circuit design,semiconductor design,chip designorIC design, is a sub-field ofelectronics engineering, encompassing the particularlogicandcircuit designtechniques required to designintegrated circuits(ICs). An IC consists of miniaturizedelectronic componentsbuilt into anelectrical networkon a monolithicsemiconductorsubstrate byphotolithography. IC design can be divided into the broad categories ofdigitalandanalogIC design. Digital IC design is to produce components such asmicroprocessors,FPGAs, memories (RAM,ROM, andflash) and digitalASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design andRFIC design. Analog IC design is used in the design ofop-amps,linear regulators,phase locked loops,oscillatorsandactive filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, andresistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger area active devices than digital designs and are usually less dense in circuitry.[1] Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. Therulesfor what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for itsstatisticalnature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use ofautomated design toolsin the IC design process. The design of some processors has become complicated enough to be difficult to fully test, and this has caused problems at large cloud providers.[2]In short, the design of an IC usingEDA softwareis the design, test, and verification of the instructions that the IC is to carry out. Artificial Intelligence has been demonstrated in chip design for creating chip layouts which are the locations of standard cells and macro blocks in a chip.[3] Integrated circuit design involves the creation of electronic components, such astransistors,resistors,capacitorsand theinterconnectionof these components onto a piece of semiconductor, typicallysilicon. A method to isolate the individual components formed in thesubstrateis necessary since the substrate silicon is conductive and often forms an active region of the individual components. The two common methods arep-n junction isolationanddielectric isolation. Attention must be given to power dissipation of transistors and interconnect resistances and current density of the interconnect,contacts and viassince ICs contain very tiny devices compared to discrete components, where such concerns are less of an issue.Electromigrationin metallic interconnect andESDdamage to the tiny components are also of concern. Finally, the physical layout of certain circuit subblocks is typically critical, in order to achieve the desired speed of operation, to segregate noisy portions of an IC from quiet portions, to balance the effects of heat generation across the IC, or to facilitate theplacementof connections to circuitry outside the IC. 
A typical IC design cycle involves several steps, carried out broadly in sequence. Focused ion beams may be used during chip development to establish new connections in a chip.[4][5] Roughly speaking, digital IC design can be divided into three parts. Note that the second step, RTL design, is responsible for the chip doing the right thing. The third step, physical design, does not affect the functionality at all (if done correctly) but determines how fast the chip operates and how much it costs. A standard cell normally represents a single logic gate, a diode, or simple logic components such as flip-flops or logic gates with multiple inputs.[6] The use of standard cells allows the chip's design to be split into logical and physical levels. A fabless company would normally only work on the logical design of a chip, determining how cells are connected and the functionality of the chip while following the design rules of the foundry the chip will be made in; the physical design of the chip, i.e. the cells themselves, is normally done by the foundry and comprises the physics of the transistor devices and how they are connected to form a logic gate. Standard cells allow chips to be designed and modified more quickly to respond to market demands, but this comes at the cost of lower transistor density in the chip and thus larger die sizes.[6] Foundries supply libraries of standard cells to fabless companies, for design purposes and to allow manufacturing of their designs using the foundry's facilities. A process design kit (PDK) may be provided by the foundry; it may include the standard cell library as well as the specifications of the cells, tools to verify the fabless company's design against the design rules specified by the foundry, and tools to simulate the design using the foundry's cells. PDKs may be provided under non-disclosure agreements. Macros/macrocells/macro blocks,[7] macrocell arrays and IP blocks have greater functionality than standard cells and are used similarly; there are soft macros and hard macros. Standard cells are usually placed in standard-cell rows. The integrated circuit (IC) development process starts with defining product requirements, progresses through architectural definition, implementation and bringup, and finally reaches production. The various phases of the integrated circuit development process are described below. Although the phases are presented here in a straightforward fashion, in reality there is iteration, and these steps may occur multiple times. Before an architecture can be defined, some high-level product goals must be defined. The requirements are usually generated by a cross-functional team that addresses market opportunity, customer needs, feasibility, and much more. This phase should result in a product requirements document. The architecture defines the fundamental structure, goals and principles of the product. It defines high-level concepts and the intrinsic value proposition of the product. Architecture teams take into account many variables and interface with many groups. People creating the architecture generally have a significant amount of experience dealing with systems in the area for which the architecture is being created. The work product of the architecture phase is an architectural specification. The micro-architecture is a step closer to the hardware. It implements the architecture and defines specific mechanisms and structures for achieving that implementation.
The result of the micro-architecture phase is a micro-architecture specification which describes the methods used to implement the architecture. In the implementation phase the design itself is created using the micro-architectural specification as the starting point. This involves low-level definition and partitioning, writing code, entering schematics and verification. This phase ends with a design reaching tapeout. After a design is created, taped out and manufactured, actual hardware, 'first silicon', is received and taken into the lab, where it goes through bringup. Bringup is the process of powering, testing and characterizing the design in the lab. Numerous tests are performed, starting from very simple tests such as ensuring that the device will power on, up to much more complicated tests which try to stress the part in various ways. The result of the bringup phase is documentation of characterization data (how well the part performs to spec) and errata (unexpected behavior). Productization is the task of taking a design from engineering into mass-production manufacturing. Although a design may have successfully met the specifications of the product in the lab during the bringup phase, there are many challenges that product engineers face when trying to mass-produce those designs. The IC must be ramped up to production volumes with an acceptable yield. The goal of the productization phase is to reach mass-production volumes at an acceptable cost. Once a design is mature and has reached mass production it must be sustained. The process must be continually monitored and problems dealt with quickly to avoid a significant impact on production volumes. The goal of sustaining is to maintain production volumes and continually reduce costs until the product reaches end of life. The initial chip design process begins with system-level design and microarchitecture planning. Within IC design companies, management and often analytics will draft a proposal for a design team to start the design of a new chip to fit into an industry segment. Upper-level designers meet at this stage to decide how the chip will operate functionally. This step is where an IC's functionality and design are decided. IC designers will map out the functional requirements, verification testbenches, and testing methodologies for the whole project, and will then turn the preliminary design into a system-level specification that can be simulated with simple models using languages like C++ and MATLAB and emulation tools. For pure and new designs, the system design stage is where an instruction set and operation are planned out, and in most chips existing instruction sets are modified for newer functionality. Design at this stage is often expressed in statements such as "encodes in the MP3 format" or "implements IEEE floating-point arithmetic". At later stages in the design process, each of these innocent-looking statements expands to hundreds of pages of textual documentation. Upon agreement of a system design, RTL designers then implement the functional models in a hardware description language like Verilog, SystemVerilog, or VHDL. Using digital design components like adders, shifters, and state machines, as well as computer architecture concepts like pipelining, superscalar execution, and branch prediction, RTL designers will break a functional description into hardware models of components on the chip working together.
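RTL itself is written in hardware description languages such as Verilog or VHDL; as a language-neutral, hypothetical illustration of the kind of clocked building block an RTL description captures, here is a tiny cycle-based C model of a 4-bit counter with synchronous reset.

```c
/* Hypothetical C stand-in for a clocked RTL building block (real RTL would be
 * written in Verilog or VHDL): a 4-bit counter with synchronous reset,
 * advanced once per simulated clock cycle. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint8_t count; } Counter;      /* the register state */

/* one rising clock edge: next-state logic, analogous to a clocked process */
static void counter_clock(Counter *c, int reset) {
    c->count = reset ? 0 : (uint8_t)((c->count + 1) & 0xF);
}

int main(void) {
    Counter c = { 0 };
    for (int cycle = 0; cycle < 20; cycle++) {
        counter_clock(&c, cycle == 0);           /* assert reset on the first cycle */
        printf("cycle %2d  count %u\n", cycle, c.count);
    }
    return 0;
}
```

An RTL description of the same counter would state this next-state logic once and let synthesis map it onto flip-flops and gates from the standard-cell library.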
Each of the simple statements described in the system design can easily turn into thousands of lines ofRTLcode, which is why it is extremely difficult to verify that the RTL will do the right thing in all the possible cases that the user may throw at it. To reduce the number of functionality bugs, a separate hardware verification group will take the RTL and design testbenches and systems to check that the RTL actually is performing the same steps under many different conditions, classified as the domain offunctional verification. Many techniques are used, none of them perfect but all of them useful – extensivelogic simulation,formal methods,hardware emulation,lint-like code checking,code coverage, and so on. Verification such as that done by emulators can be carried out in FPGAs or special processors,[8][9]and emulation replaced simulation. Simulation was initially done by simulating logic gates in chips but later on, RTLs in chips were simulated instead.[10]Simulation is still used when creating analog chip designs.[11]Prototyping platforms are used to run software on prototypes of the chip design while it is under development using FPGAs but are slower to iterate on or modify and can't be used to visualize hardware signals as they would appear in the finished design.[12] A tiny error here can make the whole chip useless, or worse. The famousPentium FDIV bugcaused the results of a division to be wrong by at most 61 parts per million, in cases that occurred very infrequently. No one even noticed it until the chip had been in production for months. YetIntelwas forced to offer to replace, for free, every chip sold until they could fix the bug, at a cost of $475 million (US).[citation needed] RTL is only a behavioral model of the actual functionality of what the chip is supposed to operate under. It has no link to a physical aspect of how the chip would operate in real life at the materials, physics, and electrical engineering side. For this reason, the next step in the IC design process,physical designstage, is to map the RTL into actual geometric representations of all electronics devices, such as capacitors, resistors, logic gates, and transistors that will go on the chip. The main steps of physical design are listed below. In practice there is not a straightforward progression - considerable iteration is required to ensure all objectives are met simultaneously. This is a difficult problem in its own right, calleddesign closure. Before the advent of the microprocessor and software based design tools, analog ICs were designed using hand calculations and process kit parts. These ICs were low complexity circuits, for example,op-amps, usually involving no more than ten transistors and few connections. An iterative trial-and-error process and "overengineering" of device size was often necessary to achieve a manufacturable IC. Reuse of proven designs allowed progressively more complicated ICs to be built upon prior knowledge. When inexpensive computer processing became available in the 1970s, computer programs were written to simulate circuit designs with greater accuracy than practical by hand calculation. The first circuit simulator for analog ICs was calledSPICE(Simulation Program with Integrated Circuits Emphasis). Computerized circuit simulation tools enable greater IC design complexity than hand calculations can achieve, making the design of analogASICspractical. 
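One common pattern in the functional verification described earlier in this section is to drive the design under test and a trusted reference model with the same stimuli and compare their outputs. The following hypothetical C sketch applies that idea to a shift-and-add multiplier standing in for an RTL block; real verification environments use dedicated testbench languages and far richer stimulus generation and coverage collection.

```c
/* Hypothetical sketch of reference-model checking: a "design under test"
 * (a shift-and-add multiplier) is compared against a trusted golden model
 * (the native multiply) over many pseudo-random stimuli. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

static uint32_t dut_multiply(uint16_t a, uint16_t b) {   /* design under test */
    uint32_t acc = 0;
    for (int i = 0; i < 16; i++)
        if (b & (1u << i))
            acc += (uint32_t)a << i;
    return acc;
}

int main(void) {
    srand(1);                                    /* fixed seed: reproducible run */
    for (int i = 0; i < 100000; i++) {
        uint16_t a = (uint16_t)rand(), b = (uint16_t)rand();
        uint32_t expected = (uint32_t)a * b;     /* golden reference model */
        if (dut_multiply(a, b) != expected) {
            printf("MISMATCH a=%u b=%u\n", a, b);
            return 1;
        }
    }
    puts("all stimuli matched the reference model");
    return 0;
}
```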
As many functional constraints must be considered in analog design, manual design is still widespread today, in contrast to digital design which is highly automated, including automated routing and synthesis.[14]As a result, modern design flows for analog circuits are characterized by two different design styles – top-down and bottom-up.[15]The top-down design style makes use of optimization-based tools similar to conventional digital flows. Bottom-up procedures re-use “expert knowledge” with the result of solutions previously conceived and captured in a procedural description, imitating an expert's decision.[15]An example are cell generators, such asPCells. A challenge most critical to analog IC design involves the variability of the individual devices built on the semiconductor chip. Unlike board-level circuit design which permits the designer to select devices that have each been tested and binned according to value, the device values on an IC can vary widely which are uncontrollable by the designer. For example, some IC resistors can vary ±20% and β of an integratedBJTcan vary from 20 to 100. In the latest CMOS processes, β of vertical PNP transistors can even go below 1. To add to the design challenge, device properties often vary between each processed semiconductor wafer. Device properties can even vary significantly across each individual IC due to dopinggradients. The underlying cause of this variability is that many semiconductor devices are highly sensitive to uncontrollable random variances in the process. Slight changes to the amount of diffusion time, uneven doping levels, etc. can have large effects on device properties. Some design techniques used to reduce the effects of the device variation are:[16] The three largest companies sellingelectronic design automationtools areSynopsys,Cadence, andMentor Graphics.[17]
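To make the variability problem described above concrete, the following hypothetical Monte Carlo sketch evaluates a simple resistive divider with each resistor drawn independently within ±20% of its nominal value; the component values and tolerance are illustrative only.

```c
/* Hypothetical Monte Carlo sketch of device variability: the output of a
 * divider Vout = Vin * R2/(R1+R2) is sampled with R1 and R2 each drawn
 * uniformly within +/-20% of nominal, showing the spread a designer would
 * face without matching techniques. */
#include <stdio.h>
#include <stdlib.h>

static double uniform_pm20(double nominal) {     /* +/-20% uniform variation */
    return nominal * (0.8 + 0.4 * ((double)rand() / RAND_MAX));
}

int main(void) {
    const double vin = 1.0, r1_nom = 10e3, r2_nom = 10e3;
    double vmin = 1.0, vmax = 0.0;
    srand(42);
    for (int i = 0; i < 100000; i++) {
        double r1 = uniform_pm20(r1_nom), r2 = uniform_pm20(r2_nom);
        double vout = vin * r2 / (r1 + r2);
        if (vout < vmin) vmin = vout;
        if (vout > vmax) vmax = vout;
    }
    printf("Vout spread: %.3f V to %.3f V (nominal 0.500 V)\n", vmin, vmax);
    return 0;
}
```

In practice two resistors on the same die track each other far more closely than this worst case, which is why matching-based design techniques are preferred over relying on absolute device values.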
https://en.wikipedia.org/wiki/Integrated_circuit_design
Network architecture is the design of a computer network. It is a framework for the specification of a network's physical components and their functional organization and configuration, its operational principles and procedures, as well as the communication protocols used.

In telecommunications, the specification of a network architecture may also include a detailed description of products and services delivered via a communications network, as well as detailed rate and billing structures under which services are compensated.

The network architecture of the Internet is predominantly expressed by its use of the Internet protocol suite, rather than by a specific model for interconnecting networks or nodes in the network, or by the usage of specific types of hardware links.

The Open Systems Interconnection model (OSI model) defines and codifies the concept of layered network architecture. Abstraction layers are used to subdivide a communications system further into smaller manageable parts. A layer is a collection of similar functions that provides services to the layer above it and receives services from the layer below it. On each layer, an instance provides services to the instances at the layer above and requests services from the layer below.[2]

In distributed computing, the network architecture often describes the structure and classification of a distributed application architecture, as the participating nodes in a distributed application are often referred to as a network.[3] For example, the applications architecture of the public switched telephone network (PSTN) has been termed the Intelligent Network. There are a number of specific classifications, but all lie on a continuum between the dumb network (e.g. the Internet) and the intelligent network (e.g. the PSTN).

A popular example of such usage of the term in distributed applications, as well as permanent virtual circuits, is the organization of nodes in peer-to-peer (P2P) services and networks. P2P networks usually implement overlay networks running over an underlying physical or logical network. These overlay networks may implement certain organizational structures of the nodes according to several distinct models, the network architecture of the system.[citation needed]
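As a toy illustration of the layering principle described above, the sketch below (layer names and header format are made up for the example, not taken from any standard) shows each layer adding only its own header on the way down and removing it on the way up, which is the service relationship the OSI model codifies.

```python
# Toy sketch of layered encapsulation: each layer uses only the service of the
# layer beneath it and knows nothing about the payload it carries.
LAYERS = ["application", "transport", "network", "link"]

def send(payload: str) -> str:
    """Encapsulate the payload top-down, as a sending protocol stack would."""
    for layer in LAYERS:
        payload = f"[{layer}-hdr|{payload}]"
    return payload

def receive(frame: str) -> str:
    """Strip headers bottom-up, as the receiving protocol stack would."""
    for layer in reversed(LAYERS):
        prefix, suffix = f"[{layer}-hdr|", "]"
        assert frame.startswith(prefix) and frame.endswith(suffix)
        frame = frame[len(prefix):-len(suffix)]
    return frame

wire = send("hello")
print(wire)          # outermost header belongs to the lowest layer
print(receive(wire)) # "hello" recovered by peeling the layers in reverse order
```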
https://en.wikipedia.org/wiki/Network_architecture
A network on a chip or network-on-chip (NoC /ˌɛnˌoʊˈsiː/ en-oh-SEE or /nɒk/ knock)[nb 1] is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.

NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018.[citation needed]

In the 2000s, researchers started to propose on-chip interconnection in the form of packet switching networks[1] in order to address the scalability issues of bus-based design. Earlier research had proposed designs that route data packets instead of routing wires.[2] The concept of a "network on chip" was then proposed in 2002.[3] NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common.

NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the system-on-chip to have its own clock domain.[4]

NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. The topology determines the physical layout and connections between nodes and channels. A message traverses a number of hops, and each hop's channel length depends on the topology. The topology therefore significantly influences both latency and power consumption. Furthermore, since the topology determines the number of alternative paths between nodes, it affects the network traffic distribution, and hence the network bandwidth and performance achieved.[5]

Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. This results in a dense network topology. For large designs, in particular, this has several limitations from a physical design viewpoint. It requires power quadratic in the number of interconnections. The wires occupy much of the area of the chip, and in nanometer CMOS technology interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles. This also allows more parasitic capacitance, resistance and inductance to accrue on the circuit. (See Rent's rule for a discussion of wiring requirements for point-to-point connections.)

Sparsity and locality of interconnections in the communications subsystem yield several improvements over traditional bus-based and crossbar-based systems. The wires in the links of the network-on-chip are shared by many signals.
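To see how the choice of topology feeds into latency, here is a small sketch (mesh sizes and the routing rule are illustrative choices, not tied to any particular chip) that computes the average hop count under dimension-ordered XY routing on a square 2D mesh, one commonly used NoC topology; larger meshes pay more hops per message on average.

```python
# Average hop count for XY (dimension-ordered) routing on an n x n mesh NoC.
from itertools import product
from statistics import mean

def xy_hops(src, dst):
    """Hops under XY routing on a mesh: Manhattan distance |dx| + |dy|."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def average_hops(width, height):
    nodes = list(product(range(width), range(height)))
    pairs = [(s, d) for s in nodes for d in nodes if s != d]
    return mean(xy_hops(s, d) for s, d in pairs)

for size in (2, 4, 8):
    print(f"{size}x{size} mesh: average hop count = {average_hops(size, size):.2f}")
```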
A high level of parallelism is achieved, because all data links in the NoC can operate simultaneously on different data packets.[why?] Therefore, as the complexity of integrated systems keeps growing, a NoC provides enhanced performance (such as throughput) and scalability in comparison with previous communication architectures (e.g., dedicated point-to-point signal wires, shared buses, or segmented buses with bridges). The algorithms[which?] must be designed in such a way that they offer large parallelism and can hence utilize the potential of the NoC.

Some researchers[who?] think that NoCs need to support quality of service (QoS), namely achieve the various requirements in terms of throughput, end-to-end delays, fairness,[6] and deadlines.[citation needed] Real-time computation, including audio and video playback, is one reason for providing QoS support. However, current system implementations like VxWorks, RTLinux or QNX are able to achieve sub-millisecond real-time computing without special hardware.[citation needed] This may indicate that for many real-time applications the service quality of existing on-chip interconnect infrastructure is sufficient, and that dedicated hardware logic would be necessary only to achieve microsecond precision, a degree of guarantee that is rarely needed in practice for end users (sound or video jitter requires only latency guarantees on the order of tenths of milliseconds). Another motivation for NoC-level quality of service (QoS) is to support multiple concurrent users sharing resources of a single chip multiprocessor in a public cloud computing infrastructure. In such instances, hardware QoS logic enables the service provider to make contractual guarantees on the level of service that a user receives, a feature that may be deemed desirable by some corporate or government clients.[citation needed]

Many challenging research problems remain to be solved at all levels, from the physical link level through the network level, and all the way up to the system architecture and application software. The first dedicated research symposium on networks on chip was held at Princeton University in May 2007.[7] The second IEEE International Symposium on Networks-on-Chip was held in April 2008 at Newcastle University.

Research has been conducted on integrated optical waveguides and devices comprising an optical network on a chip (ONoC).[8][9] Another possible way to increase the performance of a NoC is to use wireless communication channels between chiplets, an approach named wireless network on chip (WiNoC).[10]

In a multi-core system connected by a NoC, coherency messages and cache-miss requests have to pass through switches. Accordingly, switches can be augmented with simple tracking and forwarding elements to detect which cache blocks will be requested in the future by which cores. The forwarding elements then multicast any requested block to all the cores that may request it in the future. This mechanism reduces the cache miss rate.[11]

NoC development and studies require comparing different proposals and options. NoC traffic patterns are under development to help such evaluations. Existing NoC benchmarks include NoCBench and MCSL NoC Traffic Patterns.[12]

An interconnect processing unit (IPU)[13] is an on-chip communication network with hardware and software components which jointly implement key functions of different system-on-chip programming models through a set of communication and synchronization primitives, and which provide low-level platform services to enable advanced features[which?] in modern heterogeneous applications[definition needed] on a single die.
Adapted from Avinoam Kolodny's column in the ACM SIGDA e-newsletter by Igor Markov. The original text can be found at http://www.sigda.org/newsletter/2006/060415.txt
https://en.wikipedia.org/wiki/Network_on_a_chip
In mathematics, a covering set for a sequence of integers refers to a set of prime numbers such that every term in the sequence is divisible by at least one member of the set.[1] The term "covering set" is used only in conjunction with sequences possessing exponential growth.

The use of the term "covering set" is related to Sierpinski and Riesel numbers. These are odd natural numbers k for which the formula k·2^n + 1 (Sierpinski number) or k·2^n − 1 (Riesel number) produces no prime numbers.[2] Since 1960 it has been known that there exists an infinite number of both Sierpinski and Riesel numbers (as solutions to families of congruences based upon the set {3, 5, 17, 257, 641, 65537, 6700417}[a]), but, because there are an infinitude of numbers of the form k·2^n + 1 or k·2^n − 1 for any k, one can only prove k to be a Sierpinski or Riesel number by showing that every term in the sequence k·2^n + 1 or k·2^n − 1 is divisible by one of the prime numbers of a covering set.

These covering sets form from prime numbers that in base 2 have short periods. To achieve a complete covering set, Wacław Sierpiński showed that a sequence can repeat no more frequently than every 24 numbers. A repeat every 24 numbers gives the covering set {3, 5, 7, 13, 17, 241}, while a repeat every 36 terms can give several covering sets: {3, 5, 7, 13, 19, 37, 73}; {3, 5, 7, 13, 19, 37, 109}; {3, 5, 7, 13, 19, 73, 109} and {3, 5, 7, 13, 37, 73, 109}.[4]

Riesel numbers have the same covering sets as Sierpinski numbers. Covering sets (and thus Sierpinski numbers and Riesel numbers) also exist for bases other than 2.[5][6][7]

Covering sets are also used to prove the existence of composite generalized Fibonacci sequences with first two terms coprime (primefree sequences), such as the sequence starting with 20615674205555510 and 3794765361567513.

The concept of a covering set can easily be generalised to other sequences, which turn out to be much simpler. In the following examples + is used as it is in regular expressions to mean 1 or more. For example, 91+3 means the set {913, 9113, 91113, 911113, …}.

Examples are the following eight sequences: In each case, every term is divisible by one of the primes from the set {3, 7, 11, 13}.[8] These primes can be said to form a covering set exactly analogous to Sierpinski and Riesel numbers.[9] The covering set {3, 7, 11, 37} is found for several similar sequences,[9] including: Also for bases other than 10: The covering set of them is {5, 13, 29}.

An even simpler case can be found in the sequence: Here, it can be shown that if: Thus we have a covering set with only three primes {3, 7, 13}.[10] This is only possible because the sequence gives integer terms only for odd n.

A covering set also occurs in the sequence: Here, it can be shown that: Since (7·10^k − 1)/3 can be written as 23+, for the sequence 381+ we have a covering set of {3, 37, 23+} – a covering set with infinitely many terms.[9]

The status for (343×10^n − 1)/9 is like that for 3511808×63^n + 1: Thus we have a covering of {37, 109, 152×63 + 1, 152×63^2 + 1, 152×63^3 + 1, ...} or {37, 109, 2Q0+1 in base 63} – a covering set with infinitely many terms.

A simpler example is 4×9^n − 1: it is equal to (2×3^n − 1) × (2×3^n + 1), so its covering sets are {5, 17, 53, 161, 485, ...} and {7, 19, 55, 163, 487, ...}. More generally, if k and b are both r-th powers for an odd r > 1, then k×b^n + 1 cannot be prime, and if k and b are both r-th powers for an r > 1, then k×b^n − 1 cannot be prime.

Another example is 1369×30^n − 1, whose covering is {7, 13, 19, 37×30^k − 1 (k = 1, 2, 3, ...)}.
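The definition can be checked mechanically. The sketch below verifies the widely reported covering set {3, 5, 7, 13, 19, 37, 73} for 78557, the classic Sierpinski number: because the multiplicative order of 2 modulo each of these primes divides 36, checking the exponents n = 1, ..., 36 suffices to cover every term of 78557·2^n + 1.

```python
# Verify a covering set: every term k*2^n + 1 is divisible by some prime in COVER.
K = 78557
COVER = [3, 5, 7, 13, 19, 37, 73]
PERIOD = 36   # lcm of the multiplicative orders of 2 modulo the primes above

# Divisibility of k*2^n + 1 by p depends only on n modulo the order of 2 mod p,
# so the check below repeats with period 36.
assert all(pow(2, PERIOD, p) == 1 for p in COVER)

for n in range(1, PERIOD + 1):
    value = K * 2**n + 1
    divisors = [p for p in COVER if value % p == 0]
    assert divisors, f"n = {n} is not covered"

print(f"every term {K}*2^n + 1 is divisible by a member of {COVER}")
```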
https://en.wikipedia.org/wiki/Covering_set
Inalgebraic geometryand related areas ofmathematics,local analysisis the practice of looking at a problem relative to eachprime numberpfirst, and then later trying to integrate the information gained at each prime into a 'global' picture. These are forms of thelocalizationapproach. Ingroup theory, local analysis was started by theSylow theorems, which contain significant information about the structure of afinite groupGfor each prime numberpdividing the order ofG. This area of study was enormously developed in the quest for theclassification of finite simple groups, starting with theFeit–Thompson theoremthat groups of odd order aresolvable.[1] Innumber theoryone may study aDiophantine equation, for example, modulopfor all primesp, looking for constraints on solutions.[2]The next step is to look modulo prime powers, and then for solutions in thep-adic field. This kind of local analysis provides conditions for solution that arenecessary. In cases where local analysis (plus the condition that there are real solutions) provides alsosufficientconditions, one says that theHasse principleholds: this is the best possible situation. It does forquadratic forms, but certainly not in general (for example forelliptic curves). The point of view that one would like to understand what extra conditions are needed has been very influential, for example forcubic forms. Some form of local analysis underlies both the standard applications of theHardy–Littlewood circle methodinanalytic number theory, and the use ofadele rings, making this one of the unifying principles across number theory.
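A minimal computational example of the first step of local analysis: reducing the equation x^3 + y^3 + z^3 = N modulo the prime power 9 shows that any N congruent to 4 or 5 mod 9 admits no integer solutions at all, a necessary condition obtained purely locally.

```python
# Local obstruction for sums of three cubes: residues mod 9 that a sum of three
# cubes can take. The residues 4 and 5 never occur, so x^3 + y^3 + z^3 = N is
# unsolvable in integers whenever N is congruent to 4 or 5 modulo 9.
cubes_mod_9 = {pow(x, 3, 9) for x in range(9)}            # {0, 1, 8}
sums_mod_9 = {(a + b + c) % 9
              for a in cubes_mod_9 for b in cubes_mod_9 for c in cubes_mod_9}

print("attainable residues:", sorted(sums_mod_9))
print("obstructed residues:", sorted(set(range(9)) - sums_mod_9))   # [4, 5]
```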
https://en.wikipedia.org/wiki/Local_analysis
In algebraic number theory, the Grunwald–Wang theorem is a local-global principle stating that, except in some precisely defined cases, an element x in a number field K is an nth power in K if it is an nth power in the completion K_𝔭 for all but finitely many primes 𝔭 of K. For example, a rational number is a square of a rational number if it is a square of a p-adic number for almost all prime numbers p. It was introduced by Wilhelm Grunwald (1933), but there was a mistake in this original version that was found and corrected by Shianghao Wang (1948). The theorem considered by Grunwald and Wang was more general than the one stated above, as they discussed the existence of cyclic extensions with certain local properties, and the statement about nth powers is a consequence of this.

"Some days later I was with Artin in his office when Wang appeared. He said he had a counterexample to a lemma which had been used in the proof. An hour or two later, he produced a counterexample to the theorem itself... Of course he [Artin] was astonished, as were all of us students, that a famous theorem with two published proofs, one of which we had all heard in the seminar without our noticing anything, could be wrong."

Grunwald (1933), a student of Helmut Hasse, gave an incorrect proof of the erroneous statement that an element in a number field is an nth power if it is an nth power locally almost everywhere. George Whaples (1942) gave another incorrect proof of this incorrect statement. However Wang (1948) discovered the following counterexample: 16 is a p-adic 8th power for all odd primes p, but is not a rational or 2-adic 8th power. In his doctoral thesis (Wang 1950), written under Emil Artin, Wang gave and proved the correct formulation of Grunwald's assertion, by describing the rare cases when it fails. This result is what is now known as the Grunwald–Wang theorem. The history of Wang's counterexample is discussed by Peter Roquette (2005, section 5.3).

Grunwald's original claim that an element that is an nth power almost everywhere locally is an nth power globally can fail in two distinct ways: the element can be an nth power almost everywhere locally but not everywhere locally, or it can be an nth power everywhere locally but not globally.

The element 16 in the rationals is an 8th power at all places except 2, but is not an 8th power in the 2-adic numbers. It is clear that 16 is not a 2-adic 8th power, and hence not a rational 8th power, since the 2-adic valuation of 16 is 4, which is not divisible by 8. Generally, 16 is an 8th power in a field K if and only if the polynomial X^8 − 16 has a root in K. Write

X^8 − 16 = (X^4 − 4)(X^4 + 4) = (X^2 − 2)(X^2 + 2)(X^2 − 2X + 2)(X^2 + 2X + 2).

Thus, 16 is an 8th power in K if and only if 2, −2 or −1 is a square in K. Let p be any odd prime. It follows from the multiplicativity of the Legendre symbol that 2, −2 or −1 is a square modulo p. Hence, by Hensel's lemma, 2, −2 or −1 is a square in Q_p.

16 is not an 8th power in Q(√7) although it is an 8th power locally everywhere (i.e. in Q_p(√7) for all p). This follows from the above and the equality Q_2(√7) = Q_2(√−1).
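The local half of the counterexample is easy to confirm by brute force; the short sketch below checks that x^8 ≡ 16 (mod p) is solvable for every odd prime p below an arbitrary bound (the bound and the naive primality test are just for illustration).

```python
# Brute-force check of the local statement: 16 is an 8th power modulo every odd
# prime, even though it is not a rational (or 2-adic) 8th power.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (n for n in range(3, 500, 2) if is_prime(n)):
    assert any(pow(x, 8, p) == 16 % p for x in range(p)), f"no solution mod {p}"

print("x^8 = 16 (mod p) is solvable for every odd prime p < 500")
```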
Wang's counterexample has the following interesting consequence, showing that one cannot always find a cyclic Galois extension of a given degree of a number field in which finitely many given prime places split in a specified way: there exists no cyclic degree-8 extension K/Q in which the prime 2 is totally inert (i.e., such that K_2/Q_2 is unramified of degree 8).

For any s ≥ 2 let

η_s := ζ_{2^s} + ζ_{2^s}^{−1} = 2 cos(2π/2^s).

Note that the 2^s-th cyclotomic field is

Q(ζ_{2^s}) = Q(i, η_s).

A field is called s-special if it contains η_s, but neither i, η_{s+1} nor iη_{s+1}.

Consider a number field K and a natural number n. Let S be a finite (possibly empty) set of primes of K and put

K(n, S) := {x ∈ K | x ∈ K_𝔭^n for all 𝔭 ∉ S}.

The Grunwald–Wang theorem says that

K(n, S) = K^n

unless we are in the special case, which occurs when the following two conditions both hold: K is s-special with an s such that 2^{s+1} divides n, and S contains the special set S_0 consisting of those (necessarily 2-adic) primes 𝔭 such that K_𝔭 is s-special. In the special case the failure of the Hasse principle is finite of order 2: the kernel of

K^× / (K^×)^n → ∏_{𝔭 ∉ S} K_𝔭^× / (K_𝔭^×)^n

is Z/2Z, generated by the element η_{s+1}^n.

The field of rational numbers K = Q is 2-special since it contains η_2 = 0, but neither i, η_3 = √2 nor iη_3 = √−2. The special set is S_0 = {2}. Thus, the special case in the Grunwald–Wang theorem occurs when n is divisible by 8 and S contains 2. This explains Wang's counterexample and shows that it is minimal. It is also seen that an element in Q is an nth power if it is a p-adic nth power for all p. The field K = Q(√7) is 2-special as well, but with S_0 = ∅. This explains the other counterexample above.[1]
https://en.wikipedia.org/wiki/Grunwald%E2%80%93Wang_theorem
Inmathematics, theGrothendieck–Katz p-curvature conjectureis alocal-global principleforlinear ordinary differential equations, related todifferential Galois theoryand in a loose sense analogous to the result in theChebotarev density theoremconsidered as thepolynomialcase. It is aconjectureofAlexander Grothendieckfrom the late 1960s, and apparently not published by him in any form. The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraicfoliations. In a simplest possible statement the conjecture can be stated in its essentials for a vector system written as for a vectorvof sizen, and ann-by-nmatrixAofalgebraic functionswithalgebraic numbercoefficients. The question is to give a criterion for when there is afull setof algebraic function solutions, meaning a fundamental matrix (i.e.nvector solutions put into ablock matrix). For example, a classical question was for thehypergeometric equation: when does it have a pair of algebraic solutions, in terms of its parameters? The answer is known classically asSchwarz's list. Inmonodromyterms, the question is of identifying the cases of finite monodromy group. By reformulation and passing to a larger system, the essential case is for rational functions inAand rational number coefficients. Then a necessary condition is that foralmost allprime numbersp, the system defined by reduction modulopshould also have a full set of algebraic solutions, over the finite field withpelements. Grothendieck's conjecture is that these necessary conditions, for almost allp, should be sufficient. The connection withp-curvatureis that the modpcondition stated is the same as saying thep-curvature, formed by a recurrence operation onA,[1]is zero; so another way to say it is thatp-curvature of 0 for almost allpimplies enough algebraic solutions of the original equation. Nicholas Katzhas appliedTannakian categorytechniques to show that this conjecture is essentially the same as saying that thedifferential Galois groupG(or strictly speaking theLie algebragof thealgebraic groupG, which in this case is theZariski closureof the monodromy group) can be determined by modpinformation, for a certain wide class of differential equations.[2] A wide class of cases has beenprovedbyBenson FarbandMark Kisin;[3]these equations are on alocally symmetric varietyXsubject to some group-theoretic conditions. This work is based on the previous results of Katz forPicard–Fuchs equations(in the contemporary sense of theGauss–Manin connection), as amplified in the Tannakian direction by André. It also applies a version ofsuperrigidityparticular toarithmetic groups. Other progress has been by arithmetic methods.[4] Nicholas Katz related some cases todeformation theoryin 1972, in a paper where the conjecture was published.[5]Since then, reformulations have been published. Aq-analoguefordifference equationshas been proposed.[6] In responding to Kisin's talk on this work at the 2009 Colloque Grothendieck,[7]Katz gave a brief account from personal knowledge of the genesis of the conjecture. Grothendieck put it forth in public discussion in Spring 1969, but wrote nothing on the topic. He was led to the idea by foundational intuitions in the area ofcrystalline cohomology, at that time being developed by his studentPierre Berthelot. 
In some way wishing to equate the notion of "nilpotence" in the theory of connections, with thedivided power structuretechnique that became standard in crystalline theory, Grothendieck produced the conjecture as a by-product.
https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Katz_p-curvature_conjecture
Asmudge attackis an information extraction attack thatdiscerns the passwordinput of atouchscreendevice such as a smartphone ortablet computerfrom fingerprint smudges. A team of researchers at theUniversity of Pennsylvaniawere the first to investigate this type of attack in 2010.[1][2]An attack occurs when an unauthorized user is in possession or is nearby the device of interest. The attacker relies on detecting the oily smudges produced and left behind by the user's fingers to find the pattern or code needed to access the device and its contents.[2]Simple cameras, lights,fingerprint powder, andimage processing softwarecan be used to capture the fingerprint deposits created when the user unlocks their device. Under proper lighting and camera settings, the finger smudges can be easily detected, and the heaviest smudges can be used to infer the most frequent input swipes or taps from the user.[1] Smudge attacks are particularly successful when performed on devices that offerpersonal identification numbers(PINs), text-based passwords, and pattern-based passwords as locking options.[3]There are various proposed countermeasures to mitigate attacks, such asbiometrics, TinyLock, and SmudgeSafe, all which are different authentication schemes.[4][5][6]Many of these methods provide ways to either cover up the smudges using a stroking method or implement randomized changes so previous logins are different from the current input. The smudge attack method against smartphone touch screens was first investigated by a team ofUniversity of Pennsylvaniaresearchers and reported at the 4thUSENIXWorkshop on Offensive Technologies. The team classified the attack as a physicalside-channel attackwhere the side-channel is launched from the interactions between a finger and the touchscreen. The research was widely covered in the technical press, including reports onPC Pro,ZDNet,[7]andEngadget.[8]The researchers used the smudges left behind on two Android smartphones and were able to break the password fully 68% of the time and partially 92% of the time under proper conditions.[1] Once the threat was recognized,Whisper Systemsintroduced an app in 2011 to mitigate the risk. The app provided their own versions of a pattern lock and PIN authentication that required users to complete certain tasks to cover up the smudges created during the authentication process. For the PIN verification option, the number options were vertically lined-up, and user were required to swipe downward over the smudged area. For the pattern lock, the app presented a 10x10 grid of stars the users had to swipe over and highlight before accessing the home screen.[9][10] Interpreting the smudges on the screen requires less equipment, and there is less experience needed to be an attacker. In combination with the negative ramifications for victims of an attack, there is a lot of concern in relation to this type of attack. The smudge attack approach could also be applied to other touchscreen devices besides mobile phones that require an unlocking procedure, such asautomatic teller machines (ATMs), home locking devices, and PIN entry systems in convenience stores. Those who use touchscreen devices or machines that contain or store personal information are at a risk of data breaches. 
The human tendency for minimal and easy-to-rememberPINsand patterns also lead toweak passwords, and passwords from weak password subspaces increase the ease at which attackers can decode the smudges.[11] Smudge attacks are particularly dangerous since fingerprint smudges can be hard to remove from touchscreens, and the persistence of these fingerprints increases the threat of an attack. The attack does not depend on finding perfect smudge prints, and it is still possible for attackers to figure out the password even after cleaning the screen with clothing or with overlapping fingerprints.[2]Chaet al.[12]in their paper, "Boosting the Guessing Attack Performance on Android Lock Patterns with Smudge Attacks," tested an attack method called smug that combined smudge attacks and pure guessing attacks. They found that even after the users were asked to use the Facebook app after unlocking the device, 31.94% of the phones were cracked and accessed.[12] Another danger of smudge attacks is that the basic equipment needed to perform this attack, a camera and lights, is easily obtainable. Fingerprint kits are also an accessible and additional, but not required, piece of equipment ranging from $30-$200. These kits increase the ease with which an attacker can successfully break into a phone in possession.[13] The team at the University of Pennsylvania identified and considered two types of attackers: passive and active. An active attacker is classified as someone who has the device in hand and is in control of the lighting setup and angles. These attackers can alter the touchscreen in a way to better identify the PIN or pattern code by cleaning or using fingerprint powder.[2]A typical setup from an active attacker could include a mounted camera, the phone placed on a surface, and a single light source. Slight variations in the setup include the type and size of the light source and the distance between the camera and the phone. A more experienced attacker would pay closer attention to the angle of the light and camera, the lighting source, and the type of camera and lens used to get the best picture, taking into account the shadows and highlights when the light reflects.[1] A passive attacker is an observer who does not have the device in hand and instead has to perform an eavesdropping-type attack.[2]This means they will wait for the right opportunity to collect the fingerprint images until they can get in possession of the gadget. The passive attacker does not have control of the lighting source, the angle, the position of the phone, and the condition of the touchscreen. They are dependent on the authorized user and their location to get a good quality picture to crack the security code later on.[1] There are different steps and techniques that attackers use to isolate the fingerprint smudges to determine the lock pattern or PIN. The attacker first has to identify the exact touch screen area, any relevant smudges within that area, and any possible combination or pattern segments.[12] In the cases where the fingerprints are not super visible to the eye, preprocessing is used to identify the most intact fingerprints determined by the number of ridge details they have. Selecting the fingerprints with the most ridge details differentiates between the user's fingerprints and those with whom the device is shared.[13]When pressing a finger down on the touch screen surface to create a fingerprint, the liquid from the edges of the ridges fill in the contact region. 
This fingerprint liquid is made up of substances from theepidermis, thesecretory glands, and extrinsic contaminants such as dirt or outside skin products. As the fingertip is lifted, the liquid also retracts, leaving behind the leftover traces.[14]Attackers are able to use fingerprint powder to dust over these oil smudges to unveil the visible fingerprint and their ridges. The powder can enhance thediffuse reflection, which reflects from rough surfaces and makes the dusted smudge more visible to the human eye. There are different powders to choose from based on the colors that best contrasts with the touchscreen and the environment. Examples of powders are aluminum, bronze, cupric oxide, iron, titanium dioxide, graphite, magnetic, and fluorescent powder. This dusting action also mimics the processes used in a crime scene investigation.[13] Preserving fingerprints utilizes a camera to capture multiple pictures of the fingerprint images or the keypad with different light variations. Generally,high-resolutioncameras and bright lights work the best for identifying smudges. The goal is to limit any reflections and isolate the clear fingerprints.[13] The visibility of the fingerprint relies on the light source, the reflection, and shadows. The touch screen and surface of a smart device can have different reflections that change how someone views the image of the fingerprint.[13] Fingerprint mapping uses the photographed smudge images to figure out what keys were used by laying the smudge images over the keypad or by comparing the image with a reference picture. Mapping the positions of smudges helps the attacker figure out which tapped keys were used by the authorized user. First, the fingerprints and keypad images are resized and processed to find the areas the corresponding fingerprints and keys occupy. Next, the Laplace edge detection algorithm is applied to detect the edges of the ridges of a finger, sharpen the overall fingerprint, and eliminate any of the background smudges. The photo is then converted into abinaryimage to create a contrast between the white fingerprints and the black background. Using this image with grid divisions also helps clarify where the user has tapped based on the locations with the largest number of white dots in each grid area.[13] In the case that there are multiple users, grouping fingerprints can help classify which ones belong to each person. Fingerprints have both ridges and valleys, and differentiating them is determined by the overall and local ridge structure. There are three patterns of fingerprint ridges–arch, loop, andwhorl– that represent the overall structure, and the ridge endings or bifurcation represent the local structure orminutiaepoints.[4]Different algorithms incorporate these fingerprint traits and structure to group the fingerprints and identify the differences. Some examples of algorithms used are Filterbank, adjacent orientation vector (AOV) system, and correlation-filter.[13] Smug is a specific attack method that combinesimage processingwith sorting patterns to figure out pattern-based passwords. First, the attackers take a picture of the smudge area using an appropriate camera and lighting. Using animage-matchingalgorithm, the captured image is then compared to a reference picture of the same device to properly extract a cropped picture focused on the smudges. Next, the smudge objects are identified using binary,Canny edge detection, and Hough transformation to enhance the visibility of the fingerprint locations. 
Possible segments between the swipes and points are detected with an algorithm to form the target pattern. The segments are then filtered to remove unwanted and isolated edges to only keep the edges that follow the segment direction. These segments are identified by figuring out if the smudge between two grid points is part of a pattern after comparing the number of smudge objects against the set threshold. Lastly, these segments are used in a password model to locate potential passwords (e.g.n-gramMarkov model). An experiment conducted found that this method was successful in unlocking 360 pattern codes 74.17% of the time when assisted by smudge attacks, an improvement from 13.33% for pure guessing attacks.[12][16] Smudge attacks can be performed on various smart device locking methods such as Android Patterns, PINs, and text-based passwords. All of these authentication methods require the user to tap the screen to input the correct combination, which leads to susceptibility to smudge attacks that look for these smudges.[17] PINsare not only susceptible to smudge attacks but other attacks possible through direct observation likeshoulder-surfingattacks or just pure guessing likebrute-force attacks. They are also used heavily inelectronic transactionsor for usingATMsand other banking situations. If a PIN is shared or stolen, the device or machine cannot detect whether the user is the rightful owner since it only relies on if the correct number is inputted. In relation to smudge attacks, this allows attackers to easily steal information since there is no other way to authenticate the user for who they actually are.[18] Touchscreen devices that use text-basedpasswordswill contain fingerprint smudges in the location of corresponding numbers or letters on the alphanumeric keypad. Attackers can use this to perform the smudge attack. The downfall to text-based passwords is not only its vulnerability to smudge attacks but also the tendency of users to forget the password. This causes many users to use something that is easy to remember or to reuse multiple passwords across different platforms. These passwords fall under what is called a weak password subspace within the full password space and makes it easier for attackers to break in throughbrute-forcedictionary attacks.[11]A 2017 study reviewed 3289 passwords, and 86% of them had some sort of structural similarity such as containing dictionary words and being short.[19] Draw-a-Secretis a graphical authentication scheme that requires the users to draw lines or points on a two-dimensional grid. A successful authentication depends on if the user can exactly replicate the path drawn. Android Pattern Password is a version of Pass-Go that follows the concept of DAS.[20][21] Pass-Go uses a grid so that there isn’t a need to store a graphical database and allows the user to draw a password as long as they want. Unlike DAS, the scheme relies on selecting the intersections on a grid instead of the cells on the screen, and users can also draw diagonal lines. 
Tao and Adam who proposed this method found that over their three month study, many people drew longer pattern passwords, which goes against the tendency to choose minimal and easy-to-remember passwords.[22] Android pattern lock is a graphical password method introduced by Google in 2008 where users create a pattern on a line-connecting 3x3 grid.[16]About 40% of Android users use pattern lock to secure their phones.[16]There are 389,112 possible patterns that the user can draw up.[23]Each pattern must contain at least 4 points on the grid, use each contact point once, and cannot skip intermediate points between points unless it's been used earlier.[21]Touchscreen devices that use Android pattern lock will leave behind swipes that give away the right location and combination an attacker needs to unlock the phone as an unauthorized user. The security of Android pattern lock against smudge attacks was tested by researchers at the University of Pennsylvania, and from the swipes left behind from the drawn pattern, they were able to discern the code fully 68% of the time and partially 92% of the time under proper conditions.[1] Physiological biometrics such as Android Face Unlock, iPhoneTouch IDandFace ID, and Trusted Voice have been recently implemented in mobile devices as the main or alternative method of validation. There are also other novel ways that have potential to be a future security scheme but haven't been implemented yet into mainstream usage.[24]Some of these ways avoid the requirement to input anything with their fingers and thus eliminating the ability for attackers to use smudges to determine the password lock. Although there are many countermeasures that help protect against smudge attacks, creatingsecure passwordscan be the first step to protecting a device. Some of the recommended steps are:[25] Although these are the recommended tips for stronger passwords, users can run out of strong password options they will remember and later forget the passcode after frequent changes. To avoid this, users tend to choose short, weaker passwords to make it more convenient and shorten the unlocking time.[26] Researchers have looked into anti-fingerprint properties that can allow people to keep their current password schemes and not worry about the leftover smudges. Surfaces that are able to repel the water and oils from the finger are called lipophobic. Surfaces that have lowsurface energyand surface transparency (low roughness) are typically anti-smudge due to their higher contact angles and lowmolecular attraction. Low molecular attraction means that there is little to no adhesion for the oil and water molecules to bind to the surface and leave behind a trace. However, achieving these properties while still functioning as a touchscreen is hard as the low surface energy alters the durability and functionality of the touchscreen itself.[14] With this research, various anti-smudge screen protectors have been put on the market such as Tech Armor's anti-glare and anti-fingerprint film screen protector and ZAGG's InvisibleShield Premium Film and Glass Elite (tempered glass) antimicrobial screen protectors. ZAGG markets its InvisibleShield as smudge resistant, glare resistant, and scratch proof.[27]These phone accessories can range from 30 to 60 dollars.[28] There have also been various smartphones on the market that have been pitched as having anoleophobiccoating, which resists oil to keep the touchscreen free from fingerprints. 
The oleophobic screen beads up any oil residuals, preventing them from sticking to the surface and making it easy to wipe finger residuals off without smearing.[29]In July 2016, Blackberry released theDTEK50smartphone with an oleophobic coating.[30][28]Other phone developers have used this for the touchscreens of their devices such as Apple's many generations of iPhones,[31][32]Nokia, andLumia. andHTC Hero.[33] Biometricsis a type ofauthenticationthat identifies a user based on their behavior or physical characteristics, such askeystrokes,gait, andfacial recognitionrather than what one can recall or memorize.[4]A biometrics system takes the unique features from the individual and records them as a biometric template, and the information is compared with the current captured input to authenticate a user.[34]Biometrics is categorized as either physiological or behavioral by the USNational Science and Technology Council’sSubcommittee (NSTC) on Biometrics.[35]This type of security can serve as a secondary protection to traditional password methods that are susceptible to smudge attacks on their own since it doesn't rely on entering a memorized number or pattern or recalling an image. Research conducted on biometric authentication found that a mix or hybrid of biometrics and traditional passwords or PINs can improve the security and usability of the original system.[36] One of the downsides to biometrics is mimicry attacks where the attackers mimic the user. This can increase the vulnerability of the device if attackers turn to methods that allow them to copy the victim’s behavior. Some of these methods include using a reality-based app that guide attackers when entering the victim’s phone or using transparent film with pointers and audio cues to mimic the victim’s behavior.[37]Another vulnerability is that the biometric template can be leaked or stolen through hacking or other various means to unauthorized people.[38][39]A possible solution to any theft, leak, or mimicry are fingerprint template protection schemes as they make it difficult for attackers to access the information throughencryptionand added techniques.[36][38] Physiological biometrics authenticates a user based on their human characteristics. Measuring the characteristics unique to each individual creates a stable and mostly consistent mechanism to authenticate a person since these features do not change very quickly. Some examples of physiological biometric authentication methods are listed below.[35] Behavioral biometrics authenticates a user based on the behavior, habits, and tendencies of the true user. Some examples includevoice recognition,gait, hand-waving, andkeystroke dynamics.[35]The schemes listed below have been proposed to specifically protect from smudge attacks. SmudgeSafe is another authentication method protected from smudge attacks that uses2-dimensionimage transformations to rotate, flip, or scale the image at the login screen page. The user will draw a graphical password shaper created from the points on an image as usual, but the image will look different every time the user logs in. The changes done on the image are randomized, so previous login smudges do not give hints to attackers on what the input is. To ensure that the transformations applied will significantly change the locations of the password points, the area of these specific locations on the image is restricted. 
In a study comparing SmudgeSafe's graphical authentication method to lock patterns and PINs, SmudgeSafe performed the best with a mean of 0.51 passwords guessed per participant. The pattern lock had a mean of 3.50 and PINs had a mean of 1.10 passwords correctly guessed per participant.[6] TinyLock was proposed by Kwon et al.[5]and uses two grids; the top one is for the pressed cells for the confirmation process, and the bottom one is a drawing pad for the authentication process.[5]The top grid is used to notify the user by flickering and vibrating if the user is on the correct initial dot before they start drawing. The bottom half of the screen contains a tiny 3 x 3 grid used for drawing the secret password. The grid is much smaller in size compared to traditional pattern locks, which forces the user to draw in a confined space to squeeze all the smudges in a small area. This method mitigates smudge attacks because the smudges are all smushed together, and the users are required to draw a circular virtual wheel in either direction after drawing the pattern password. However, this method is not completely free from shoulder-surfing attacks.[20]Also, another drawback is the grid dots are hard to visualize due to the small size, which makes it difficult to draw complex patterns and unlock without error.[16] ClickPattern uses a 3 x 3 grid labeled one through nine, and the user has to click on the nodes that correlate with the end of a drawn line to prevent swiping on the screen. Doing this creates smudges that are harder to distinguish from normal screen usage. If anything, the smudges created will reveal the nodes used but not the pattern, thus being more protected from smudge attacks than Android pattern lock. On the lock screen, ClickPattern consists of these three components:[42] The user is authenticated when the inputted pattern is the same as the original pattern and in the same exact order and direction. To create a valid pattern, the pattern must have at least 4 points and none of them can be used more than once. The pattern will also always contain dots in between a sequence, even though it does not necessarily need to be clicked. Users can also go through previously used dots to access an unused node.[42] This multi-touch authentication uses geometric and behavioral characteristics to verify users on a touch screen device. According to Songet al.,[43]this TFST gesture takes an average of 0.75 seconds to unlock, is very easy to use, and simple to follow. The user puts two to four fingers together in a straight position, decreasing the amount of surface compared to other multi-touch methods. With the fingers in this fixed hand posture, the user can choose to either trace a simple or complex pattern, and the screen will pick up the positions of the fingers and record each trace movement in the form of touch events. These touch events account for the X and Y-coordinates, the amount of pressure applied, the finger size, the timestamp, and the size of the touched area, and are compared to the template created during the registration process.[19]The physiological features orhand geometryinclude a measurement between possible strokes from the performed gesture. Horizontal strokes track the finger length differences, and vertical strokes track the finger width. Since the user always places their fingers in a straight position, the measurements of the finger will stay the same and provide consistent verification. 
Lastly, there are behavioral features that are traced, specifically the length of the stroke, the time it takes, the velocity of the stroke, the tool or the area for each touch point in relation to finger size, the touch area size, the pressure applied, and the angle of the stroke. For one stroke, there are 13 behavioral features, and this increases to 26, 39, and 52 for up to four strokes.[43] With new technology geared towards creating aflexible displayfor smartphone devices, there are more opportunities to create novel authentication methods. Bend passwords are an original type of password authentication used for flexible screens. It involves different bend gestures that the users perform by twisting or disfiguring the display surface, and there are a total of 20 gestures currently available. The bending can be a part of a single gesture by individually bending one of the four corners of the display or part of a multi-bend gesture by simultaneously bending pairs of corners.[44] A new proposed authentication method called Fractal-Based Authentication Technique (FBAT) usesSierpinski’s Triangleto authenticate users. This process combines recognition-based and cued recall-based authentication as the users have to recognize and click on their personal pre-selected color triangles as the level of triangles increases. For smartphones, the level of triangles is set at 3 due to the limited size of the touch screen, but it can increase for bigger tablets. At level 3, the probability that an attacker will guess the password is 0.13%. Recognition-based requires users to recognize pre-selected images and cued recall-based graphical requires users to click on pre-selected points on an image. In the Sierpinski triangle, a selected colored pattern is created during the registration and is hidden in the device. To authenticate themselves, a user must select the correct pattern in each level while the triangles randomly shuffle. Since the colored triangles are randomly generated, they can be found in different locations for every authentication, thus leaving smudges behind that do not give any clues to potential attackers. This technique can be used on Android devices, ATM machines, laptops, or any device that uses authentication to unlock.[25] Knock Codeis authentication method introduced byLG Electronicsthat allows users to unlock a phone without turning it on by tapping the correct area in the right sequence. The screen is split into four sections, with the vertical and horizontal lines changing.[45]There are two variations of Knock Code that have been proposed—the 2 x 2 and 1 x 2 knock code. These variations can protect against smudge attacks due to the sliding operations that erase the knocking at the end after the taps are inputted. In a user study that compared the original Knock Code and the Android Pattern Lock, these variation schemes were more resistance to smudge attacks.[20] There has been movement towards physiological biometric authentication in current smartphone security such as fingerprint and facial recognition that allow the user to replace their PINs and alphanumeric passcodes.[4]However, even new and advanced authentication methods have flaws and weaknesses that users can take advantage of. 
For example, in an examination of touch authentication, researchers observed similar swiping behavior and finger pressure in a large number of phone users, and this generic information can aid attackers in performing successful attacks.[39]Research on biometrics and multi-gesture authentication methods is continuing to help combat attacks on traditional passwords and eliminate the vulnerabilities of novel schemes as new trends and new technology are developed.[18]
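As a rough, hypothetical sketch of the fingerprint-mapping steps described earlier (edge sharpening, binarization, and counting bright pixels per grid cell), the following code runs the pipeline on a synthetic image; a real attack would of course start from a photograph of the screen, and the thresholding here is a deliberately crude stand-in for the published methods.

```python
import numpy as np

# Hedged sketch of fingerprint mapping: sharpen smudge ridges with a discrete
# Laplacian, binarize, and count bright pixels per keypad grid cell.
def laplacian(img):
    """4-neighbour discrete Laplacian of a 2-D float array."""
    padded = np.pad(img, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * img)

def grid_touch_scores(img, rows=3, cols=3):
    edges = np.abs(laplacian(img))
    binary = edges > edges.mean() + edges.std()      # crude binarization threshold
    h, w = binary.shape
    return [[int(binary[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols].sum())
             for c in range(cols)] for r in range(rows)]

# Synthetic 90x90 "screenshot" with a smudge blob in the centre cell.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.01, (90, 90))
image[35:55, 35:55] += 1.0                           # simulated smudge
for row in grid_touch_scores(image):
    print(row)                                        # the centre cell scores highest
```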
https://en.wikipedia.org/wiki/Smudge_attack
Inlinguistics, anonce word—also called anoccasionalism—is any word (lexeme), or any sequence ofsoundsorletters, created for a single occasion or utterance but not otherwise understood or recognized as a word in a given language.[1][2]Nonce words have a variety of functions and are most commonly used for humor, poetry, children's literature, linguistic experiments, psychological studies, and medical diagnoses, or they arise by accident. Some nonce words have a meaning at their inception or gradually acquire a fixed meaning inferred from context and use, but if they eventually become an established part of the language (neologisms), they stop being nonce words.[3]Other nonce words may be essentially meaningless and disposable (nonsense words), but they are useful for exactly that reason—the wordswugandblicketfor instance were invented by researchers to be used in child language testing.[4]Nonsense words often shareorthographicandphoneticsimilarity with (meaningful) words,[5]as is the case withpseudowords, which make no sense but can still be pronounced in accordance with a language'sphonotactic rules.[6]Such invented words are used by psychology and linguistics researchers and educators as tools to assess a learner's phonetic decoding ability, and the ability to infer the (hypothetical) meaning of a nonsense word from context is used to test forbrain damage.[7]Proper namesof real or fictional entities sometimes originate as nonce words. The term is used because such a word is created "for the nonce" (i.e., for the time being, or this once),[2]: 455coming fromJames Murray, editor of theOxford English Dictionary.[8]: 25Some analyses consider nonce words to fall broadly underneologisms, which are usually defined as words relatively recently accepted into a language's vocabulary;[9]other analyses do not.[3] A variety of more specific concepts used by scholars falls under the umbrella ofnonce words, of which overlap is also sometimes possible: Many types of other words can also be meaningful nonce words, as is true of mostsniglets(words, often stunt words, explicitly coined in the absence of any relevant dictionary word). Other types of misinterpretations or humorous re-wordings can also be nonce words, as may occur inword play, such as certain examples ofpuns,spoonerisms,malapropisms, etc. Furthermore, meaningless nonce words can occur unintentionally or spontaneously, for instance througherrors(typographicalor otherwise) or throughkeysmashes. Nonce words are sometimes used to study thedevelopment of languagein children, because they allow researchers to test how children treat words of which they have no prior knowledge. This permits inferences about the default assumptions children make about new word meanings, syntactic structure, etc. "Wug" is among the earliest known nonce words used in language learning studies, and is best known for its use inJean Berko's "Wug test", in which children were presented with a novel object, called a wug, and then shown multiple instances of the object and asked to complete a sentence that elicits a plural form—e.g., "This is a wug. Now there are two of them. There are two...?" The use of the plural form "wugs" by the children suggests that they have applied a plural rule to the form, and that this knowledge is not specific to prior experience with the word but applies to most English nouns, whether familiar or novel.[12] Nancy N. 
Soja,Susan Carey, andElizabeth Spelkeused "blicket", "stad", "mell", "coodle", "doff", "tannin", "fitch", and "tulver" as nonce words when testing to see if children's knowledge of the distinction between non-solid substances and solid objects preceded or followed their knowledge of the distinction betweenmass nounsandcount nouns.[13] A poem bySeamus Heaneytitled "Nonce Words" is included in his collectionDistrict and Circle.[14]David Crystalreportedfluddle, which he understood to mean a water spillage between a puddle and a flood, invented by the speaker because no suitable word existed. Crystal speculated in 1995 that it might enter the English language if it proved popular.[2]Boubaandkikiare used to demonstrate a connection between the sound of a word and its meaning.Grok, coined byRobert HeinleininStranger in a Strange Land, is now used by many to mean "deeply and intuitively understand".[15]The poem "Jabberwocky" is full of nonce words, of which two,chortleandgalumph, have entered into common use.[15]The novelFinnegans Wakeusedquark("three quarks for Muster Mark") as a nonce word; the physicistMurray Gell-Mannadopted it as the name of asubatomic particle.[16]
https://en.wikipedia.org/wiki/Nonce_word
A random seed (or seed state, or just seed) is a number (or vector) used to initialize a pseudorandom number generator.

A pseudorandom number generator's number sequence is completely determined by the seed: thus, if a pseudorandom number generator is later reinitialized with the same seed, it will produce the same sequence of numbers.

A seed used in a pseudorandom number generator does not itself need to be random. Because of the nature of number-generating algorithms, so long as the original seed is ignored, the rest of the values that the algorithm generates will follow a probability distribution in a pseudorandom manner.

The choice of a good random seed is crucial in the field of computer security. When a secret encryption key is pseudorandomly generated, having the seed will allow one to obtain the key. High entropy is important for selecting good random seed data.[1]

Random seeds need to be chosen carefully in order to ensure random number generation. If a chosen seed does not produce sufficiently random-looking results, the numbers given by the PRNG (pseudorandom number generator) will not work properly in an application that needs them. Charting the output values of a PRNG with a scatter plot is a good way to find out whether the seed is working: if the graph looks like static, the PRNG is giving random-looking results, but if a pattern appears, the seed needs to be fixed.[2][3]

If the same random seed is deliberately shared, it becomes a secret key, so two or more systems using matching pseudorandom number algorithms and matching seeds can generate matching sequences of non-repeating numbers, which can be used to synchronize remote systems, such as GPS satellites and receivers.[3]

Random seeds are often generated from the state of the computer system (such as the time), a cryptographically secure pseudorandom number generator or a hardware random number generator.

This theoretical computer science–related article is a stub. You can help Wikipedia by expanding it.
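The defining property of a seed, that reinitializing a generator with the same value reproduces the identical sequence, can be demonstrated in a few lines with Python's standard library generator:

```python
import random

# Same seed -> same pseudorandom sequence; different seed -> different sequence.
def sequence(seed, length=5):
    rng = random.Random(seed)               # a generator initialized from `seed`
    return [rng.randint(0, 99) for _ in range(length)]

print(sequence(42))   # some fixed list of numbers
print(sequence(42))   # exactly the same list as the line above
print(sequence(43))   # a different seed yields an unrelated list
```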
https://en.wikipedia.org/wiki/Random_seed
Chaffing and winnowingis acryptographictechnique to achieveconfidentialitywithout usingencryptionwhen sending data over aninsecure channel. The name is derived from agriculture: after grain has been harvested andthreshed, it remains mixed together with inedible fibrouschaff. The chaff and grain are then separated bywinnowing, and the chaff is discarded. The cryptographic technique was conceived byRon Rivestand published in an on-line article on 18 March 1998.[1]Although it bears similarities to both traditional encryption andsteganography, it cannot be classified under either category. This technique allows the sender to deny responsibility for encrypting their message. When using chaffing and winnowing, the sender transmits the message unencrypted, in clear text. Although the sender and the receiver share a secret key, they use it only forauthentication. However, a third party can make their communication confidential by simultaneously sending specially crafted messages through the same channel. The sender (Alice) wants to send a message to the receiver (Bob). In the simplest setup, Alice enumerates the symbols in her message and sends out each in a separatepacket. If the symbols are complex enough, such as natural language text, an attacker may be able to distinguish the real symbols from poorly faked chaff symbols, posing a similar problem as steganography in needing to generate highly realistic fakes; to avoid this, the symbols can be reduced to just single 0/1 bits, and realistic fakes can then be simply randomly generated 50:50 and are indistinguishable from real symbols. In general the method requires each symbol to arrive in-order and to be authenticated by the receiver. When implemented over networks that may change the order of packets, the sender places the symbol's serial number in the packet, the symbol itself (both unencrypted), and amessage authentication code(MAC). Many MACs use asecret keyAlice shares with Bob, but it is sufficient that the receiver has a method to authenticate the packets. Rivest notes an interesting property of chaffing-and-winnowing is that third parties (such as an ISP) can opportunistically add it to communications without needing permission or coordination with the sender/recipient. A third-party (dubbed"Charles") who transmits Alice's packets to Bob, interleaves the packets with corresponding bogus packets (called "chaff") with corresponding serial numbers, arbitrary symbols, and a random number in place of the MAC. Charles does not need to know the key to do that (real MACs are large enough that it is extremely unlikely to generate a valid one by chance, unlike in the example). Bob uses the MAC to find the authentic messages and drops the "chaff" messages. This process is called "winnowing". An eavesdropper located between Alice and Charles can easily read Alice's message. But an eavesdropper between Charles and Bob would have to tell which packets are bogus and which are real (i.e. to winnow, or "separate the wheat from the chaff"). That is infeasible if the MAC used is secure and Charles does not leak any information on packet authenticity (e.g. via timing). If a fourth party joins the example (namedDarth) who wants to send counterfeit messages to impersonate Alice, it would require Alice to disclose her secret key. If Darth cannot force Alice to disclose an authentication key (the knowledge of which would enable him to forge messages from Alice), then her messages will remain confidential. 
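The bit-level variant described above can be sketched as follows; the packet layout and helper names are illustrative rather than taken from Rivest's paper, and HMAC-SHA256 stands in for whatever MAC the sender and receiver actually share:

    import hmac, hashlib, os

    KEY = b"shared authentication key"   # known to Alice and Bob only

    def mac(serial, bit):
        return hmac.new(KEY, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

    def send(bits):
        """Alice: one authenticated packet (serial, bit, MAC) per plaintext bit."""
        return [(i, b, mac(i, b)) for i, b in enumerate(bits)]

    def chaff(packets):
        """Charles: for every real packet, add the opposite bit with a random 'MAC'."""
        bogus = [(i, 1 - b, os.urandom(32)) for i, b, _ in packets]
        return packets + bogus

    def winnow(packets):
        """Bob: keep only packets whose MAC verifies, then reassemble in serial order."""
        real = {i: b for i, b, tag in packets if hmac.compare_digest(tag, mac(i, b))}
        return [real[i] for i in sorted(real)]

    message = [1, 0, 1, 1, 0]
    assert winnow(chaff(send(message))) == message

An eavesdropper between Charles and Bob sees, for every serial number, both a 0 and a 1 with equally plausible-looking tags, and so learns nothing about the message without the authentication key.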
Charles, on the other hand, is no target of Darth's at all, since Charles does not even possess any secret keys that could be disclosed. The simple variant of the chaffing and winnowing technique described above adds many bits of overhead per bit of original message. To make the transmission more efficient, Alice can process her message with anall-or-nothing transformand then send it out in much larger chunks. The chaff packets will have to be modified accordingly. Because the original message can be reconstructed only by knowing all of its chunks, Charles needs to send only enough chaff packets to make finding the correct combination of packets computationally infeasible. Chaffing and winnowing lends itself especially well to use inpacket-switched networkenvironments such as theInternet, where each message (whose payload is typically small) is sent in a separate network packet. In another variant of the technique, Charles carefully interleaves packets coming from multiple senders. That eliminates the need for Charles to generate and inject bogus packets in the communication. However, the text of Alice's message cannot be well protected from other parties who are communicating via Charles at the same time. This variant also helps protect againstinformation leakageandtraffic analysis.[citation needed] Ron Rivest suggests that laws related to cryptography, including export controls, would not apply tochaffing and winnowingbecause it does not employ any encryption at all.[1] The power to authenticate is in many cases the power to control, and handing all authentication power to the government is beyond all reason The author of the paper proposes that the security implications of handing everyone's authentication keys to the government for law-enforcement purposes would be far too risky, since possession of the key would enable someone to masquerade and communicate as another entity, such as an airline controller. Furthermore, Ron Rivest contemplates the possibility of rogue law enforcement officials framing up innocent parties by introducing the chaff into their communications, concluding that drafting a law restrictingchaffing and winnowingwould be far too difficult.[1] The termwinnowingwas suggested by Ronald Rivest's father. Before the publication of Rivest's paper in 1998 other people brought to his attention a 1965 novel,Rex Stout'sThe Doorbell Rang, which describes the same concept and was thus included in the paper's references.[1]
https://en.wikipedia.org/wiki/Chaffing_and_winnowing
Incryptography,ciphertext stealing(CTS) is a general method of using ablock cipher mode of operationthat allows for processing of messages that are not evenly divisible into blocks without resulting in any expansion of theciphertext, at the cost of slightly increased complexity. Ciphertext stealing is a technique for encryptingplaintextusing a block cipher, withoutpaddingthe message to a multiple of the block size, so the ciphertext is the same size as the plaintext. It does this by altering processing of the last two blocks of the message. The processing of all but the last two blocks is unchanged, but a portion of thesecond-to-last block's ciphertext is "stolen" to pad the last plaintext block. The padded final block is then encrypted as usual. The final ciphertext, for the last two blocks, consists of the partial penultimate block (with the "stolen" portion omitted) plus the full final block, which are the same size as the original plaintext. Decryption requires decrypting the final block first, then restoring the stolen ciphertext to the penultimate block, which can then be decrypted as usual. In principle any block-orientedblock cipher mode of operationcan be used, but stream-cipher-like modes can already be applied to messages of arbitrary length without padding, so they do not benefit from this technique. The commonmodes of operationthat are coupled with ciphertext stealing areElectronic Codebook(ECB) andCipher Block Chaining(CBC). Ciphertext stealing for ECB mode requires the plaintext to be longer than oneblock. A possibleworkaroundis to use a stream cipher-likeblock cipher mode of operationwhen the plaintext length is oneblockor less, such as the CTR, CFB or OFB modes. Ciphertext stealing forCBCmode doesn't necessarily require the plaintext to be longer than oneblock. In the case where the plaintext is one block long or less, theInitialization vector(IV) can act as the prior block of ciphertext. In this case a modified IV must be sent to the receiver. This may not be possible in situations where the IV can not be freely chosen by the sender when the ciphertext is sent (e.g., when the IV is a derived or pre-established value), and in this case ciphertext stealing for CBC mode can only occur in plaintexts longer than one block. To implement CTS encryption or decryption for data of unknown length, the implementation must delay processing (and buffer) the two most recent blocks of data, so that they can be properly processed at the end of the data stream. There are several different ways to arrange the ciphertext for transmission. The ciphertext bits are the same in all cases, just transmitted in a different order, so the choice has no security implications; it is purely one of implementation convenience. The numbering here is taken from Dworkin, who describes them all. The third is the most popular, and described byDaemenandSchneier; Meyer describes a related, but incompatible scheme (with respect to bit ordering and key use). Arguably the most obvious way to arrange the ciphertext is to transmit the truncated penultimate block, followed by the full final block. This is not convenient for the receiver for two reasons: This does have the advantage that, if the final plaintext block happens to be a multiple of the block size, the ciphertext is identical to that of the original mode of operation without ciphertext stealing. It is often more convenient to swap the final two ciphertext blocks, so the ciphertext ends with the full final block, followed by the truncated penultimate block. 
This results in naturally aligned ciphertext blocks. In order to maintain compatibility with the non-stealing modes, option CS2 performs this swap only if the amount of stolen ciphertext is non-zero, i.e. the original message was not a multiple of the block size. This maintains natural alignment, and compatibility with the non-stealing modes, but requires treating the cases of aligned and unaligned message size differently. The most popular alternative swaps the final two ciphertext blocks unconditionally. This is the ordering used in the descriptions below. In order to encrypt or decrypt data, use the standardblock cipher mode of operationon all but the last two blocks of data. The following steps describe how to handle the last two blocks of the plaintext, calledPn−1andPn, where the length ofPn−1equals the block size of the cipher in bits,B; the length of the last block,Pn, isMbits; andKis the key that is in use.Mcan range from 1 toB, inclusive, soPncould possibly be a complete block. The CBC mode description also makes use of the ciphertext block just previous to the blocks concerned,Cn−2, which may in fact be the IV if the plaintext fits within two blocks. For this description, the following functions and operators are used: Ciphertext stealing in ECB mode introduces an inter-block dependency within the last two blocks, resulting in altered error propagation behavior for the last two blocks. A bit error in the transmission ofCn−1would result in the block-wide corruption of bothPn−1andPn. A bit error in the transmission ofCnwould result in the block-wide corruption ofPn−1. This is a significant change from ECB's error propagation behavior. In CBC, there is already interaction between processing of different adjacent blocks, so CTS has less conceptual impact in this mode. Error propagation is affected. For CBC ciphertext stealing, there is a clever (but opaque) method of implementing the described ciphertext stealing process using a standard CBC interface. Using this method imposes a performance penalty in the decryption stage of one extra block decryption operation over what would be necessary using a dedicated implementation. A bit error in the transmission ofCn−1would result in the block-wide corruption of bothPn−1andPn. A bit error in the transmission ofCnwould result in a corresponding bit error inPn, and in the block-wide corruption ofPn−1.
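A sketch of CBC encryption with ciphertext stealing in the CS3 (unconditional swap) ordering is given below. It assumes the third-party pyca/cryptography package for the raw AES block operation; the helper names and the zero-padding detail are written for illustration rather than copied from a standard, and decryption simply reverses the steps described above.

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16  # AES block size in bytes

    def _aes(key, block):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_cs3_encrypt(key, iv, plaintext):
        """CBC with ciphertext stealing, CS3 ordering; requires len(plaintext) > BLOCK."""
        n_full, m = divmod(len(plaintext), BLOCK)
        if m == 0:                                   # message ends on a block boundary
            n_full, m = n_full - 1, BLOCK
        head = plaintext[:(n_full - 1) * BLOCK]
        p_penult = plaintext[(n_full - 1) * BLOCK:n_full * BLOCK]
        p_last = plaintext[n_full * BLOCK:n_full * BLOCK + m]

        out, prev = [], iv
        for i in range(0, len(head), BLOCK):         # ordinary CBC for the leading blocks
            prev = _aes(key, _xor(head[i:i + BLOCK], prev))
            out.append(prev)

        e = _aes(key, _xor(p_penult, prev))          # full encryption of the penultimate block
        c_short = e[:m]                              # truncated block: the "stolen" tail is omitted
        c_full = _aes(key, _xor(p_last.ljust(BLOCK, b"\x00"), e))
        return b"".join(out) + c_full + c_short      # CS3: full final block, then the short one

The output is exactly as long as the plaintext: the bits "stolen" from the penultimate block travel inside the re-encrypted final block rather than being transmitted separately.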
https://en.wikipedia.org/wiki/Ciphertext_stealing
Incryptography, apadded uniform random bloborPURBis a discipline for encrypted data formats designed to minimize unintended information leakage either from its encryption format metadata or from its total length.[1] When properly created, a PURB's content is indistinguishable from auniform randombit string to any observer without a relevant decryption key. A PURB therefore leaksnoinformation through headers or other cleartext metadata associated with the encrypted data format. This leakage minimization "hygiene" practice contrasts with traditional encrypted data formats such asPretty Good Privacy, which include cleartext metadata encoding information such as the application that created the data, the data format version, the number of recipients the data is encrypted for, the identities or public keys of the recipients, and the ciphers or suites that were used to encrypt the data. While such encryption metadata was considered non-sensitive when these encrypted formats were designed, modern attack techniques have found numerous ways to employ such incidentally-leaked metadata in facilitating attacks, such as by identifying data encrypted with weak ciphers or obsolete algorithms, fingerprinting applications to track users or identify software versions with known vulnerabilities, ortraffic analysistechniques such as identifying all users, groups, and associated public keys involved in a conversation from an encrypted message observed between only two of them. In addition, a PURB ispaddedto a constrained set of possible lengths, in order to minimize the amount ofinformationthe encrypted data could potentially leak to observers via its total length. Without padding, encrypted objects such as files or bit strings up toM{\displaystyle M}bits in length can leak up toO(log⁡M){\displaystyle O(\log M)}bits of information to an observer - namely the number of bits required to represent the length exactly. A PURB is padded to a length representable in afloating point numberwhose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This constraint limits the maximum amount of information a PURB's total length can leak toO(log⁡log⁡M){\displaystyle O(\log \log M)}bits, a significant asymptotic reduction and the best achievable in general for variable-length encrypted formats whose multiplicative overhead is limited to a constant factor of the unpadded payload size. This asymptotic leakage is the same as one would obtain by padding encrypted objects to a power of some base, such as to a power of two. Allowing some significant mantissa bits in the length's representation rather than just an exponent, however, significantly reduces theoverheadof padding. For example, padding to the next power of two can impose up to 100% overhead by nearly doubling the object's size, while a PURB's padding imposes overhead of at most 12% for small strings and decreasing gradually (to 6%, 3%, etc.) as objects get larger. 
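The length calculation sketched below follows the Padmé padding function from the PURB paper as commonly described (allow only lengths whose binary representation has a mantissa no longer than its exponent); treat it as an illustrative reconstruction rather than a normative definition:

    def padme_length(length):
        """Smallest allowed PURB length >= length (Padmé-style padding)."""
        if length <= 0:
            return length
        e = length.bit_length() - 1        # floor(log2(length)): the exponent
        s = e.bit_length()                 # number of significant bits allowed in the mantissa
        low_bits = max(e - s, 0)           # low-order bits that padding must zero out
        mask = (1 << low_bits) - 1
        return (length + mask) & ~mask

    for n in (1, 100, 1000, 10_000, 1_000_000):
        padded = padme_length(n)
        print(n, padded, f"overhead {100 * (padded - n) / n:.1f}%")

Running the loop shows the overhead shrinking as objects grow (for example 100 pads to 104, 1000 to 1024, and one million to 1,015,808), consistent with the at-most-12%, asymptotically decreasing overhead described above.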
Experimental evidence indicates that on data sets comprising objects such as files, software packages, and online videos, leaving objects unpadded or padding to a constant block size often leaves them uniquely identifiable by total length alone.[2][3][1] Padding objects to a power of two or to a PURB length, in contrast, ensures that most objects are indistinguishable from at least some other objects and thus have a nontrivial anonymity set.[1] Because a PURB is a discipline for designing encrypted formats and not a particular encrypted format, there is no single prescribed method for encoding or decoding PURBs. Applications may use any encryption and encoding scheme provided it produces a bit string that appears uniformly random to an observer without an appropriate key, provided of course that the appropriate hardness assumptions are satisfied, and provided the PURB is padded to one of the allowed lengths. Correctly-encoded PURBs therefore do not identify the application that created them in their ciphertext. A decoding application, therefore, cannot readily tell before decryption whether a PURB was encrypted for that application or its user, other than by trying to decrypt it with any available decryption keys. Encoding and decoding a PURB presents technical efficiency challenges, in that traditional parsing techniques are not applicable because a PURB by definition has no metadata markers that a traditional parser could use to discern the PURB's structure before decrypting it. Instead, a PURB must first be decrypted obliviously to its internal structure, and then parsed only after the decoder has used an appropriate decryption key to find a suitable cryptographic entrypoint into the PURB. Encoding and decoding PURBs intended to be decrypted by several different recipients, public keys, and/or ciphers presents the additional technical challenge that each recipient must find a different entrypoint at a distinct location in the PURB, non-overlapping with those of the other recipients, yet the PURB presents no cleartext metadata indicating the positions of those entrypoints or even their total number. The paper that proposed PURBs[1] also included algorithms for encrypting objects to multiple recipients using multiple cipher suites. With these algorithms, recipients can find their respective entrypoints into the PURB with only a logarithmic number of trial decryptions using symmetric-key cryptography and only one expensive public-key operation per cipher suite. A third technical challenge is representing the public-key cryptographic material that needs to be encoded into each entrypoint in a PURB, such as the ephemeral Diffie-Hellman public key a recipient needs to derive the shared secret, in an encoding indistinguishable from uniformly random bits. Because the standard encodings of elliptic-curve points are readily distinguishable from random bits, for example, special indistinguishable encoding algorithms must be used for this purpose, such as Elligator[4] and its successors.[5][6]
This privacy advantage can translate into a security benefit for data encrypted with weak or obsolete ciphers, or by software with known vulnerabilities that an attacker might exploit based on trivially-observable information gleaned from cleartext metadata. A primary disadvantage of the PURB encryption discipline is the complexity of encoding and decoding, because the decoder cannot rely on conventional parsing techniques before decryption. A secondary disadvantage is the overhead that padding adds, although the padding scheme proposed for PURBs incurs at most only a few percent overhead for objects of significant size. The Padmé padding proposed in the PURB paper only creates files of specific, very distinct sizes. Thus, an encrypted file may often be identified as PURB-encrypted with high confidence, as the probability of any other file having exactly one of those padded sizes is very low. Another padding problem occurs with very short messages, where the padding does not effectively hide the size of the content. One critique of incurring the complexity and overhead costs of PURB encryption is that the context in which a PURB is stored or transmitted may often leak metadata about the encrypted content anyway, and such metadata is outside of the encryption format's purview or control and thus cannot be addressed by the encryption format alone. For example, an application's or user's choice of filename and directory in which to store a PURB on disk may allow an observer to infer the application that likely created it and to what purpose, even if the PURB's data content itself does not. Similarly, encrypting an E-mail's body as a PURB instead of with the traditional PGP or S/MIME format may eliminate the encryption format's metadata leakage, but cannot prevent information leakage from the cleartext E-mail headers, or from the endpoint hosts and E-mail servers involved in the exchange. Nevertheless, separate but complementary disciplines are typically available to limit such contextual metadata leakage, such as appropriate file naming conventions or use of pseudonymous E-mail addresses for sensitive communications.
https://en.wikipedia.org/wiki/PURB_(cryptography)
In cryptography, Russian copulation is a method of rearranging plaintext before encryption so as to conceal stereotyped headers, salutations, introductions, endings, signatures, etc. This obscures clues for a cryptanalyst, and can be used to increase cryptanalytic difficulty in naive cryptographic schemes (however, most modern schemes contain more rigorous defences; see ciphertext indistinguishability). This is of course desirable for those sending messages and wishing them to remain confidential. Padding is another technique for obscuring such clues. The technique is to break the starting plaintext message into two parts and then to invert the order of the parts (similar to a circular shift). This puts all endings and beginnings (presumably the location of most boilerplate phrases) "somewhere in the middle" of the version of the plaintext that is actually encrypted. For some messages, mostly those not in a human language (e.g., images or tabular data), the decrypted version of the plaintext will present problems when reversing the inversion. For messages expressed in ordinary language, there is sufficient redundancy that the inversion can almost always be reversed by a human immediately on inspection.[1] The English phrase suggests that it originally came from an observation about Russian cryptographic practice.[citation needed] However, the technique is generally useful and neither was, nor is, limited to use by Russians.[2] This cryptography-related article is a stub. You can help Wikipedia by expanding it.
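A toy sketch of the rearrangement itself is shown below. In practice the recipient of a natural-language message reverses the cut by inspection; returning the cut position here is only a convenience for the non-language case, and the helper names are illustrative:

    import secrets

    def russian_copulation(plaintext):
        """Split at a random point and swap the two parts; also return the cut position."""
        cut = secrets.randbelow(len(plaintext))
        return plaintext[cut:] + plaintext[:cut], cut

    def undo(rearranged, cut):
        split = len(rearranged) - cut
        return rearranged[split:] + rearranged[:split]

    msg = b"TO GENERAL STAFF STOP attack at dawn STOP END OF MESSAGE"
    scrambled, cut = russian_copulation(msg)
    assert undo(scrambled, cut) == msg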
https://en.wikipedia.org/wiki/Russian_copulation
Incryptography,format-preserving encryption(FPE), refers to encrypting in such a way that the output (theciphertext) is in the same format as the input (theplaintext). The meaning of "format" varies. Typically only finite sets of characters are used; numeric, alphabetic or alphanumeric. For example: For such finite domains, and for the purposes of the discussion below, the cipher is equivalent to a permutation ofNintegers{0, ... ,N−1} whereNis the size of the domain. One motivation for using FPE comes from the problems associated with integrating encryption into existing applications, with well-defined data models. A typical example would be acredit card number, such as1234567812345670(16 bytes long, digits only). Adding encryption to such applications might be challenging if data models are to be changed, as it usually involves changing field length limits or data types. For example, output from a typicalblock cipherwould turn credit card number into ahexadecimal(e.g.0x96a45cbcf9c2a9425cde9e274948cb67, 34 bytes, hexadecimal digits) orBase64value (e.g.lqRcvPnCqUJc3p4nSUjLZw==, 24 bytes, alphanumeric and special characters), which will break any existing applications expecting the credit card number to be a 16-digit number. Apart from simple formatting problems, using AES-128-CBC, this credit card number might get encrypted to the hexadecimal value0xde015724b081ea7003de4593d792fd8b695b39e095c98f3a220ff43522a2df02. In addition to the problems caused by creating invalid characters and increasing the size of the data, data encrypted using the CBC mode of an encryption algorithm also changes its value when it is decrypted and encrypted again. This happens because therandom seed valuethat is used to initialize the encryption algorithm and is included as part of the encrypted value is different for each encryption operation. Because of this, it is impossible to use data that has been encrypted with the CBC mode as aunique keyto identify a row in a database. FPE attempts to simplify the transition process by preserving the formatting and length of the original data, allowing a drop-in replacement of plaintext values with their ciphertexts in legacy applications. Although a truly random permutation is the ideal FPE cipher, for large domains it is infeasible to pre-generate and remember a truly random permutation. So the problem of FPE is to generate a pseudorandom permutation from a secret key, in such a way that the computation time for a single value is small (ideally constant, but most importantly smaller thanO(N)). Ann-bit block cipher technicallyisa FPE on the set{0, ..., 2n-1}. If an FPE is needed on one of these standard sized sets (for example,n= 64 forDESandn= 128 for AES) a block cipher of the right size can be used. However, in typical usage, a block cipher is used in amode of operationthat allows it to encrypt arbitrarily long messages, and with aninitialization vectoras discussed above. In this mode, a block cipher is not an FPE. In cryptographic literature (see most of the references below), the measure of a "good" FPE is whether an attacker can distinguish the FPE from a truly random permutation. Various types of attackers are postulated, depending on whether they have access to oracles or known ciphertext/plaintext pairs. In most of the approaches listed here, a well-understoodblock cipher(such asAES) is used as a primitive to take the place of an ideal random function. This has the advantage that incorporation of a secret key into the algorithm is easy. 
Where AES is mentioned in the following discussion, any other good block cipher would work as well. Implementing FPE with security provably related to that of the underlying block cipher was first undertaken in a paper by cryptographers John Black and Phillip Rogaway,[1] which described three ways to do this. They proved that each of these techniques is as secure as the block cipher that is used to construct it. This means that if the AES algorithm is used to create an FPE algorithm, then the resulting FPE algorithm is as secure as AES because an adversary capable of defeating the FPE algorithm can also defeat the AES algorithm. Therefore, if AES is secure, then the FPE algorithms constructed from it are also secure. In all of the following, E denotes the AES encryption operation that is used to construct an FPE algorithm and F denotes the FPE encryption operation. One simple way to create an FPE algorithm on {0, ..., N−1} is to assign a pseudorandom weight to each integer, then sort by weight. The weights are defined by applying an existing block cipher to each integer. Black and Rogaway call this technique a "prefix cipher" and showed it was provably as good as the block cipher used. Thus, to create an FPE on the domain {0,1,2,3}, given a key K, apply AES(K) to each integer to obtain a pseudorandom weight for it. Sorting [0,1,2,3] by weight gives, for example, [3,1,2,0], so the cipher is F(0) = 3, F(1) = 1, F(2) = 2, F(3) = 0. This method is only useful for small values of N. For larger values, the size of the lookup table and the required number of encryptions to initialize the table gets too big to be practical. If there is a set M of allowed values within the domain of a pseudorandom permutation P (for example P can be a block cipher like AES), an FPE algorithm can be created from the block cipher by repeatedly applying the block cipher until the result is one of the allowed values (within M). The recursion is guaranteed to terminate. (Because P is one-to-one and the domain is finite, repeated application of P forms a cycle, so starting with a point in M the cycle will eventually terminate in M.) This has the advantage that the elements of M do not have to be mapped to a consecutive sequence {0, ..., N−1} of integers. It has the disadvantage, when M is much smaller than P's domain, that too many iterations might be required for each operation. If P is a block cipher of a fixed size, such as AES, this is a severe restriction on the sizes of M for which this method is efficient. For example, an application may want to encrypt 100-bit values with AES in a way that creates another 100-bit value. With this technique, AES-128-ECB encryption can be applied until it reaches a value which has all of its 28 highest bits set to 0, which will take an average of 2^28 iterations to happen. It is also possible to make an FPE algorithm using a Feistel network. A Feistel network needs a source of pseudo-random values for the sub-keys for each round, and the output of the AES algorithm can be used as these pseudo-random values. When this is done, the resulting Feistel construction is good if enough rounds are used.[2] One way to implement an FPE algorithm using AES and a Feistel network is to use as many bits of AES output as are needed to equal the length of the left or right halves of the Feistel network. If a 24-bit value is needed as a sub-key, for example, it is possible to use the lowest 24 bits of the output of AES for this value.
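The following sketch combines the two ideas above: a small Feistel permutation on a domain slightly larger than the target, plus cycle walking to land back inside the target domain. The 16-bit permutation built from SHA-256 is a toy stand-in for a real block cipher such as AES and is not secure; all names here are illustrative:

    import hashlib

    BITS = 16                      # toy 16-bit block permutation (domain 0..65535)
    HALF = BITS // 2
    MASK = (1 << HALF) - 1

    def _round(key, r, half):
        # PRF for one Feistel round, derived from SHA-256 (illustrative only).
        h = hashlib.sha256(key + bytes([r]) + half.to_bytes(2, "big")).digest()
        return h[0] & MASK

    def permute(key, x, inverse=False):
        left, right = x >> HALF, x & MASK
        rounds = range(3, -1, -1) if inverse else range(4)
        for r in rounds:
            if inverse:
                left, right = right ^ _round(key, r, left), left
            else:
                left, right = right, left ^ _round(key, r, right)
        return (left << HALF) | right

    def fpe_encrypt(key, x, n=10_000):
        # Cycle walking: re-apply the permutation until the result lands back in [0, n).
        y = permute(key, x)
        while y >= n:
            y = permute(key, y)
        return y

    def fpe_decrypt(key, y, n=10_000):
        x = permute(key, y, inverse=True)
        while x >= n:
            x = permute(key, x, inverse=True)
        return x

    key = b"demo key"
    pin = 4071
    assert fpe_decrypt(key, fpe_encrypt(key, pin)) == pin   # a 4-digit PIN maps to a 4-digit PIN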
This may not result in the output of the Feistel network preserving the format of the input, but it is possible to iterate the Feistel network in the same way that the cycle-walking technique does to ensure that format can be preserved. Because it is possible to adjust the size of the inputs to a Feistel network, it is possible to make it very likely that this iteration ends very quickly on average. In the case of credit card numbers, for example, there are 10^15 possible 16-digit credit card numbers (accounting for the redundant check digit), and because 10^15 ≈ 2^49.8, using a 50-bit wide Feistel network along with cycle walking will create an FPE algorithm that encrypts fairly quickly on average. A Thorp shuffle is like an idealized card shuffle, or equivalently a maximally-unbalanced Feistel cipher where one side is a single bit. It is easier to prove security for unbalanced Feistel ciphers than for balanced ones.[3] For domain sizes that are a power of two, and an existing block cipher with a smaller block size, a new cipher may be created using VIL mode as described by Bellare and Rogaway.[4] The Hasty Pudding Cipher uses custom constructions (not depending on existing block ciphers as primitives) to encrypt arbitrary finite small domains. The FFSEM mode of AES (specification[5]) that has been accepted for consideration by NIST uses the Feistel network construction of Black and Rogaway described above, with AES for the round function, with one slight modification: a single key is used and is tweaked slightly for each round. As of February 2010, FFSEM has been superseded by the FFX mode written by Mihir Bellare, Phillip Rogaway, and Terence Spies (specification,[6][7] NIST Block Cipher Modes Development, 2010). In the JPEG 2000 standard, the marker codes (in the range 0xFF90 through 0xFFFF) should not appear in the plaintext and ciphertext. The simple modular-0xFF90 technique cannot be applied to solve the JPEG 2000 encryption problem. For example, the ciphertext words 0x23FF and 0x9832 are valid, but their combination 0x23FF9832 becomes invalid since it introduces the marker code 0xFF98. Similarly, the simple cycle-walking technique cannot be applied to solve the JPEG 2000 encryption problem, since two valid ciphertext blocks may give invalid ciphertext when they get combined. For example, if the first ciphertext block ends with bytes "...30FF" and the second ciphertext block starts with bytes "9832...", then the marker code "0xFF98" would appear in the ciphertext. Two mechanisms for format-preserving encryption of JPEG 2000 were given in the paper "Efficient and Secure Encryption Schemes for JPEG2000"[8] by Hongjun Wu and Di Ma. To perform format-preserving encryption of JPEG 2000, the technique is to exclude the byte "0xFF" in the encryption and decryption. Then one JPEG 2000 encryption mechanism performs modulo-n addition with a stream cipher; another JPEG 2000 encryption mechanism performs the cycle-walking technique with a block cipher. Several FPE constructs are based on adding the output of a standard cipher, modulo n, to the data to be encrypted, with various methods of unbiasing the result. The modulo-n addition shared by many of the constructs is the immediately obvious solution to the FPE problem (thus its use in a number of cases), with the main differences being the unbiasing mechanisms used.
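As an illustration of the modulo-n addition idea, with rejection sampling as one simple unbiasing mechanism, the following sketch adds a hash-derived decimal keystream to each digit; the keystream stands in for the output of a real cipher, and the construction is not any of the published schemes named above:

    import hashlib

    def keystream_digits(key, tweak, count):
        """Unbiased decimal keystream: hash-derived bytes, rejecting values >= 250."""
        out, counter = [], 0
        while len(out) < count:
            block = hashlib.sha256(key + tweak + counter.to_bytes(4, "big")).digest()
            out.extend(b % 10 for b in block if b < 250)   # 250 = 25 * 10, so no modulo bias
            counter += 1
        return out[:count]

    def encrypt_digits(key, tweak, digits):
        ks = keystream_digits(key, tweak, len(digits))
        return "".join(str((int(d) + k) % 10) for d, k in zip(digits, ks))

    def decrypt_digits(key, tweak, digits):
        ks = keystream_digits(key, tweak, len(digits))
        return "".join(str((int(d) - k) % 10) for d, k in zip(digits, ks))

    ct = encrypt_digits(b"k", b"acct-42", "1234567812345670")
    assert decrypt_digits(b"k", b"acct-42", ct) == "1234567812345670"
    print(ct)   # same length, digits only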
Section 8 of theFIPS74,Federal Information Processing Standards Publication 1981 Guidelines for Implementing and Using the NBS Data Encryption Standard,[9]describes a way to use the DES encryption algorithm in a manner that preserves the format of the data via modulo-n addition followed by an unbiasing operation. This standard was withdrawn on May 19, 2005, so the technique should be considered obsolete in terms of being a formal standard. Another early mechanism for format-preserving encryption wasPeter Gutmann's "Encrypting data with a restricted range of values"[10]which again performs modulo-n addition on any cipher with some adjustments to make the result uniform, with the resulting encryption being as strong as the underlying encryption algorithm on which it is based. The paper "Using Datatype-Preserving Encryption to Enhance Data Warehouse Security"[11]by Michael Brightwell and Harry Smith describes a way to use theDESencryption algorithm in a way that preserves the format of the plaintext. This technique doesn't appear to apply an unbiasing step as do the other modulo-n techniques referenced here. The paper "Format-Preserving Encryption"[12]byMihir Bellareand Thomas Ristenpart describes using "nearly balanced" Feistel networks to create secure FPE algorithms. The paper "Format Controlling Encryption Using Datatype Preserving Encryption"[13]by Ulf Mattsson describes other ways to create FPE algorithms. An example of FPE algorithm is FNR (Flexible Naor and Reingold).[14] NIST Special Publication 800-38G, "Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption"[15]specifies two methods: FF1 and FF3. Details on the proposals submitted for each can be found at the NIST Block Cipher Modes Development site,[16]including patent and test vector information. Sample values are available for both FF1 and FF3.[17] Another mode was included in the draft NIST guidance but was removed before final publication. Korea has also developed a FPE standard, FEA-1 and FEA-2. Open Source implementations of FF1 and FF3 are publicly available inC language,Go language,Java,Node.js,Python,C#/.NetandRust
https://en.wikipedia.org/wiki/Format-Preserving_Encryption
In computational number theory and computational algebra, Pollard's kangaroo algorithm (also Pollard's lambda algorithm, see Naming below) is an algorithm for solving the discrete logarithm problem. The algorithm was introduced in 1978 by the number theorist John M. Pollard, in the same paper as his better-known Pollard's rho algorithm for solving the same problem.[1][2] Although Pollard described the application of his algorithm to the discrete logarithm problem in the multiplicative group of units modulo a prime p, it is in fact a generic discrete logarithm algorithm; it will work in any finite cyclic group. Suppose G is a finite cyclic group of order n which is generated by the element α, and we seek to find the discrete logarithm x of the element β to the base α. In other words, one seeks x ∈ Z_n such that α^x = β. The lambda algorithm allows one to search for x in some interval [a, ..., b] ⊂ Z_n. One may search the entire range of possible logarithms by setting a = 0 and b = n − 1. 1. Choose a set S of positive integers of mean roughly √(b − a) and define a pseudorandom map f : G → S. 2. Choose an integer N and compute a sequence of group elements {x_0, x_1, ..., x_N} according to x_0 = α^b and x_(i+1) = x_i · α^(f(x_i)) for i = 0, 1, ..., N − 1. 3. Compute d = f(x_0) + f(x_1) + ... + f(x_(N−1)), the total distance travelled by this "tame kangaroo". Observe that x_N = x_0 · α^d = α^(b + d). 4. Begin computing a second sequence of group elements {y_0, y_1, ...} according to y_0 = β and y_(i+1) = y_i · α^(f(y_i)), and a corresponding sequence of integers {d_0, d_1, ...} according to d_0 = 0 and d_(i+1) = d_i + f(y_i). Observe that y_i = y_0 · α^(d_i) = β · α^(d_i) = α^(x + d_i); this is the "wild kangaroo". 5. Stop computing terms of {y_i} and {d_i} when either of the following conditions is met: y_j = x_N for some j, in which case α^(x + d_j) = α^(b + d) and so x = b + d − d_j (mod n); or d_i > b − a + d, in which case the wild kangaroo has passed the tame kangaroo's trap without being caught, the run has failed, and the algorithm must be repeated with different parameters (for example a different choice of S or f). Pollard gives the time complexity of the algorithm as O(√(b − a)), using a probabilistic argument based on the assumption that f acts pseudorandomly. Since a, b can be represented using O(log b) bits, this is exponential in the problem size (though still a significant improvement over the trivial brute-force algorithm that takes time O(b − a)). For an example of a subexponential time discrete logarithm algorithm, see the index calculus algorithm. The algorithm is well known by two names. The first is "Pollard's kangaroo algorithm". This name is a reference to an analogy used in the paper presenting the algorithm, where the algorithm is explained in terms of using a tame kangaroo to trap a wild kangaroo. Pollard has explained[3] that this analogy was inspired by a "fascinating" article published in the same issue of Scientific American as an exposition of the RSA public key cryptosystem. The article[4] described an experiment in which a kangaroo's "energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill". The second is "Pollard's lambda algorithm". Much like the name of another of Pollard's discrete logarithm algorithms, Pollard's rho algorithm, this name refers to the similarity between a visualisation of the algorithm and the Greek letter lambda (λ). The shorter stroke of the letter lambda corresponds to the sequence {x_i}, since it starts from the position b to the right of x.
Accordingly, the longer stroke corresponds to the sequence{yi}{\displaystyle \{y_{i}\}}, which "collides with" the first sequence (just like the strokes of a lambda intersect) and then follows it subsequently. Pollard has expressed a preference for the name "kangaroo algorithm",[5]as this avoids confusion with some parallel versions of his rho algorithm, which have also been called "lambda algorithms".
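A compact sketch of the algorithm for the multiplicative group modulo a prime is given below. The jump-size map, the number of tame jumps, and the tiny prime are illustrative choices, and the run is probabilistic: it occasionally fails and must be retried with a different jump function.

    import math

    def kangaroo(p, alpha, beta, a, b):
        """Pollard's kangaroo in the group of units mod p: find x in [a, b] with
        pow(alpha, x, p) == beta, assuming alpha generates the whole group of order p - 1.
        Returns None if this run fails."""
        mean = max(math.isqrt(b - a), 1)
        f = lambda g: 1 + (g % (2 * mean))        # pseudorandom jump sizes, mean ~ sqrt(b - a)

        # Tame kangaroo: N jumps starting from alpha^b; it ends at alpha^(b + d).
        N, d = 4 * mean, 0
        x = pow(alpha, b, p)
        for _ in range(N):
            step = f(x)
            x, d = x * pow(alpha, step, p) % p, d + step

        # Wild kangaroo: starts from beta = alpha^x0 and hops until it lands on the trap.
        y, dy = beta, 0
        while dy <= b - a + d:
            if y == x:                            # collision: x0 + dy == b + d
                return (b + d - dy) % (p - 1)
            step = f(y)
            y, dy = y * pow(alpha, step, p) % p, dy + step
        return None

    p, alpha = 1019, 2                            # small illustrative prime; 2 generates the group
    secret = 681
    beta = pow(alpha, secret, p)
    print(kangaroo(p, alpha, beta, 0, p - 2))     # prints 681 on a successful run, else None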
https://en.wikipedia.org/wiki/Pollard%27s_kangaroo_algorithm
passwdis acommandonUnix,Plan 9,Inferno, and mostUnix-likeoperating systemsused to change a user'spassword. The password entered by the user is run through akey derivation functionto create ahashed versionof the new password, which is saved. Only the hashed version is stored; the entered password is not saved for security reasons. When the user logs on, the password entered by the user during the log on process is run through the same key derivation function and the resulting hashed version is compared with the saved version. If the hashes are identical, the entered password is considered to be correct, and the user is authenticated. In theory, it is possible for two different passwords toproduce the same hash. However,cryptographic hash functionsare designed in such a way that finding any password that produces the same hash is very difficult and practically infeasible, so if the produced hash matches the stored one, the user can be authenticated.[1] The passwd command may be used to change passwords for local accounts, and on most systems, can also be used to change passwords managed in a distributed authentication mechanism such asNIS,Kerberos, orLDAP. The/etc/passwdfile is a text-based database of information aboutusersthat maylog intothe system or other operating system user identities that own running processes. In many operating systems, this file is just one of many possible back-ends for the more generalpasswd name service. The file's name originates from one of its initial functions as it contained the data used to verifypasswordsof user accounts. However, on modernUnixsystems the security-sensitive password information is instead often stored in a different file using shadow passwords, or other database implementations. The/etc/passwdfile typically hasfile system permissionsthat allow it to be readable by all users of the system (world-readable), although it may only be modified by thesuperuseror by using a few special purpose privileged commands. The/etc/passwdfile is atext filewith one record perline, each describing auser account. Each record consists of seven fields separated bycolons. The ordering of the records within the file is generally unimportant. An example record may be: The fields, in order from left to right, are:[2] /etc/shadowis used to increase the security level of passwords by restricting all but highly privileged users' access to hashed password data. Typically, that data is kept in files owned by and accessible only by thesuper user.[5] Systems administrators can reduce the likelihood of brute-force attacks by making the list of hashed passwords unreadable by unprivileged users. The obvious way to do this is to make thepasswddatabase itself readable only by the root user. However, this would restrict access to other data in the file such as username-to-userid mappings, which would break many existing utilities and provisions. One solution is a "shadow" password file to hold the password hashes separate from the other data in the world-readablepasswdfile. For local files, this is usually/etc/shadowonLinuxand Unix systems, or/etc/master.passwdonBSDsystems; each is readable only byroot. (Root access to the data is considered acceptable since on systems with the traditional "all-powerful root" security model, the root user would be able to obtain the information in other ways in any case). Virtually all recentUnix-likeoperating systems use shadowed passwords. 
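The check-at-login flow described above can be sketched as follows. PBKDF2 stands in for whichever key derivation function a given system actually uses, and the sample passwd-style record (seven colon-separated fields: name, password field, UID, GID, comment, home directory, shell) is invented for illustration:

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        """Derive a password hash with PBKDF2-HMAC-SHA256; only the salt and hash are stored."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, stored):
        """Re-run the same KDF at login and compare against the stored hash."""
        _, candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)

    # A passwd-style record: the hypothetical user below has 'x' in the password field,
    # indicating that the real hash lives in the shadow file.
    record = "jsmith:x:1001:1000:Joe Smith:/home/jsmith:/bin/bash"
    name, pw_field, uid, gid, gecos, home, shell = record.split(":")
    print(name, uid, shell)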
The shadow password file does not entirely solve the problem of attacker access to hashed passwords, as some network authentication schemes operate by transmitting the hashed password over the network (sometimes incleartext, e.g.,Telnet[6]), making it vulnerable to interception. Copies of system data, such as system backups written to tape or optical media, can also become a means for illicitly obtaining hashed passwords. In addition, the functions used by legitimate password-checking programs need to be written in such a way that malicious programs cannot make rapid authentication checks. Regardless of whether password shadowing is in effect on a given system, the passwd file is readable by all users so that various system utilities (e.g.,grep) can work (e.g., to ensure that user names existing on the system can be found inside the file), while only the root user can write to it. Without password shadowing, this means that an attacker with unprivileged access to the system can obtain the hashed form of every user's password. Those values can be used to mount abrute force attackoffline, testing possible passwords against the hashed passwords relatively quickly without alerting system security arrangements designed to detect an abnormal number of failedloginattempts. Especially when the hash is not salted it is also possible to look up these hashed passwords inrainbow tables, databases specially made for giving back a password for a unique hash. With a shadowed password scheme in use, the/etc/passwdfile typically shows a character such as '*', or 'x' in the password field for each user instead of the hashed password, and/etc/shadowusually contains the following user information: The format of the shadow file is simple, and basically identical to that of the password file, to wit, one line per user, ordered fields on each line, and fields separated by colons. Many[quantify]systems require the order of user lines in the shadow file be identical to the order of the corresponding users in the password file. Prior to password shadowing, a Unix user's hashed password was stored in the second field of their record in the/etc/passwdfile (within the seven-field format as outlined above). Password shadowing first appeared in Unix systems with the development ofSunOSin the mid-1980s,[12]System VRelease 3.2 in 1988 andBSD4.3 Reno in 1990. But, vendors who had performed ports from earlier UNIX releases did not always include the new password shadowing features in their releases, leaving users of those systems exposed to password file attacks. System administrators may also arrange for the storage of passwords in distributed databases such asNISandLDAP, rather than in files on each connected system. In the case of NIS, the shadow password mechanism is often still used on the NIS servers; in other distributed mechanisms the problem of access to the various user authentication components is handled by the security mechanisms of the underlying data repository. In 1987, the author of the originalShadow Password Suite, Julie Haugh, experienced a computer break-in and wrote the initial release of the Shadow Suite containing thelogin,passwdandsucommands. The original release, written for the SCOXenixoperating system, quickly got ported to other platforms. The Shadow Suite was ported toLinuxin 1992 one year after the original announcement of the Linux project, and was included in many early distributions, and continues to be included in many currentLinuxdistributions. 
In the past, it was necessary to have different commands to change passwords in different authentication schemes. For example, the command to change a NIS password wasyppasswd. This required users to be aware of the different methods to change passwords for different systems, and also resulted in wasteful duplication of code in the various programs that performed the same functions with differentback ends. In most implementations, there is now a single passwd command, and the control of where the password is actually changed is handled transparently to the user viapluggable authentication modules(PAMs). For example, the type of hash used is dictated by the configuration of thepam_unix.somodule. By default, theMD5hash has been used, while current modules are also capable of stronger hashes such asblowfish,SHA256andSHA512.
https://en.wikipedia.org/wiki/Passwd
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including SHA-0, SHA-1, the SHA-2 family (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256) and the SHA-3 family. The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has updated Draft FIPS Publication 202, SHA-3 Standard, separate from the Secure Hash Standard (SHS). In the table below, internal state means the "internal hash sum" after each compression of a data block. All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
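For reference, most members of the family are exposed through Python's standard hashlib module; a small sketch of computing digests and their sizes:

    import hashlib

    message = b"The quick brown fox jumps over the lazy dog"
    for name in ("sha1", "sha256", "sha512", "sha3_256"):
        digest = hashlib.new(name, message).hexdigest()
        bits = hashlib.new(name).digest_size * 8
        print(f"{name:>8} ({bits} bits): {digest[:32]}...")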
https://en.wikipedia.org/wiki/Secure_Hash_Algorithms
Couchbase Server, originally known asMembase, is asource-available,[2]distributed (shared-nothing architecture)multi-modelNoSQLdocument-oriented databasesoftware package optimized for interactive applications. These applications may serve manyconcurrent usersby creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, Couchbase Server is designed to provide easy-to-scale key-value, or JSON document access, with low latency and high sustainability throughput. It is designed to beclusteredfrom a single machine to very large-scale deployments spanning many machines. Couchbase Server provided client protocol compatibility withmemcached,[3]but added diskpersistence,data replication, live cluster reconfiguration, rebalancing andmultitenancywithdata partitioning. Membase was developed by several leaders of thememcachedproject, who had founded a company, NorthScale, to develop akey-value storewith the simplicity, speed, and scalability of memcached, but also the storage, persistence and querying capabilities of a database. The original membase source code was contributed by NorthScale, and project co-sponsorsZyngaandNaver Corporation(then known as NHN) to a new project on membase.org in June 2010.[4] On February 8, 2011, the Membase project founders and Membase, Inc. announced a merger with CouchOne (a company with many of the principal players behindCouchDB) with an associated project merger. The merged company was calledCouchbase, Inc.In January 2012, Couchbase released Couchbase Server 1.8. In September of 2012,Orbitzsaid it had changed some of its systems to use Couchbase.[5]In December of 2012, Couchbase Server 2.0 (announced in July 2011) was released and included a newJSONdocument store, indexing and querying, incrementalMapReduceandreplicationacrossdata centers.[6][7] Every Couchbase node consists of a data service, index service, query service, and cluster manager component. Starting with the 4.0 release, the three services can be distributed to run on separate nodes of the cluster if needed. In the parlance of Eric Brewer'sCAP theorem, Couchbase is normally a CP type system meaning it providesconsistencyandpartition tolerance, or it can be set up as an AP system with multiple clusters. The cluster manager supervises the configuration and behavior of all the servers in a Couchbase cluster. It configures and supervises inter-node behavior like managing replication streams and re-balancing operations. It also provides metric aggregation and consensus functions for the cluster, and aRESTfulcluster management interface. The cluster manager uses theErlang programming languageand theOpen Telecom Platform. Data replicationwithin the nodes of a cluster can be controlled with several parameters. In December of 2012, support was added for replication between differentdata centers.[6] The data manager stores and retrieves documents in response to data operations from applications. It asynchronously writes data to disk after acknowledging to the client. In version 1.7 and later, applications can optionally ensure data is written to more than one server or to disk before acknowledging a write to the client. Parameters define item ages that affect when data is persisted, and how max memory and migration from main-memory to disk is handled. It supports working sets greater than a memory quota per "node" or "bucket". 
External systems can subscribe to filtered data streams, supporting, for example,full text searchindexing,data analyticsor archiving.[8] A document is the most basic unit of data manipulation in Couchbase Server. Documents are stored in JSON document format with no predefined schemas. Non-JSON documents can also be stored in Couchbase Server (binary, serialized values, XML, etc.) Couchbase Server includes a built-in multi-threaded object-managedcachethat implements memcached compatible APIs such as get, set, delete, append, prepend etc. Couchbase Server has a tail-append storage design that is immune to data corruption,OOM killersor sudden loss of power. Data is written to the data file in an append-only manner, which enables Couchbase to do mostly sequential writes for update, and provide an optimized access patterns for disk I/O. A performance benchmark done byAltorosin 2012, compared Couchbase Server with other technologies.[9]Cisco Systemspublished a benchmark that measured the latency and throughput of Couchbase Server with a mixed workload in 2012.[10] Couchbase Server is a packaged version of Couchbase'sopen source softwaretechnology and is available in a community edition without recent bug fixes with an Apache 2.0 license[11]and an edition for commercial use.[12]Couchbase Server builds are available for Ubuntu, Debian, Red Hat, SUSE, Oracle Linux,Microsoft Windowsand macOS operating systems. Couchbase has supported software developers' kits for the programming languages.NET,PHP,Ruby,Python,C,Node.js,Java,Go, andScala. Aquery languagecalled SQL++ (formerly called N1QL), is used for manipulating the JSON data in Couchbase, just like SQL manipulates data in RDBMS. It has SELECT, INSERT, UPDATE, DELETE, MERGE statements to operate on JSON data. It was initially announced in March 2015 as "SQL for documents".[13] The SQL++data modelisnon-first normal form(N1NF) with support for nested attributes and domain-orientednormalization. The SQL++ data model is also a proper superset and generalization of therelational model. Couchbase Mobile / Couchbase Lite is amobile databaseproviding data replication.[14] Couchbase Lite(originally TouchDB) provides native libraries for offline-first NoSQL databases with built-inpeer-to-peerorclient-serverreplication mechanisms.[15]Sync Gatewaymanages secure access and synchronization of data between Couchbase Lite and Couchbase Server.[16] Couchbase Lite added support forVector Searchin version 3.2,[17]allowing cloud to edge support for vector search in mobile applications. 
Couchbase began as an evolution ofMemcached, a high-speed data cache, and can be used as a drop-in replacement for Memcached, providing high availability for memcached application without code changes.[18] Couchbase is used to support applications where a flexible data model, easy scalability, and consistent high performance are required, such as tracking real-time user activity or providing a store of user preferences or online applications.[19] Couchbase Mobile, which stores data locally on devices (usually mobile devices) is used to create “offline-first” applications that can operate when a device is not connected to a network and synchronize with Couchbase Server once a network connection is re-established.[20] The Catalyst Lab atNorthwestern Universityuses Couchbase Mobile to support the Evo application, a healthy lifestyle research program where data is used to help participants improve dietary quality, physical activity, stress, or sleep.[21] Amadeususes Couchbase withApache Kafkato support their “open, simple, and agile” strategy to consume and integrate data on loyalty programs for airline and other travel partners. High scalability is needed when disruptive travel events create a need to recognize and compensate high value customers.[22] Starting in 2012, it played a role inLinkedIn's caching systems, includingbackendcachingfor recruiter and jobs products, counters for security defense mechanisms, for internal applications.[23] For caching, Couchbase competes withMemcachedandRedis. For document databases, Couchbase competes with otherdocument-oriented databasesystems. It is commonly compared withMongoDB,Amazon DynamoDB,Oracle RDBMS,DataStax,Google Bigtable,MariaDB,IBM Cloudant,Redis Enterprise,SingleStore, andMarkLogic.[24][25]
https://en.wikipedia.org/wiki/Couchbase_Server
Memcached(pronounced variously /mɛmkæʃˈdiː/mem-cash-deeor /ˈmɛmkæʃt/mem-cashed) is a general-purpose distributedmemory-cachingsystem. It is often used to speed up dynamicdatabase-driven websites by caching data andobjectsinRAMto reduce the number of times an external data source (such as a database or API) must be read. Memcached isfree and open-source software, licensed under theRevised BSD license.[2]Memcached runs onUnix-likeoperating systems (LinuxandmacOS) and onMicrosoft Windows. It depends on thelibeventlibrary. Memcached'sAPIsprovide a very largehash tabledistributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged inleast recently used(LRU) order.[3][4]Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database. Memcached has no internal mechanism to track misses which may happen. However, some third party utilities provide this functionality. Memcached was first developed byBrad Fitzpatrickfor his websiteLiveJournal, on May 22, 2003.[5][6]It was originally written inPerl, then later rewritten inCby Anatoly Vorobey, then employed by LiveJournal.[7]Memcached is now used by many other systems, includingYouTube,[8]Reddit,[9]Facebook,[10][11]Pinterest,[12][13]Twitter,[14]Wikipedia,[15]andMethod Studios.[16]Google App Engine,Google Cloud Platform,Microsoft Azure,IBM BluemixandAmazon Web Servicesalso offer a Memcached service through an API.[17][18][19][20] The system uses aclient–serverarchitecture. The servers maintain a key–valueassociative array; the clients populate this array and query it by key. Keys are up to 250 bytes long and values can be at most 1megabytein size. Clients use client-side libraries to contact the servers which, by default, expose their service atport11211. Both TCP and UDP are supported. Each client knows all servers; the servers do not communicate with each other. If a client wishes to set or read the value corresponding to a certain key, the client's library first computes ahashof the key to determine which server to use. This gives a simple form ofshardingand scalableshared-nothing architectureacross the servers. The server computes a second hash of the key to determine where to store or read the corresponding value. The servers keep the values in RAM (and, starting in 1.6.0, in auxiliary cache on disk using an external storage server option);[21]if a server runs out of available memory or disk, it discards the oldest values. Therefore, clients must treat Memcached as a transitory cache; they cannot assume that data stored in Memcached is still there when they need it. Other databases, such asMemcacheDB,Couchbase Server, provide persistent storage while maintaining Memcached protocol compatibility. If all client libraries use the same hashing algorithm to determine servers, then clients can read each other's cached data. A typical deployment has several servers and many clients. However, it is possible to use Memcached on a single computer, acting simultaneously as client and server. The size of its hash table is often very large. It is limited to available memory across all the servers in the cluster of servers in a data center. Where high-volume, wide-audience Web publishing requires it, this may stretch to many gigabytes. Memcached can be equally valuable for situations where either the number of requests for content is high, or the cost of generating a particular piece of content is high. 
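The two-level hashing described above (one hash on the client to pick a server, a second inside the server to place the value) can be sketched roughly as follows; the server list and the naive modulo scheme are illustrative, since real client libraries typically use their own, often consistent, hashing so that all clients agree on key placement:

    import hashlib

    SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]  # example addresses

    def pick_server(key):
        """First hash: every client maps a given key to the same server."""
        h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
        return SERVERS[h % len(SERVERS)]

    print(pick_server("userrow:1234"), pick_server("userrow:5678"))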
Applications with particularly high-demand caching needs can use a built-in proxy to define and configure complex client-server routes.[21] Most deployments of Memcached are within trusted networks where clients may freely connect to any server. However, sometimes Memcached is deployed in untrusted networks or where administrators want to exercise control over the clients that are connecting. For this purpose Memcached can be compiled with optionalSASLauthentication support. The SASL support requires the binary protocol. A presentation atBlackHat USA 2010revealed that a number of large public websites had left Memcached open to inspection, analysis, retrieval, and modification of data.[22] Even within a trusted organisation, the flat trust model of memcached may have security implications. For efficient simplicity, all Memcached operations are treated equally. Clients with a valid need for access to low-security entries within the cache gain access toallentries within the cache, even when these are higher-security and that client has no justifiable need for them. If the cache key can be either predicted, guessed or found by exhaustive searching, its cache entry may be retrieved. Some attempt to isolate setting and reading data may be made in situations such as high volume web publishing. A farm of outward-facing content servers havereadaccess to memcached containing published pages or page components, but no write access. Where new content is published (and is not yet in memcached), a request is instead sent to content generation servers that are not publicly accessible to create the content unit and add it to memcached. The content server then retries to retrieve it and serve it outwards. In February 2018,CloudFlarereported that misconfigured memcached servers were used to launchDDoS attacksin large scale.[23]The memcached protocol over UDP has a hugeamplification factor, of more than 51000.[24]Victims of the DDoS attacks includeGitHub, which was flooded with 1.35 Tbit/s peak incoming traffic.[25] This issue was mitigated in Memcached version 1.5.6, which disabled UDP protocol by default.[26] Note that all functions described on this page arepseudocodeonly. Memcached calls and programming languages may vary based on the API used. Converting database or object creation queries to use Memcached is simple. Typically, when using straight database queries, example code would be as follows: After conversion to Memcached, the same call might look like the following The client would first check whether a Memcached value with the unique key "userrow:userid" exists, where userid is some number. If the result does not exist, it would select from the database as usual, and set the unique key using the Memcached API add function call. However, if only this API call were modified, the server would end up fetching incorrect data following any database update actions: the Memcached "view" of the data would become out of date. Therefore, in addition to creating an "add" call, an update call would also be needed using the Memcached set function. This call would update the currently cached data to match the new data in the database, assuming the database query succeeds. An alternative approach would be to invalidate the cache with the Memcached delete function, so that subsequent fetches result in a cache miss. Similar action would need to be taken when database records were deleted, to maintain either a correct or incomplete cache. 
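A hedged Python sketch of the conversion just described: check Memcached first, fall back to the database on a miss, and keep the cache consistent on writes by either overwriting or invalidating the entry. The `cache` and `db` objects and their methods (`get`, `set`, `delete`, `query_user`, `update_user`) are hypothetical stand-ins rather than the API of any specific client library, although real clients expose similar get/set/delete calls.

```python
def get_user(db, cache, user_id, ttl=600):
    """Read-through caching: try Memcached first, fall back to the database."""
    key = f"userrow:{user_id}"
    row = cache.get(key)
    if row is None:                        # cache miss
        row = db.query_user(user_id)       # hypothetical database helper
        cache.set(key, row, expire=ttl)
    return row

def update_user(db, cache, user_id, fields):
    """Keep the cache consistent on writes: either update or invalidate."""
    db.update_user(user_id, fields)        # hypothetical database helper
    key = f"userrow:{user_id}"
    # Option 1: overwrite the cached copy with the new data.
    cache.set(key, db.query_user(user_id))
    # Option 2 (alternative): invalidate, so the next read repopulates it.
    # cache.delete(key)
```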
An alternate cache-invalidation strategy is to store a random number in an agreed-upon cache entry and to incorporate this number into all keys that are used to store a particular kind of entry. To invalidate all such entries at once, change the random number. Existing entries (which were stored using the old number) will no longer be referenced and so will eventually expire or be recycled.
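A minimal Python sketch of this namespace-versioning strategy; the helper names are hypothetical and the `cache` object again stands in for any Memcached client exposing get/set.

```python
import random

def namespace_version(cache, namespace):
    """Fetch the per-namespace version number, creating it on first use."""
    version = cache.get(f"ns:{namespace}")
    if version is None:
        version = random.getrandbits(32)
        cache.set(f"ns:{namespace}", version)
    return version

def namespaced_key(cache, namespace, key):
    """Incorporate the version into every key of this kind of entry."""
    return f"{namespace}:{namespace_version(cache, namespace)}:{key}"

def invalidate_namespace(cache, namespace):
    """Change the version: existing entries are orphaned and eventually expire via LRU."""
    cache.set(f"ns:{namespace}", random.getrandbits(32))
```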
https://en.wikipedia.org/wiki/Memcached
Incryptographyandcomputer science, ahash treeorMerkle treeis atreein which every "leaf"nodeis labelled with thecryptographic hashof a data block, and every node that is not a leaf (called abranch,inner node, orinode) is labelled with the cryptographic hash of the labels of its child nodes. A hash tree allows efficient and secure verification of the contents of a largedata structure. A hash tree is a generalization of ahash listand ahash chain. Demonstrating that a leaf node is a part of a given binary hash tree requires computing a number of hashes proportional to thelogarithmof the number of leaf nodes in the tree.[1]Conversely, in a hash list, the number is proportional to the number of leaf nodes itself. A Merkle tree is therefore an efficient example of acryptographic commitment scheme, in which the root of the tree is seen as a commitment and leaf nodes may be revealed and proven to be part of the original commitment.[2] The concept of a hash tree is named afterRalph Merkle, who patented it in 1979.[3][4] Hash trees can be used to verify any kind of data stored, handled and transferred in and between computers. They can help ensure that datablocksreceived from other peers in apeer-to-peer networkare received undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks. Hash trees are used in: Suggestions have been made to use hash trees intrusted computingsystems.[11] The initial Bitcoin implementation of Merkle trees bySatoshi Nakamotoapplies the compression step of the hash function to an excessive degree, which is mitigated by using Fast Merkle Trees.[12] A hash tree is atreeofhashesin which the leaves (i.e., leaf nodes, sometimes also called "leafs") are hashes of datablocksin, for instance, a file or set of files. Nodes farther up in the tree are the hashes of their respective children. For example, in the above picturehash 0is the result of hashing theconcatenationofhash 0-0andhash 0-1. That is,hash 0=hash(hash 0-0+hash 0-1) where "+" denotes concatenation. Most hash tree implementations are binary (two child nodes under each node) but they can just as well use many more child nodes under each node. Usually, acryptographic hash functionsuch asSHA-2is used for the hashing. If the hash tree only needs to protect against unintentional damage, unsecuredchecksumssuch asCRCscan be used. In the top of a hash tree there is atop hash(orroot hashormaster hash). Before downloading a file on aP2P network, in most cases the top hash is acquired from a trusted source, for instance a friend or a web site that is known to have good recommendations of files to download. When the top hash is available, the hash tree can be received from any non-trusted source, like any peer in the P2P network. Then, the received hash tree is checked against the trusted top hash, and if the hash tree is damaged or fake, another hash tree from another source will be tried until the program finds one that matches the top hash.[13] The main difference from ahash listis that one branch of the hash tree can be downloaded at a time and the integrity of each branch can be checked immediately, even though the whole tree is not available yet. For example, in the picture, the integrity ofdata block L2can be verified immediately if the tree already containshash 0-0andhash 1by hashing the data block and iteratively combining the result withhash 0-0and thenhash 1and finally comparing the result with thetop hash. 
Similarly, the integrity ofdata block L3can be verified if the tree already hashash 1-1andhash 0. This can be an advantage since it is efficient to split files up in very small data blocks so that only small blocks have to be re-downloaded if they get damaged. If the hashed file is big, such a hash list or hash chain becomes fairly big. But if it is a tree, one small branch can be downloaded quickly, the integrity of the branch can be checked, and then the downloading of data blocks can start.[citation needed] The Merkle hash root does not indicate the tree depth, enabling asecond-preimage attackin which an attacker creates a document other than the original that has the same Merkle hash root. For the example above, an attacker can create a new document containing two data blocks, where the first ishash 0-0+hash 0-1, and the second ishash 1-0+hash 1-1.[14][15] One simple fix is defined inCertificate Transparency: when computing leaf node hashes, a 0x00 byte is prepended to the hash data, while 0x01 is prepended when computing internal node hashes.[13]Limiting the hash tree size is a prerequisite of someformal security proofs, and helps in making some proofs tighter. Some implementations limit the tree depth using hash tree depth prefixes before hashes, so any extracted hash chain is defined to be valid only if the prefix decreases at each step and is still positive when the leaf is reached. The Tiger tree hash is a widely used form of hash tree. It uses a binary hash tree (two child nodes under each node), usually has a data block size of 1024bytesand uses theTiger hash.[16] Tiger tree hashes are used inGnutella,[17]Gnutella2, andDirect ConnectP2Pfile sharing protocols[18]and infile sharingapplications such asPhex,[19]BearShare,LimeWire,Shareaza,DC++[20]andgtk-gnutella.[21]
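A minimal Python sketch of a binary hash tree using SHA-256, including the 0x00/0x01 leaf and inner-node prefixes described above, together with a proof check corresponding to the article's example of verifying data block L2 against the top hash. It assumes the number of blocks is a power of two and is illustrative rather than a specific implementation such as the Tiger tree hash.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(block: bytes) -> bytes:
    return h(b"\x00" + block)            # 0x00 prefix for leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)     # 0x01 prefix for inner nodes

def merkle_root(blocks):
    """Build the tree bottom-up; assumes len(blocks) is a power of two."""
    level = [leaf_hash(b) for b in blocks]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(block, proof, root):
    """Check a block against the trusted top hash.

    `proof` lists (sibling_hash, sibling_is_on_the_left) pairs from leaf to root;
    for L2 in the article's example this is hash 0-0 followed by hash 1.
    """
    acc = leaf_hash(block)
    for sibling, is_left in proof:
        acc = node_hash(sibling, acc) if is_left else node_hash(acc, sibling)
    return acc == root

blocks = [b"L1", b"L2", b"L3", b"L4"]
root = merkle_root(blocks)
proof_for_L2 = [(leaf_hash(b"L1"), True),                                  # hash 0-0
                (node_hash(leaf_hash(b"L3"), leaf_hash(b"L4")), False)]    # hash 1
assert verify(b"L2", proof_for_L2, root)
```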
https://en.wikipedia.org/wiki/Merkle_tree
Adistributed data storeis acomputer networkwhere information is stored on more than onenode, often in areplicatedfashion.[1]It is usually specifically used to refer to either adistributed databasewhere users store information on anumber of nodes, or acomputer networkin which users store information on anumber of peer network nodes.[2] Distributed databasesare usuallynon-relational databasesthat enable a quick access to data over a large number of nodes. Some distributed databases expose rich query abilities while others are limited to akey-value storesemantics. Examples of limited distributed databases areGoogle'sBigtable, which is much more than adistributed file systemor apeer-to-peer network,[3]Amazon'sDynamo[4]andMicrosoft Azure Storage.[5] As the ability of arbitrary querying is not as important as theavailability, designers of distributed data stores have increased the latter at an expense of consistency. But the high-speed read/write access results in reduced consistency, as it is not possible to guarantee bothconsistencyand availability on a partitioned network, as stated by theCAP theorem. In peer network data stores, the user can usually reciprocate and allow other users to use their computer as a storage node as well. Information may or may not be accessible to other users depending on the design of the network. Mostpeer-to-peernetworks do not have distributed data stores in that the user's data is only available when their node is on the network. However, this distinction is somewhat blurred in a system such asBitTorrent, where it is possible for the originating node to go offline but the content to continue to be served. Still, this is only the case for individual files requested by the redistributors, as contrasted with networks such asFreenet,Winny,ShareandPerfect Darkwhere any node may be storing any part of the files on the network. Distributed data stores typically use anerror detection and correctiontechnique. Some distributed data stores (such asParchiveover NNTP) useforward error correctiontechniques to recover the original file when parts of that file are damaged or unavailable. Others try again to download that file from a different mirror.
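As a hedged illustration of the forward-error-correction idea mentioned above (far simpler than schemes such as Parchive), a single XOR parity block stored alongside the data allows any one lost block to be reconstructed from the survivors; the block contents here are arbitrary examples.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_blocks)            # stored alongside the data

# If block 1 is lost or damaged, XOR-ing the surviving blocks with the parity
# reconstructs it, because every other byte cancels out in pairs.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
```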
https://en.wikipedia.org/wiki/Distributed_data_store
Skip graphs are a kind of distributed data structure based on skip lists. They were invented in 2003 by James Aspnes and Gauri Shah. A nearly identical data structure called SkipNet was independently invented by Nicholas Harvey, Michael Jones, Stefan Saroiu, Marvin Theimer and Alec Wolman, also in 2003.[1] Skip graphs have the full functionality of a balanced tree in a distributed system. Skip graphs are mostly used in searching peer-to-peer networks. As they provide the ability to query by key ordering, they improve over search tools based on the hash table functionality only. In contrast to skip lists and other tree data structures, they are very resilient and can tolerate a large fraction of node failures. In addition, constructing, inserting, searching, and repairing a skip graph that was disturbed by failing nodes can be done by straightforward algorithms.[2] A skip graph is a distributed data structure based on skip lists designed to resemble a balanced search tree. Skip graphs are one of several methods to implement a distributed hash table, which are used to locate resources stored in different locations across a network, given the name (or key) of the resource. Skip graphs offer several benefits over other distributed hash table schemes such as Chord (peer-to-peer) and Tapestry (DHT), including addition and deletion in expected logarithmic time, logarithmic space per resource to store indexing information, no required knowledge of the number of nodes in a set, and support for complex range queries. A major distinction from Chord and Tapestry is that there is no hashing of the search keys of resources, which allows related resources to be near each other in the skip graph; this property makes searches for values within a given range feasible. Another strength of skip graphs is their resilience to node failure in both random and adversarial failure models. As with skip lists, nodes are arranged in increasing order in multiple levels; each node in level i is contained in level i+1 with some probability p (an adjustable parameter). Level 0 consists of one doubly linked list containing all of the nodes in the set. Lists become increasingly sparse at higher levels, until the list at the top level is composed of just one node. Where skip graphs differ from skip lists is that each level i ≥ 1 contains multiple lists; membership of a key x in a list is defined by the membership vector m(x){\displaystyle m(x)}. The membership vector is defined as an infinite random word over a fixed alphabet; each list in the skip graph is identified by a finite word w from the same alphabet, and a node x is a member of the list identified by w if and only if w is a prefix of m(x){\displaystyle m(x)}.[2] Skip graphs support the basic operations of search, insert and delete. Skip graphs also support the more complex range search operation. The search algorithm for skip graphs is almost identical to the search algorithm for skip lists, but it is modified to run in a distributed system. Searches start at the top level and traverse through the structure. At each level, the search traverses the list until the next node contains a greater key. When a greater key is found, the search drops to the next level, continuing until the key is found or it is determined that the key is not contained in the set of nodes. If the key is not contained in the set of nodes, the largest value less than the search key is returned.
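A minimal, centralized Python sketch of the search descent just described: start at the sparsest level, move right while the next key does not exceed the search key, drop a level, and finally return either the key itself or the largest key smaller than it. In a real skip graph each step is a message to a neighbouring node; the in-memory sorted lists and the `bisect`-based resume used here are simplifying assumptions.

```python
import bisect

def skip_search(levels, key):
    """Search a skip-list-style structure given as sorted key lists per level.

    levels[0] holds every key; higher levels are sparser.  Returns the key if
    present, otherwise the largest key smaller than it, or None if none exists.
    """
    current = None                          # largest key found so far that is <= key
    for level in reversed(levels):          # start at the top (sparsest) level
        # resume from the current position and move right while the next key
        # does not exceed the search key
        i = 0 if current is None else bisect.bisect_left(level, current)
        while i < len(level) and level[i] <= key:
            current = level[i]
            i += 1
        if current == key:
            return current                  # found
    return current                          # largest key smaller than `key`, or None

levels = [
    [3, 7, 12, 19, 25, 31, 44],   # level 0: list of all keys
    [7, 19, 31],                  # level 1
    [19],                         # level 2
]
print(skip_search(levels, 25))    # 25 (found)
print(skip_search(levels, 26))    # 25 (largest key smaller than 26)
```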
Each node in a list has the following fields: Analysis performed by William Pugh shows that on average, a skip list and by extension a skip graph containsO(log⁡nlog⁡(1/p)){\displaystyle O\left({\frac {\log n}{\log(1/p)}}\right)}levels for a fixed value ofp.[3]Given that at most11−p{\displaystyle {\frac {1}{1-p}}}nodes are searched on average per level, the total expected number of messages sent isO(log⁡n(1−p)log⁡(1−p)){\displaystyle O\left({\frac {\log n}{(1-p)\log(1-p)}}\right)}and the expected time for the search isO(log⁡n(1−p)log⁡(1−p)){\displaystyle O\left({\frac {\log n}{(1-p)\log(1-p)}}\right)}.[2]Therefore, for a fixed value ofp, the search operation is expected to takeO(logn)time usingO(logn)messages.[2] Insertion is done in two phases and requires that a new nodeuknows some introducing nodev; the introducing node may be any other node currently in the skip graph. In the first phase the new nodeuuses the introducing nodevto search for its own key; this search is expected to fail and return the nodeswith the largest key smaller thanu. In the second phaseuinserts itself in each level until it is the only element in a list at the top level.[2]Insertion at each level is performed using standard doubly linked list operations; the left neighbor's next pointer is changed to point to the new node and the right neighbor's previous pointer is changed to point to the node. Similar to the search, the insert operation takes expectedO(logn) messages andO(logn) time. With a fixed value of p; the search operation in phase 1 is expected to takeO(logn) time and messages. In phase 2 at each levelL≥ 0,ucommunicates with an average 1/pother nodes to locates', this will requireO(1/p) time and messages leading toO(1) time and messages for each step in phase 2.[2] Nodes may be deleted in parallel at each level inO(1) time andO(logn) messages.[2]When a node wishes to leave the graph it must send messages to its immediate neighbors to rearrange their next and previous pointers.[2] The skip graph contains an average ofO(logn) levels; at each levelumust send 2 messages to complete a delete operation on a doubly linked list. As operations on each level may be done in parallel the delete operation may be finished usingO(1) time and expectedO(logn) messages. In skip graphs, fault tolerance describes the number of nodes which can be disconnected from the skip graph by failures of other nodes.[2]Two failure models have been examined; random failures and adversarial failures. In the random failure model any node may fail independently from any other node with some probability. The adversarial model assumes node failures are planned such that the worst possible failure is achieved at each step, the entire skip graph structure is known and failures are chosen to maximize node disconnection. A drawback of skip graphs is that there is norepair mechanism; currently the only way to remove and to repair a skip graph is to build a new skip graph with surviving nodes. Skip graphs are highly resistant to random failures. By maintaining information on the state of neighbors and using redundant links to avoid failed neighbors, normal operations can continue even with a large number of node failures. 
While the number of failed nodes is less than O(1log⁡n){\displaystyle O\left({\frac {1}{\log n}}\right)}, the skip graph can continue to function normally.[2] Simulations performed by James Aspnes show that a skip graph with 131072 nodes was able to tolerate up to 60% of its nodes failing before surviving nodes were isolated.[2] While other distributed data structures may be able to achieve higher levels of resiliency, they tend to be much more complex. Adversarial failure is difficult to simulate in a large network, as it becomes difficult to find worst-case failure patterns.[2] Theoretical analysis shows that the resilience depends on the vertex expansion ratio of the graph, defined as follows: for a set of nodes A in the graph G, the expansion factor is the number of nodes not in A but adjacent to a node in A, divided by the number of nodes in A. If skip graphs have a sufficiently large expansion ratio of Ω(1log⁡n){\displaystyle \Omega \left({\frac {1}{\log n}}\right)}, then at most O(flog⁡n){\displaystyle O(f\log n)} nodes may be separated, even if up to f failures are specifically targeted.[2]
https://en.wikipedia.org/wiki/Skip_graph
In mathematics,discrepancy theorydescribes the deviation of a situation from the state one would like it to be in. It is also called thetheory of irregularities of distribution. This refers to the theme ofclassicaldiscrepancy theory, namely distributing points in some space such that they are evenly distributed with respect to some (mostly geometrically defined) subsets. The discrepancy (irregularity) measures how far a given distribution deviates from an ideal one. Discrepancy theory can be described as the study of inevitable irregularities of distributions, inmeasure-theoreticandcombinatorialsettings. Just asRamsey theoryelucidates the impossibility of total disorder, discrepancy theory studies the deviations from total uniformity. A significant event in the history of discrepancy theory was the 1916 paper ofWeylon the uniform distribution of sequences in the unit interval.[1] Discrepancy theory is based on the following classic theorems: The unsolved problems relating to discrepancy theory include: Applications for discrepancy theory include:
https://en.wikipedia.org/wiki/Discrepancy_theory
Instatistics,Markov chain Monte Carlo(MCMC) is a class ofalgorithmsused to draw samples from aprobability distribution. Given a probability distribution, one can construct aMarkov chainwhose elements' distribution approximates it – that is, the Markov chain'sequilibrium distributionmatches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too highlydimensionalto study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including theMetropolis–Hastings algorithm. Markov chain Monte Carlo methods create samples from a continuousrandom variable, withprobability densityproportional to a known function. These samples can be used to evaluate an integral over that variable, as itsexpected valueorvariance. Practically, anensembleof chains is generally developed, starting from a set of points arbitrarily chosen and sufficiently distant from each other. These chains arestochastic processesof "walkers" which move around randomly according to an algorithm that looks for places with a reasonably high contribution to the integral to move into next, assigning them higher probabilities. Random walk Monte Carlo methods are a kind of randomsimulationorMonte Carlo method. However, whereas the random samples of the integrand used in a conventionalMonte Carlo integrationarestatistically independent, those used in MCMC areautocorrelated. Correlations of samples introduces the need to use theMarkov chain central limit theoremwhen estimating the error of mean values. These algorithms createMarkov chainssuch that they have anequilibrium distributionwhich is proportional to the function given. The development of MCMC methods is deeply rooted in the early exploration ofMonte Carlo(MC) techniques in the mid-20th century, particularly in physics, marked by theMetropolis algorithmproposed byNicholas Metropolis,Arianna W. Rosenbluth,Marshall Rosenbluth,Augusta H. Teller, andEdward Tellerin 1953, designed to tackle high-dimensional integration problems using early computers.W. K. Hastingsgeneralized this algorithm in 1970 and inadvertently introduced the component-wise updating idea later known asGibbs sampling, while theoretical foundations for Gibbs sampling, such as theHammersley–Clifford theorem(published viaJulian Besag's 1974 paper), were also developing. Although the seeds of MCMC were sown earlier, including the formal naming of Gibbs sampling in image processing byStuart GemanandDonald Geman(1984) and thedata augmentationmethod by Martin A. Tanner andWing Hung Wong(1987), its "revolution" in mainstream statistics largely followed demonstrations of the universality and ease of implementation of sampling methods (especially Gibbs sampling) for complex statistical (particularlyBayesian) problems, spurred by increasing computational power and software likeBUGS. This transformation was accompanied by significant theoretical advancements, such asLuke Tierney's (1994) rigorous treatment of MCMC convergence, andJun S. Liu, Wong, andAugustine Kong's (1994, 1995) analysis of Gibbs sampler structure. Subsequent developments further expanded the MCMC toolkit, includingparticle filters(Sequential Monte Carlo) for sequential problems,Perfect samplingaiming for exact simulation (Jim Proppand David B. Wilson, 1996),RJMCMC(Peter J. 
Green, 1995) for handling variable-dimension models, and deeper investigations into convergence diagnostics and thecentral limit theorem. Overall, the evolution of MCMC represents a paradigm shift in statistical computation, enabling the analysis of numerous previously intractable complex models and continually expanding the scope and impact of statistics.[1] Suppose(Xₙ)is aMarkov Chainin the general state spaceX{\displaystyle {\mathcal {X}}}with specific properties. We are interested in the limiting behavior of the partial sums: asngoes to infinity. Particularly, we hope to establish theLaw of Large Numbersand theCentral Limit Theoremfor MCMC. In the following, we state some definitions and theorems necessary for the important convergence results.[2]In short, we need the existence of invariant measure and Harris recurrent to establish the Law of Large Numbers of MCMC (Ergodic Theorem). And we need aperiodicity, irreducibility and extra conditions such as reversibility to ensure the Central Limit Theorem holds in MCMC. Recall that in the discrete setting, aMarkov chainis said to beirreducibleif it is possible to reach any state from any other state in a finite number of steps with positive probability. However, in the continuous setting, point-to-point transitions have zero probability. In this case,φ-irreducibilitygeneralizesirreducibilityby using a reference measure φ on the measurable space(X,B(X)){\displaystyle ({\mathcal {X}},{\mathcal {B}}({\mathcal {X}}))}. Given a measureφ{\displaystyle \varphi }defined on(X,B(X)){\displaystyle ({\mathcal {X}},{\mathcal {B}}({\mathcal {X}}))}, the Markov chain(Xn){\displaystyle (X_{n})}with transition kernelK(x,y){\displaystyle K(x,y)}isφ-irreducibleif, for everyA∈B(X){\displaystyle A\in {\mathcal {B}}({\mathcal {X}})}withφ(A)>0{\displaystyle \varphi (A)>0}, there existsn{\displaystyle n}such thatKn(x,A)>0{\displaystyle K^{n}(x,A)>0}for allx∈X{\displaystyle x\in {\mathcal {X}}}(Equivalently,Px(τA<∞)>0{\displaystyle P_{x}(\tau _{A}<\infty )>0}, hereτA=inf{n≥1;Xn∈A}{\displaystyle \tau _{A}=\inf\{n\geq 1;X_{n}\in A\}}is the firstn{\displaystyle n}for which the chain enters the setA{\displaystyle A}). This is a more general definition forirreducibilityof aMarkov chainin non-discrete state space. In the discrete case, an irreducible Markov chain is said to beaperiodicif it has period 1. Formally, the period of a stateω∈X{\displaystyle \omega \in {\mathcal {X}}}is defined as: For the general (non-discrete) case, we define aperiodicity in terms of small sets: Aφ-irreducibleMarkov chain(Xn){\displaystyle (X_{n})}has acycle of length dif there exists a small setC{\displaystyle C}, an associated integerM{\displaystyle M}, and a probability distributionνM{\displaystyle \nu _{M}}such thatdis thegreatest common divisorof: A setC{\displaystyle C}is calledsmallif there existsm∈N∗{\displaystyle m\in \mathbb {N} ^{*}}and a nonzero measureνm{\displaystyle \nu _{m}}such that: A setA{\displaystyle A}isHarris recurrentifPx(ηA=∞)=1{\displaystyle P_{x}(\eta _{A}=\infty )=1}for allx∈A{\displaystyle x\in A}, whereηA=∑n=1∞IA(Xn){\displaystyle \eta _{A}=\sum _{n=1}^{\infty }\mathbb {I} _{A}(X_{n})}is the number of visits of the chain(Xn){\displaystyle (X_{n})}to the setA{\displaystyle A}. The chain(Xn){\displaystyle (X_{n})}is said to beHarris recurrentif there exists a measureψ{\displaystyle \psi }such that the chain isψ{\displaystyle \psi }-irreducible and every measurable setA{\displaystyle A}withψ(A)>0{\displaystyle \psi (A)>0}is Harris recurrent. 
A useful criterion for verifying Harris recurrence is the following: If for everyA∈B(X){\displaystyle A\in {\mathcal {B}}({\mathcal {X}})}, we havePx(τA<∞)=1{\displaystyle P_{x}(\tau _{A}<\infty )=1}for everyx∈A{\displaystyle x\in A}, thenPx(ηA=∞)=1{\displaystyle P_{x}(\eta _{A}=\infty )=1}for allx∈X{\displaystyle x\in {\mathcal {X}}}, and the chain(Xn){\displaystyle (X_{n})}is Harris recurrent. This definition is only needed when the state spaceX{\displaystyle {\mathcal {X}}}is uncountable. In the countable case, recurrence corresponds toEx[ηx]=∞{\displaystyle \mathbb {E} _{x}[\eta _{x}]=\infty }, which is equivalent toPx(τx<∞)=1{\displaystyle P_{x}(\tau _{x}<\infty )=1}for allx∈X{\displaystyle x\in {\mathcal {X}}}. Aσ{\displaystyle \sigma }-finite measureπ{\displaystyle \pi }is said to beinvariantfor the transition kernelK(⋅,⋅){\displaystyle K(\cdot ,\cdot )}(and the associated chain) if: When there exists aninvariant probability measurefor aψ-irreducible(hence recurrent) chain, the chain is said to bepositive recurrent. Recurrent chains that do not allow for a finite invariant measure are callednull recurrent. In applications of Markov Chain Monte Carlo (MCMC), a very useful criterion for Harris recurrence involves the use of bounded harmonic functions. A measurable functionh{\displaystyle h}is said to beharmonicfor the chain(Xn){\displaystyle (X_{n})}if: These functions areinvariantunder the transition kernel in the functional sense, and they help characterize Harris recurrence. For a positive Markov chain, if the only bounded harmonic functions are the constant functions, then the chain is Harris recurrent. If(Xn){\displaystyle (X_{n})}has aσ{\displaystyle \sigma }-finite invariant measureπ{\displaystyle \pi }, then the following two statements are equivalent: This theorem provides a fundamental justification for the use of Markov Chain Monte Carlo (MCMC) methods, and it serves as the counterpart of theLaw of Large Numbers(LLN) in classical Monte Carlo. An important aspect of this result is thatπ{\displaystyle \pi }does not need to be a probability measure. Therefore, there can be some type of strong stability even if the chain is null recurrent. Moreover, the Markov chain can be started from arbitrary state. Ifπ{\displaystyle \pi }is a probability measure, we can letg≡1{\displaystyle g\equiv 1}and get This is the Ergodic Theorem that we are more familiar with. There are several conditions under which theCentral Limit Theorem(CLT) holds for Markov chain Monte Carlo (MCMC) methods. One of the most commonly used is the condition ofreversibility. A stationary Markov chain(Xn){\displaystyle (X_{n})}is said to bereversibleif the distribution ofXn+1{\displaystyle X_{n+1}}givenXn+2=x{\displaystyle X_{n+2}=x}is the same as the distribution ofXn+1{\displaystyle X_{n+1}}givenXn=x{\displaystyle X_{n}=x}. This is equivalent to thedetailed balance condition, which is defined as follows: A Markov chain with transition kernelK{\displaystyle K}satisfies thedetailed balance conditionif there exists a functionf{\displaystyle f}such that: for every pair(x,y){\displaystyle (x,y)}in the state space. If(Xn){\displaystyle (X_{n})}is aperiodic, irreducible, and reversible with invariant distributionπ{\displaystyle \pi }, then: where and Even though reversibility is a restrictive assumption in theory, it is often easily satisfied in practical MCMC algorithms by introducing auxiliary variables or using symmetric proposal mechanisms. 
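A small numerical Python sketch of the detailed balance condition: build the Metropolis transition matrix for a toy discrete target with a symmetric nearest-neighbour proposal, then check that π(x)K(x,y) = π(y)K(y,x), which in turn implies that π is invariant. The five-state target weights are an arbitrary illustration.

```python
import numpy as np

# Unnormalized target on five states; the proposal moves to a neighbour
# (left or right, each with probability 1/2) and the chain stays put if the
# move is rejected or would leave the state space.
weights = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
pi = weights / weights.sum()
n = len(pi)

K = np.zeros((n, n))
for x in range(n):
    for step in (-1, +1):
        y = x + step
        if 0 <= y < n:
            K[x, y] = 0.5 * min(1.0, pi[y] / pi[x])   # Metropolis acceptance
    K[x, x] = 1.0 - K[x].sum()                        # remaining mass stays at x

flow = pi[:, None] * K                # flow[x, y] = pi(x) K(x, y)
assert np.allclose(flow, flow.T)      # detailed balance: pi(x)K(x,y) = pi(y)K(y,x)
assert np.allclose(pi @ K, pi)        # hence pi is an invariant distribution
print("detailed balance and invariance hold")
```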
There are many other conditions that can be used to establish CLT for MCMC such as geometirc ergodicity and the discrete state space. MCMC methods produce autocorrelated samples, in contrast to standard Monte Carlo techniques that draw independent samples.Autocorrelationmeans successive draws from the Markov chain are statistically dependent, so each new sample adds less fresh information than an independent draw would. As a result, one must account for this correlation when assessing the accuracy of estimates from the chain. In particular, positive autocorrelation in the chain increases the variance of estimators and slows the convergence of sample averages toward the true expectation. The effect of correlation on estimation can be quantified through theMarkov chain central limit theorem. For a chain targeting a distribution with varianceσ2{\displaystyle \sigma ^{2}}, the variance of the sample mean afterN{\displaystyle N}steps is approximatelyσ2/Neff{\displaystyle {\sigma ^{2}}{\big /}N_{\text{eff}}}, whereNeff{\displaystyle N_{\text{eff}}}is an effective sample size smaller thanN{\displaystyle N}. Equivalently, one can express this as: whereX¯N{\displaystyle {\bar {X}}_{N}}is the sample mean andρk{\displaystyle \rho _{k}}is the autocorrelation of the chain at lagk{\displaystyle k}, defined asρk=Cov(X0,Xk)Var(X0)Var(Xk){\displaystyle \rho _{k}={\frac {\mathrm {Cov} (X_{0},X_{k})}{\sqrt {\mathrm {Var} (X_{0})\mathrm {Var} (X_{k})}}}}. The term in parentheses,1+2∑k=1∞ρk{\displaystyle 1+2\sum _{k=1}^{\infty }\rho _{k}}, is often called the integrated autocorrelation. When the chain has no autocorrelation (ρk=0{\displaystyle \rho _{k}=0}for allk≥1{\displaystyle k\geq 1}), this factor equals 1, and one recovers the usualσ2/N{\displaystyle \sigma ^{2}/N}variance for independent samples. If the chain’s samples are highly correlated, the sum of autocorrelations is large, leading to a much bigger variance forX¯N{\displaystyle {\bar {X}}_{N}}than in the independent case. The effective sample sizeNeff{\displaystyle N_{\text{eff}}}is a useful diagnostic that translates the autocorrelation in a chain into an equivalent number of independent samples. It is defined by the formula: so thatNeff{\displaystyle N_{\text{eff}}}is the number of independent draws that would yield the same estimation precision as theN{\displaystyle N}dependent draws from the Markov chain. For example, if1+2∑k=1∞ρk=5{\displaystyle 1+2\sum _{k=1}^{\infty }\rho _{k}=5}, thenNeff=N/5{\displaystyle N_{\text{eff}}=N/5}, meaning the chain of lengthN{\displaystyle N}carries information equivalent toN/5{\displaystyle N/5}independent samples. In an ideal scenario with no correlation,ρk=0{\displaystyle \rho _{k}=0}and thusNeff≈N{\displaystyle N_{\text{eff}}\approx N}. But in a poorly mixing chain with strong autocorrelation,Neff{\displaystyle N_{\text{eff}}}can be much smaller thanN{\displaystyle N}. In practice, monitoring the ESS for each parameter is a way to gauge how much correlation is present: a low ESS indicates that many more iterations may be needed to achieve a desired effective sample of independent draws. While MCMC methods were created to address multi-dimensional problems better than generic Monte Carlo algorithms, when the number of dimensions rises they too tend to suffer thecurse of dimensionality: regions of higher probability tend to stretch and get lost in an increasing volume of space that contributes little to the integral. 
One way to address this problem could be shortening the steps of the walker, so that it does not continuously try to exit the highest probability region, though this way the process would be highly autocorrelated and expensive (i.e. many steps would be required for an accurate result). More sophisticated methods such asHamiltonian Monte Carloand theWang and Landau algorithmuse various ways of reducing this autocorrelation, while managing to keep the process in the regions that give a higher contribution to the integral. These algorithms usually rely on a more complicated theory and are harder to implement, but they usually converge faster. We outline several general strategies such as reparameterization, adaptive proposal tuning, parameter blocking, and overrelaxation that help reduce correlation and improve sampling efficiency within the standard MCMC framework. One way to reduce autocorrelation is to reformulate or reparameterize the statistical model so that the posterior geometry leads to more efficient sampling. By changing the coordinate system or using alternative variable definitions, one can often lessen correlations. For example, inBayesian hierarchical modeling, a non-centered parameterization can be used in place of the standard (centered) formulation to avoid extreme posterior correlations between latent and higher-level parameters. This involves expressinglatent variablesin terms of independent auxiliary variables, dramatically improving mixing. Such reparameterization strategies are commonly employed in bothGibbs samplingandMetropolis–Hastings algorithmto enhance convergence and reduce autocorrelation.[3] Another approach to reducing correlation is to improve the MCMC proposal mechanism. InMetropolis–Hastings algorithm, step size tuning is critical: if the proposed steps are too small, the sampler moves slowly and produces highly correlated samples; if the steps are too large, many proposals are rejected, resulting in repeated values. Adjusting the proposal step size during an initial testing phase helps find a balance where the sampler explores the space efficiently without too many rejections. Adaptive MCMC methods modify proposal distributions based on the chain's past samples. For instance, adaptive metropolis algorithm updates the Gaussian proposal distribution using the full information accumulated from the chain so far, allowing the proposal to adapt over time.[4] Parameter blocking is a technique that reduces autocorrelation in MCMC by updating parameters jointly rather than one at a time. When parameters exhibit strong posterior correlations, one-by-one updates can lead to poor mixing and slow exploration of the target distribution. By identifying and sampling blocks of correlated parameters together, the sampler can more effectively traverse high-density regions of the posterior. Parameter blocking is commonly used in both Gibbs sampling and Metropolis–Hastings algorithms. In blocked Gibbs sampling, entire groups of variables are updated conditionally at each step.[5]In Metropolis–Hastings, multivariate proposals enable joint updates (i.e., updates of multiple parameters at once using a vector-valued proposal distribution, typically a multivariate Gaussian), though they often require careful tuning of the proposal covariance matrix.[6] Overrelaxation is a technique to reduce autocorrelation between successive samples by proposing new samples that are negatively correlated with the current state. 
This helps the chain explore the posterior more efficiently, especially in high-dimensional Gaussian models or when using Gibbs sampling. The basic idea is to reflect the current sample across the conditional mean, producing proposals that retain the correct stationary distribution but with reduced serial dependence. Overrelaxation is particularly effective when combined with Gaussian conditional distributions, where exact reflection or partial overrelaxation can be analytically implemented.[7] Interacting MCMC methodologies are a class ofmean-field particle methodsfor obtainingrandom samplesfrom a sequence of probability distributions with an increasing level of sampling complexity.[14]These probabilistic models include path space state models with increasing time horizon, posterior distributions w.r.t. sequence of partial observations, increasing constraint level sets for conditional distributions, decreasing temperature schedules associated with some Boltzmann–Gibbs distributions, and many others. In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler. These interacting Markov chain Monte Carlo samplers can be interpreted as a way to run in parallel a sequence of Markov chain Monte Carlo samplers. For instance, interactingsimulated annealingalgorithms are based on independent Metropolis–Hastings moves interacting sequentially with a selection-resampling type mechanism. In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain Monte Carlo samplers isonlyrelated to the number of interacting Markov chain Monte Carlo samplers. These advanced particle methodologies belong to the class of Feynman–Kac particle models,[15][16]also called Sequential Monte Carlo orparticle filtermethods inBayesian inferenceandsignal processingcommunities.[17]Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selectiongenetic particle algorithmwith Markov chain Monte Carlo mutations. Thequasi-Monte Carlo methodis an analog to the normal Monte Carlo method that useslow-discrepancy sequencesinstead of random numbers.[18][19]It yields an integration error that decays faster than that of true random sampling, as quantified by theKoksma–Hlawka inequality. Empirically it allows the reduction of both estimation error and convergence time by an order of magnitude.[18]Markov chain quasi-Monte Carlo methods[20][21]such as the Array–RQMC method combine randomized quasi–Monte Carlo and Markov chain simulation by simulatingn{\displaystyle n}chains simultaneously in a way that better approximates the true distribution of the chain than with ordinary MCMC.[22]In empirical experiments, the variance of the average of a function of the state sometimes converges at rateO(n−2){\displaystyle O(n^{-2})}or even faster, instead of theO(n−1){\displaystyle O(n^{-1})}Monte Carlo rate.[23] MCMC methods are primarily used for calculatingnumerical approximationsofmulti-dimensional integrals, for example inBayesian statistics,computational physics,[24]computational biology[25]andcomputational linguistics.[26][27] In Bayesian statistics, Markov chain Monte Carlo methods are typically used to calculatemomentsandcredible intervalsofposterior probabilitydistributions. 
The use of MCMC methods makes it possible to compute largehierarchical modelsthat require integrations over hundreds to thousands of unknown parameters.[28] Many contemporary research problems in statistical physics can be addressed by approximate solutions using Monte Carlo simulation, which provides valuable insights into the properties of complex systems. Monte Carlo methods are fundamental in computational physics, physical chemistry, and related disciplines, with broad applications including medical physics, where they are employed to model radiation transport for radiation dosimetry calculations.[29][30]Instead of exhaustively analyzing all possible system states, the Monte Carlo method randomly examines a subset of them to form a representative sample, and yields accurate approximations of the system’s characteristic properties. As the number of sampled states increases, the error can be further reduced to a lower level. Langevin Dynamics are typically used in complex distribution sampling and generative modeling,[31][32]via an MCMC procedure. Specifically, given the probability density functionp(x){\displaystyle p(x)}, we use its log gradient∇xlog⁡p(x){\displaystyle \nabla _{x}\log p(x)}as the score function and start from a prior distributionx0∼p0{\displaystyle x_{0}\sim p_{0}}. Then, a chain is built by fori=0,…,K{\displaystyle i=0,\dots ,K}. Whenϵ→0{\displaystyle \epsilon \rightarrow 0}andK→∞{\displaystyle K\rightarrow \infty },xK{\displaystyle x_{K}}converges to a sample from the target distributionp(x){\displaystyle p(x)}. For some complex distribution, if we know its probability density function but find it difficult to directly sample from it, we can apply Langevin Dynamics as an alternate. However, in most cases, especially generative modeling, usually we do not know the exact probability density function of the target distribution we wish to sample from, neither the score function∇xlog⁡p(x){\displaystyle \nabla _{x}\log p(x)}. In this case, score matching methods[33][34][35]provide feasible solutions, minimizing theFisher information metricbetween a parameterized score-based modelsθ(x){\displaystyle s_{\theta }(x)}and the score function without knowing the ground-truth data score. The score function can be estimated on a training dataset bystochastic gradient descent. In real cases, however, the training data only takes a small region of the target distribution, and the estimated score functions are inaccurate in other low density regions with fewer available data examples. To overcome this challenge, denoising score matching[32][34][36]methods purturb the available data examples with noise of different scales, which can improve the coverage of low density regions, and use them as the training dataset for the score-base model. Note that the choice of noise scales is tricky, as too large noise will corrupt the original data, while too small noise will not populate the original data to those low density regions. Thus, carefully crafted noise schedules[32][35][36]are applied for higher quality generation. Usually it is not hard to construct a Markov chain with the desired properties. The more difficult problem is to determine (1) when to start collecting statistics and (2) how many steps are needed to converge to the stationary distribution within an acceptable error.[37][38]Fortunately, there are a variety of practical diagnostics to empirically assess convergence. 
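Before turning to convergence diagnostics, here is a minimal Python sketch of the Langevin dynamics chain described above, assuming the commonly used step convention x_{i+1} = x_i + (ε/2)∇x log p(x_i) + √ε z_i with z_i ~ N(0, I) (step-size conventions vary between papers). The standard normal target, whose score is simply −x, is an illustrative choice.

```python
import numpy as np

def langevin_sample(score, x0, eps=1e-2, K=1000, rng=None):
    """Unadjusted Langevin dynamics: repeatedly apply
    x_{i+1} = x_i + (eps / 2) * score(x_i) + sqrt(eps) * z_i,  z_i ~ N(0, I),
    where score(x) is the gradient of log p at x."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(K):
        x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
    return x

score = lambda x: -x                   # standard normal target: grad log p(x) = -x
rng = np.random.default_rng(0)
samples = np.array([langevin_sample(score, [5.0, -5.0], rng=rng) for _ in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))   # roughly [0, 0] and [1, 1]
```

Whether such a chain has run long enough is assessed with the convergence diagnostics discussed next.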
Formally, letπ{\displaystyle \pi }denote the stationary distribution andPt(x,⋅){\displaystyle P^{t}(x,\cdot )}the distribution of the Markov chain aftert{\displaystyle t}steps starting from statex{\displaystyle x}. Theoretically, convergence can be quantified by measuring thetotal variation distance: A chain is said to mix rapidly ifdTV(Pt(x,⋅),π)≤ϵ{\displaystyle d_{\text{TV}}(P^{t}(x,\cdot ),\pi )\leq \epsilon }for allx∈X{\displaystyle x\in {\mathcal {X}}}within a small number of stepst{\displaystyle t}under a pre-defined toleranceϵ>0{\displaystyle \epsilon >0}. In other words, the stationary distribution is reached quickly starting from an arbitrary position, and the minimum sucht{\displaystyle t}is known as themixing time. In practice, however, the total variation distance is generally intractable to compute, especially in high-dimensional problems or when the stationary distribution is only known up to a normalizing constant (as in most Bayesian applications). TheGelman-Rubin statistic, also known as thepotential scale reduction factor (PSRF), evaluates MCMC convergence by sampling multiple independent Markov chains and comparing within-chain and between-chain variances.[39]If all chains have converged to the same stationary distribution, the between-chain and within-chain variances should be similar, and thus the PSRF must approach 1. In practice, a value of<1.1{\displaystyle <1.1}is often taken as evidence of convergence. Higher values suggest that the chains are still exploring different parts of the target distribution. The Geweke diagnostic examines whether the distribution of samples in the early part of the Markov chain is statistically indistinguishable from the distribution in a later part.[40]Given a sequence of correlated MCMC samples{X1,X2,…,Xn}{\displaystyle \{X_{1},X_{2},\dots ,X_{n}\}}, the diagnostic splits the chain into an early segment consisting of the firstnA{\displaystyle n_{A}}samples, typically chosen asnA=0.1n{\displaystyle n_{A}=0.1n}(i.e., the first 10% of the chain), and a late segment consisting of the lastnB{\displaystyle n_{B}}samples, typically chosen asnB=0.5n{\displaystyle n_{B}=0.5n}(i.e., the last 50% of the chain) Denote the sample means of these segments as: Since MCMC samples are autocorrelated, a simple comparison of sample means is insufficient. Instead, the difference in means is standardized using an estimator of the spectral density at zero frequency, which accounts for the long-range dependencies in the chain. The test statistic is computed as: whereS^(0){\displaystyle {\hat {S}}(0)}is an estimate of the long-run variance (i.e., the spectral density at frequency zero), commonly estimated usingNewey-West estimatorsor batch means. Under the null hypothesis of convergence, the statisticZ{\displaystyle Z}follows an approximately standard normal distributionZ∼N(0,1){\displaystyle Z\sim {\mathcal {N}}(0,1)}. If|Z|>1.96{\displaystyle |Z|>1.96}, the null hypothesis is rejected at the 5% significance level, suggesting that the chain has not yet reached stationarity. The Heidelberger-Welch diagnostic is grounded inspectral analysisandBrownian motion theory, and is particularly useful in the early stages of simulation to determine appropriate burn-in and stopping time.[41][42]The diagnostic consists of two components, astationarity testthat assesses whether the Markov chain has reached a steady-state, and ahalf-width testthat determines whether the estimated expectation is within a user-specified precision. 
Let{Xt}t=1n{\displaystyle \{X_{t}\}_{t=1}^{n}}be the output of an MCMC simulation for a scalar functiong(Xt){\displaystyle g(X_{t})}, andg1,g2,…,gn{\displaystyle g_{1},g_{2},\dots ,g_{n}}the evaluations of the functiong{\displaystyle g}over the chain. Define the standardized cumulative sum process: whereg¯n=1n∑i=1ngi{\displaystyle {\bar {g}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}g_{i}}is the sample mean andS^(0){\displaystyle {\hat {S}}(0)}is an estimate of the spectral density at frequency zero. Under the null hypothesis of convergence, the processBn(t){\displaystyle B_{n}(t)}converges in distribution to aBrownian bridge. The followingCramér-von Mises statisticis used to test for stationarity: This statistic is compared against known critical values from the Brownian bridge distribution. If the null hypothesis is rejected, the first 10% of the samples are discarded and the test can be repeated on the remaining chain until either stationarity is accepted or 50% of the chain is discarded. Once stationarity is accepted, the second part of the diagnostic checks whether the Monte Carlo estimator is accurate enough for practical use. Assuming the central limit theorem holds, the confidence interval for the meanEπ[g(X)]{\displaystyle \mathbb {E} _{\pi }[g(X)]}is given by whereσ^2{\displaystyle {\hat {\sigma }}^{2}}is an estimate of the variance ofg(X){\displaystyle g(X)},tα/2,ν{\displaystyle t_{\alpha /2,\nu }}is theStudent'st{\displaystyle t}critical value at confidence level1−α{\displaystyle 1-\alpha }and degrees of freedomν{\displaystyle \nu },n{\displaystyle n}is the number of samples used. Thehalf-widthof this interval is defined as If the half-width is smaller than a user-defined tolerance (e.g., 0.05), the chain is considered long enough to estimate the expectation reliably. Otherwise, the simulation should be extended. The Raftery-Lewis diagnostic is specifically designed to assess how many iterations are needed to estimate quantiles or tail probabilities of the target distribution with a desired accuracy and confidence.[43]Unlike Gelman-Rubin or Geweke diagnostics, which are based on assessing convergence to the entire distribution, the Raftery-Lewis diagnostic is goal-oriented as it provides estimates for the number of samples required to estimate a specific quantile of interest within a desired margin of error. Letq{\displaystyle q}denote the desired quantile (e.g., 0.025) of a real-valued functiong(X){\displaystyle g(X)}: in other words, the goal is to findu{\displaystyle u}such thatP(g(X)≤u)=q{\displaystyle P(g(X)\leq u)=q}. Suppose we wish to estimate this quantile such that the estimate falls within marginε{\displaystyle \varepsilon }of the true value with probability1−α{\displaystyle 1-\alpha }. That is, we want The diagnostic proceeds by converting the output of the MCMC chain into a binary sequence: whereI(⋅){\displaystyle I(\cdot )}is the indicator function. The sequence{Wn}{\displaystyle \{W_{n}\}}is treated as a realization from a two-state Markov chain. While this may not be strictly true, it is often a good approximation in practice. From the empirical transitions in the binary sequence, the Raftery-Lewis method estimates: whereΦ−1(⋅){\displaystyle \Phi ^{-1}(\cdot )}is the standard normal quantile function. Several software programs provide MCMC sampling capabilities, for example:
https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo
Innumerical analysis, thequasi-Monte Carlo methodis a method fornumerical integrationand solving some other problems usinglow-discrepancy sequences(also called quasi-random sequences or sub-random sequences) to achievevariance reduction. This is in contrast to the regularMonte Carlo methodorMonte Carlo integration, which are based on sequences ofpseudorandomnumbers. Monte Carlo and quasi-Monte Carlo methods are stated in a similar way. The problem is to approximate the integral of a functionfas the average of the function evaluated at a set of pointsx1, ...,xN: Since we are integrating over thes-dimensionalunit cube, eachxiis a vector ofselements. The difference between quasi-Monte Carlo and Monte Carlo is the way thexiare chosen. Quasi-Monte Carlo uses a low-discrepancy sequence such as theHalton sequence, theSobol sequence, or the Faure sequence, whereas Monte Carlo uses a pseudorandom sequence. The advantage of using low-discrepancy sequences is a fasterrate of convergence. Quasi-Monte Carlo has a rate of convergence close to O(1/N), whereas the rate for the Monte Carlo method is O(N−0.5).[1] The Quasi-Monte Carlo method recently became popular in the area ofmathematical financeorcomputational finance.[1]In these areas, high-dimensional numerical integrals, where the integral should be evaluated within a threshold ε, occur frequently. Hence, the Monte Carlo method and the quasi-Monte Carlo method are beneficial in these situations. The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the setx1, ...,xN. Specifically, theKoksma–Hlawka inequalitystates that the error is bounded by whereV(f) is the Hardy–Krause variation of the functionf(see Morokoff and Caflisch (1995)[2]for the detailed definitions).DNis the so-called star discrepancy of the set (x1,...,xN) and is defined as whereQis a rectangular solid in [0,1]swith sides parallel to the coordinate axes.[2]The inequality|ε|≤V(f)DN{\displaystyle |\varepsilon |\leq V(f)D_{N}}can be used to show that the error of the approximation by the quasi-Monte Carlo method isO((log⁡N)sN){\displaystyle O\left({\frac {(\log N)^{s}}{N}}\right)}, whereas the Monte Carlo method has a probabilistic error ofO(1N){\displaystyle O\left({\frac {1}{\sqrt {N}}}\right)}. Thus, for sufficiently largeN{\displaystyle N}, quasi-Monte Carlo will always outperform random Monte Carlo. However,log⁡(N)s{\displaystyle \log(N)^{s}}grows exponentially quickly with the dimension, meaning a poorly-chosen sequence can be much worse than Monte Carlo in high dimensions. In practice, it is almost always possible to select an appropriate low-discrepancy sequence, or apply an appropriate transformation to the integrand, to ensure that quasi-Monte Carlo performs at least as well as Monte Carlo (and often much better).[1] For one-dimensional integration, quadrature methods such as thetrapezoidal rule,Simpson's rule, orNewton–Cotes formulasare known to be efficient if the function is smooth. These approaches can be also used for multidimensional integrations by repeating the one-dimensional integrals over multiple dimensions. However, the number of function evaluations grows exponentially ass, the number of dimensions, increases. Hence, a method that can overcome thiscurse of dimensionalityshould be used for multidimensional integrations. 
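A minimal Python sketch comparing quasi-Monte Carlo and Monte Carlo estimates of a smooth two-dimensional integral, using a hand-rolled Halton sequence built from radical inverses in bases 2 and 3; the test integrand and the sample size are illustrative choices.

```python
import numpy as np

def radical_inverse(n, base):
    """Van der Corput radical inverse of the integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(N, primes=(2, 3)):
    """First N points of the Halton sequence in [0,1)^s, with s = len(primes)."""
    return np.array([[radical_inverse(i, b) for b in primes] for i in range(1, N + 1)])

f = lambda x: x[:, 0] * x[:, 1]              # exact integral over [0,1]^2 is 0.25
N = 4096
qmc_estimate = f(halton(N)).mean()                               # low-discrepancy points
mc_estimate = f(np.random.default_rng(0).random((N, 2))).mean()  # pseudorandom points
print(abs(qmc_estimate - 0.25), abs(mc_estimate - 0.25))
```

For smooth integrands like this one, the quasi-Monte Carlo error is typically far smaller at the same N, consistent with the O((log N)^s / N) versus O(N^{-1/2}) rates above.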
The standard Monte Carlo method is frequently used when the quadrature methods are difficult or expensive to implement.[2] Monte Carlo and quasi-Monte Carlo methods are accurate and relatively fast when the dimension is high, up to 300 or higher.[3] Morokoff and Caflisch[2] studied the performance of Monte Carlo and quasi-Monte Carlo methods for integration. In the paper, Halton, Sobol, and Faure sequences for quasi-Monte Carlo are compared with the standard Monte Carlo method using pseudorandom sequences. They found that the Halton sequence performs best for dimensions up to around 6; the Sobol sequence performs best for higher dimensions; and the Faure sequence, while outperformed by the other two, still performs better than a pseudorandom sequence. However, Morokoff and Caflisch[2] gave examples where the advantage of quasi-Monte Carlo is less than expected theoretically. Still, in the examples studied by Morokoff and Caflisch, the quasi-Monte Carlo method did yield a more accurate result than the Monte Carlo method with the same number of points. Morokoff and Caflisch remark that the advantage of the quasi-Monte Carlo method is greater if the integrand is smooth and the number of dimensions s of the integral is small. Lemieux mentioned the drawbacks of quasi-Monte Carlo:[4] In order to overcome some of these difficulties, we can use a randomized quasi-Monte Carlo method. Since low-discrepancy sequences are not random but deterministic, the quasi-Monte Carlo method can be seen as a deterministic algorithm or derandomized algorithm. In this case, we only have the bound (e.g., ε ≤ V(f)DN) on the error, and the error is hard to estimate. In order to recover the ability to analyze and estimate the variance, we can randomize the method (see randomization for the general idea). The resulting method is called the randomized quasi-Monte Carlo method and can also be viewed as a variance reduction technique for the standard Monte Carlo method.[5] Among several methods, the simplest transformation procedure is random shifting. Let {x1, ..., xN} be the point set from the low-discrepancy sequence. We sample an s-dimensional random vector U and mix it with {x1, ..., xN}. In detail, for each xj, create yj = (xj + U) mod 1 and use the sequence (yj){\displaystyle (y_{j})} instead of (xj){\displaystyle (x_{j})}. If we have R replications for Monte Carlo, we sample a new s-dimensional random vector U for each replication. Randomization makes it possible to estimate the variance while still using quasi-random sequences. Compared to pure quasi-Monte Carlo, the number of samples of the quasi-random sequence will be divided by R for an equivalent computational cost, which reduces the theoretical convergence rate. Compared to standard Monte Carlo, experimental results in Tuffin (2008) show slightly better variance and computation speed.[6]
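A hedged sketch of the random-shift randomization just described (the Cranley–Patterson rotation): each replication adds one uniform shift vector U to every point modulo 1, and the spread of the R replicate estimates yields a variance estimate that deterministic quasi-Monte Carlo cannot provide. The Halton helper from the previous sketch is repeated so the example is self-contained; the integrand and parameters are illustrative.

```python
import numpy as np

def radical_inverse(n, base):
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(N, primes=(2, 3)):
    return np.array([[radical_inverse(i, b) for b in primes] for i in range(1, N + 1)])

def random_shift_estimates(f, points, R=16, rng=None):
    """Randomly shifted QMC: y_j = (x_j + U) mod 1, one independent U per replication."""
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for _ in range(R):
        U = rng.random(points.shape[1])     # one uniform shift vector per replication
        shifted = (points + U) % 1.0        # the shift preserves low discrepancy
        estimates.append(f(shifted).mean())
    return np.array(estimates)

f = lambda x: x[:, 0] * x[:, 1]             # exact integral over [0,1]^2 is 0.25
est = random_shift_estimates(f, halton(1024), rng=np.random.default_rng(1))
# The replicate mean is the estimate; the replicate spread gives its standard error.
print(est.mean(), est.std(ddof=1) / np.sqrt(len(est)))
```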
https://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method
Sparse grids are numerical techniques to represent, integrate or interpolate high-dimensional functions. They were originally developed by the Russian mathematician Sergey A. Smolyak, a student of Lazar Lyusternik, and are based on a sparse tensor product construction. Computer algorithms for efficient implementations of such grids were later developed by Michael Griebel and Christoph Zenger. The standard way of representing multidimensional functions is with tensor or full grids. The number of basis functions or nodes (grid points) that have to be stored and processed depends exponentially on the number of dimensions. The curse of dimensionality is expressed in the order of the integration error that is made by a quadrature of level l, with Nl points. The function has regularity r, i.e. is r times differentiable. The number of dimensions is d. {\displaystyle |E_{l}|=O(N_{l}^{-{\frac {r}{d}}})} Smolyak found a computationally more efficient method of integrating multidimensional functions based on a univariate quadrature rule {\displaystyle Q^{(1)}}. The d-dimensional Smolyak integral {\displaystyle Q^{(d)}} of a function f can be written as a recursion formula with the tensor product: {\displaystyle Q_{l}^{(d)}f=\left(\sum _{i=1}^{l}\left(Q_{i}^{(1)}-Q_{i-1}^{(1)}\right)\otimes Q_{l-i+1}^{(d-1)}\right)f} The index to Q is the level of the discretization. If a one-dimensional integration on level i is computed by the evaluation of {\displaystyle O(2^{i})} points, the error estimate for a function of regularity r will be {\displaystyle |E_{l}|=O\left(N_{l}^{-r}\left(\log N_{l}\right)^{(d-1)(r+1)}\right)} This mathematical analysis–related article is a stub. You can help Wikipedia by expanding it.
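The following is a minimal sketch of the Smolyak recursion above, assuming nested composite trapezoid rules on [0, 1] as the univariate quadrature Q_i^(1); the test integrand exp(x + y) and the chosen level are illustrative, not from the article.

```python
import numpy as np

def trapezoid_rule(level):
    """1-D composite trapezoid rule on [0, 1] with 2**level + 1 points."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return list(zip(x, w))

def smolyak(level, dim):
    """Points and weights of the Smolyak rule Q_level^(dim), via the recursion above."""
    if dim == 1:
        return [((x,), w) for x, w in trapezoid_rule(level)]
    rule = {}
    for i in range(1, level + 1):
        # Difference rule Q_i^(1) - Q_{i-1}^(1); Q_0^(1) is the zero rule.
        diff = [(x, w) for x, w in trapezoid_rule(i)]
        if i > 1:
            diff += [(x, -w) for x, w in trapezoid_rule(i - 1)]
        tail = smolyak(level - i + 1, dim - 1)   # lower-dimensional Smolyak rule
        for x1, w1 in diff:
            for xs, ws in tail:
                key = (x1,) + xs
                rule[key] = rule.get(key, 0.0) + w1 * ws
    return list(rule.items())

# Integrate f(x, y) = exp(x + y) over the unit square; the exact value is (e - 1)^2.
points_and_weights = smolyak(level=5, dim=2)
approx = sum(w * np.exp(sum(x)) for x, w in points_and_weights)
print(approx, (np.e - 1) ** 2, "using", len(points_and_weights), "nodes")
```

Because the trapezoid grids are nested, many of the difference terms cancel, so the sparse rule uses far fewer nodes than the corresponding full tensor grid.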
https://en.wikipedia.org/wiki/Sparse_grid
In survey methodology, one-dimensional systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equiprobability method.[1] This applies in particular when the sampled units are individuals, households or corporations. When a geographic area is sampled for a spatial analysis, bi-dimensional systematic sampling on an area sampling frame can be applied.[2] In one-dimensional systematic sampling, progression through the list is treated circularly, with a return to the top once the list ends. The sampling starts by selecting an element from the list at random and then every kth element in the frame is selected, where k is the sampling interval (sometimes known as the skip): this is calculated as[3] {\displaystyle k={\frac {N}{n}},} where n is the sample size, and N is the population size. Using this procedure each element in the population has a known and equal probability of selection (also known as epsem). This makes systematic sampling functionally similar to simple random sampling (SRS). However, it is not the same as SRS because not every possible sample of a certain size has an equal chance of being chosen (e.g. samples with at least two elements adjacent to each other will never be chosen by systematic sampling). It is, however, much more efficient (if the variance within a systematic sample is more than the variance of the population).[citation needed] Systematic sampling is to be applied only if the given population is logically homogeneous, because systematic sample units are uniformly distributed over the population. The researcher must ensure that the chosen sampling interval does not hide a pattern. Any pattern would threaten randomness. Example: suppose a supermarket wants to study the buying habits of its customers; using systematic sampling it can choose every 10th or 15th customer entering the supermarket and conduct the study on this sample. This is random sampling with a system. From the sampling frame, a starting point is chosen at random, and choices thereafter are at regular intervals. For example, suppose you want to sample 8 houses from a street of 120 houses. 120/8 = 15, so every 15th house is chosen after a random starting point between 1 and 15. If the random starting point is 11, then the houses selected are 11, 26, 41, 56, 71, 86, 101, and 116. As an aside, if every 15th house were a "corner house" then this corner pattern could destroy the randomness of the sample. If, as happens more frequently, the population is not evenly divisible (suppose you want to sample 8 houses out of 125, where 125/8 = 15.625), should you take every 15th house or every 16th house? If you take every 16th house, 8 × 16 = 128, so there is a risk that the last house chosen does not exist. On the other hand, if you take every 15th house, 8 × 15 = 120, so the last five houses will never be selected. The random starting point should instead be selected as a non-integer between 0 and 15.625 (inclusive on one endpoint only) to ensure that every house has an equal chance of being selected; the interval is now non-integral (15.625); and each non-integer selected should be rounded up to the next integer. If the random starting point is 3.6, then the houses selected are 4, 20, 35, 50, 66, 82, 98, and 113, where there are 3 cyclic intervals of 15 and 4 intervals of 16. To illustrate the danger of a systematic skip concealing a pattern, suppose we were to sample a planned neighborhood where each street has ten houses on each block. This places houses No.
1, 10, 11, 20, 21, 30... on block corners; corner blocks may be less valuable, since more of their area is taken up by street front etc. that is unavailable for building purposes. If we then sample every 10th household, our sample will either be made uponlyof corner houses (if we start at 1 or 10) or havenocorner houses (any other start); either way, it will not be representative. Systematic sampling may also be used with non-equal selection probabilities. In this case, rather than simply counting through elements of the population and selecting everykthunit, we allocate each element a space along anumber lineaccording to its selection probability. We then generate a random start from a uniform distribution between 0 and 1, and move along the number line in steps of 1. Example: We have a population of 5 units (A to E). We want to give unit A a 20% probability of selection, unit B a 40% probability, and so on up to unit E (100%). Assuming we maintain alphabetical order, we allocate each unit to the following interval: If our random start was 0.156, we would first select the unit whose interval contains this number (i.e. A). Next, we would select the interval containing 1.156 (element C), then 2.156 (element E). If instead our random start was 0.350, we would select from points 0.350 (B), 1.350 (D), and 2.350 (E).
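The two procedures described above can be sketched in Python as follows. The helper names are made up for the example, and because the random start varies, a given run need not reproduce the exact house numbers in the article's worked example.

```python
import math
import random

def systematic_sample(population_size, sample_size, rng=random):
    """Equal-probability systematic sample of 1-based labels; the interval may be fractional."""
    k = population_size / sample_size            # sampling interval ("skip"), k = N / n
    start = rng.uniform(0, k)                    # random start within one interval
    # Map each selection point to the next whole unit label.
    return [math.floor(start + i * k) + 1 for i in range(sample_size)]

def systematic_pps(sizes, sample_size, rng=random):
    """Systematic sampling with selection probability proportional to the given sizes."""
    total = sum(sizes)
    step = total / sample_size
    start = rng.uniform(0, step)
    picks, cum, j = [], 0.0, 0
    for target in (start + i * step for i in range(sample_size)):
        while cum + sizes[j] <= target:          # walk along the number line of intervals
            cum += sizes[j]
            j += 1
        picks.append(j)                          # unit j's interval contains this point
    return picks

random.seed(3)
print(systematic_sample(125, 8))                      # eight labels in 1..125, spaced ~15.6 apart
print(systematic_pps([0.2, 0.4, 0.6, 0.8, 1.0], 3))   # indices of the units A..E selected
```

The second function reproduces the number-line procedure of the unequal-probability example: unit j is selected whenever one of the equally spaced points falls inside its allocated interval.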
https://en.wikipedia.org/wiki/Systematic_sampling
Minimax (sometimes Minmax, MM[1] or saddle point[2]) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty. The maximin value is the highest value that the player can be sure to get without knowing the actions of the other players; equivalently, it is the lowest value the other players can force the player to receive when they know the player's action. Its formal definition is:[3] {\displaystyle {\underline {v_{i}}}=\max _{a_{i}}\min _{a_{-i}}v_{i}(a_{i},a_{-i})} where vi is the value function of player i, ai is the action taken by player i, and a−i denotes the actions of all the other players. Calculating the maximin value of a player is done in a worst-case approach: for each possible action of the player, we check all possible actions of the other players and determine the worst possible combination of actions – the one that gives player i the smallest value. Then, we determine which action player i can take in order to make sure that this smallest value is the highest possible. For example, consider the following game for two players, where the first player ("row player") may choose any of three moves, labelled T, M, or B, and the second player ("column player") may choose either of two moves, L or R. The result of the combination of both moves is expressed in a payoff table (where the first number in each cell is the pay-out of the row player and the second number is the pay-out of the column player). For the sake of example, we consider only pure strategies. Checking each player in turn shows that if both players play their respective maximin strategies (T, L), the payoff vector is (3, 1). The minimax value of a player is the smallest value that the other players can force the player to receive, without knowing the player's actions; equivalently, it is the largest value the player can be sure to get when they know the actions of the other players. Its formal definition is:[3] {\displaystyle {\overline {v_{i}}}=\min _{a_{-i}}\max _{a_{i}}v_{i}(a_{i},a_{-i})} The definition is very similar to that of the maximin value – only the order of the maximum and minimum operators is inverted. For every player i, the maximin is at most the minimax: {\displaystyle {\underline {v_{i}}}\leq {\overline {v_{i}}}} Intuitively, in maximin the maximization comes after the minimization, so player i tries to maximize their value before knowing what the others will do; in minimax the maximization comes before the minimization, so player i is in a much better position – they maximize their value knowing what the others did. Another way to understand the notation is by reading from right to left: when we write {\displaystyle {\overline {v_{i}}}=\min _{a_{-i}}\max _{a_{i}}v_{i}(a_{i},a_{-i}),} the initial set of outcomes vi(ai, a−i) depends on both ai and a−i. We first marginalize away ai from vi(ai, a−i) by maximizing over ai (for every possible value of a−i) to yield a set of marginal outcomes v′i(a−i), which depends only on a−i. We then minimize over a−i over these outcomes. (Conversely for maximin.)
Although it is always the case that {\displaystyle {\underline {v_{row}}}\leq {\overline {v_{row}}}} and {\displaystyle {\underline {v_{col}}}\leq {\overline {v_{col}}}}, the payoff vector resulting from both players playing their minimax strategies, (2, −20) in the case of (T, R) or (−10, 1) in the case of (M, R), cannot similarly be ranked against the payoff vector (3, 1) resulting from both players playing their maximin strategy. In two-player zero-sum games, the minimax solution is the same as the Nash equilibrium. In the context of zero-sum games, the minimax theorem is equivalent to:[4][failed verification] For every two-person zero-sum game with finitely many strategies, there exists a value V and a mixed strategy for each player, such that (a) given Player 2's strategy, the best payoff possible for Player 1 is V, and (b) given Player 1's strategy, the best payoff possible for Player 2 is −V. Equivalently, Player 1's strategy guarantees them a payoff of V regardless of Player 2's strategy, and similarly Player 2 can guarantee themselves a payoff of −V. The name minimax arises because each player minimizes the maximum payoff possible for the other – since the game is zero-sum, they also minimize their own maximum loss (i.e., maximize their minimum payoff). See also example of a game without a value. The following example of a zero-sum game, where A and B make simultaneous moves, illustrates maximin solutions. Suppose each player has three choices and consider the payoff matrix for A displayed on the table ("Payoff matrix for player A"). Assume the payoff matrix for B is the same matrix with the signs reversed (i.e., if the choices are A1 and B1 then B pays 3 to A). Then, the maximin choice for A is A2 since the worst possible result is then having to pay 1, while the simple maximin choice for B is B2 since the worst possible result is then no payment. However, this solution is not stable, since if B believes A will choose A2 then B will choose B1 to gain 1; then if A believes B will choose B1 then A will choose A1 to gain 3; and then B will choose B2; and eventually both players will realize the difficulty of making a choice. So a more stable strategy is needed. Some choices are dominated by others and can be eliminated: A will not choose A3 since either A1 or A2 will produce a better result, no matter what B chooses; B will not choose B3 since some mixtures of B1 and B2 will produce a better result, no matter what A chooses. Player A can avoid having to make an expected payment of more than 1/3 by choosing A1 with probability 1/6 and A2 with probability 5/6: the expected payoff for A would be 3 × 1/6 − 1 × 5/6 = −1/3 in case B chose B1 and −2 × 1/6 + 0 × 5/6 = −1/3 in case B chose B2. Similarly, B can ensure an expected gain of at least 1/3, no matter what A chooses, by using a randomized strategy of choosing B1 with probability 1/3 and B2 with probability 2/3. These mixed minimax strategies cannot be improved and are now stable. Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain. "Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum gain, nor the same as the Nash equilibrium strategy. The minimax values are very important in the theory of repeated games.
One of the central theorems in this theory, thefolk theorem, relies on the minimax values. Incombinatorial game theory, there is a minimax algorithm for game solutions. Asimpleversion of the minimaxalgorithm, stated below, deals with games such astic-tac-toe, where each player can win, lose, or draw. If player Acanwin in one move, their best move is that winning move. If player B knows that one move will lead to the situation where player Acanwin in one move, while another move will lead to the situation where player A can, at best, draw, then player B's best move is the one leading to a draw. Late in the game, it's easy to see what the "best" move is. The minimax algorithm helps find the best move, by working backwards from the end of the game. At each step it assumes that player A is trying tomaximizethe chances of A winning, while on the next turn player B is trying tominimizethe chances of A winning (i.e., to maximize B's own chances of winning). Aminimax algorithm[5]is a recursivealgorithmfor choosing the next move in an n-playergame, usually a two-player game. A value is associated with each position or state of the game. This value is computed by means of aposition evaluation functionand it indicates how good it would be for a player to reach that position. The player then makes the move that maximizes the minimum value of the position resulting from the opponent's possible following moves. If it isA's turn to move,Agives a value to each of their legal moves. A possible allocation method consists in assigning a certain win forAas +1 and forBas −1. This leads tocombinatorial game theoryas developed byJohn H. Conway. An alternative is using a rule that if the result of a move is an immediate win forA, it is assigned positive infinity and if it is an immediate win forB, negative infinity. The value toAof any other move is the maximum of the values resulting from each ofB's possible replies. For this reason,Ais called themaximizing playerandBis called theminimizing player, hence the nameminimax algorithm. The above algorithm will assign a value of positive or negative infinity to any position since the value of every position will be the value of some final winning or losing position. Often this is generally only possible at the very end of complicated games such aschessorgo, since it is not computationally feasible to look ahead as far as the completion of the game, except towards the end, and instead, positions are given finite values as estimates of the degree of belief that they will lead to a win for one player or another. This can be extended if we can supply aheuristicevaluation function which gives values to non-final game states without considering all possible following complete sequences. We can then limit the minimax algorithm to look only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example, the chess computerDeep Blue(the first one to beat a reigning world champion,Garry Kasparovat that time) looked ahead at least 12 plies, then applied a heuristic evaluation function.[6] The algorithm can be thought of as exploring thenodesof agame tree. Theeffectivebranching factorof the tree is the average number ofchildrenof each node (i.e., the average number of legal moves in a position). The number of nodes to be explored usuallyincreases exponentiallywith the number of plies (it is less than exponential if evaluatingforced movesor repeated positions). 
The number of nodes to be explored for the analysis of a game is therefore approximately the branching factor raised to the power of the number of plies. It is thereforeimpracticalto completely analyze games such as chess using the minimax algorithm. The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the use ofalpha–beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to give the same result as the unpruned search. A naïve minimax algorithm may be trivially modified to additionally return an entirePrincipal Variationalong with a minimax score. Thepseudocodefor the depth-limited minimax algorithm is given below. The minimax function returns a heuristic value forleaf nodes(terminal nodes and nodes at the maximum search depth). Non-leaf nodes inherit their value from a descendant leaf node. The heuristic value is a score measuring the favorability of the node for the maximizing player. Hence nodes resulting in a favorable outcome, such as a win, for the maximizing player have higher scores than nodes more favorable for the minimizing player. The heuristic value for terminal (game ending) leaf nodes are scores corresponding to win, loss, or draw, for the maximizing player. For non terminal leaf nodes at the maximum search depth, an evaluation function estimates a heuristic value for the node. The quality of this estimate and the search depth determine the quality and accuracy of the final minimax result. Minimax treats the two players (the maximizing player and the minimizing player) separately in its code. Based on the observation thatmax(a,b)=−min(−a,−b),{\displaystyle \ \max(a,b)=-\min(-a,-b)\ ,}minimax may often be simplified into thenegamaxalgorithm. Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm generates thetreeon the right, where the circles represent the moves of the player running the algorithm (maximizing player), and squares represent the moves of the opponent (minimizing player). Because of the limitation of computation resources, as explained above, the tree is limited to alook-aheadof 4 moves. The algorithm evaluates eachleaf nodeusing a heuristic evaluation function, obtaining the values shown. The moves where themaximizing playerwins are assigned with positive infinity, while the moves that lead to a win of theminimizing playerare assigned with negative infinity. At level 3, the algorithm will choose, for each node, thesmallestof thechild nodevalues, and assign it to that same node (e.g. the node on the left will choose the minimum between "10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each node thelargestof thechild nodevalues. Once again, the values are assigned to eachparent node. The algorithm continues evaluating the maximum and minimum values of the child nodes alternately until it reaches theroot node, where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that the player should make in order tominimizethemaximumpossibleloss. Minimax theory has been extended to decisions where there is no other player, but where the consequences of decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost, which will be wasted if the minerals are not present, but will bring major rewards if they are. 
One approach is to treat this as a game against nature (see move by nature), and using a similar mindset as Murphy's law or resistentialism, take an approach which minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games. In addition, expectiminimax trees have been developed for two-player games in which chance (for example, dice) is a factor. In classical statistical decision theory, we have an estimator δ that is used to estimate a parameter θ ∈ Θ. We also assume a risk function R(θ, δ), usually specified as the integral of a loss function. In this framework, an estimator {\displaystyle {\tilde {\delta }}} is called minimax if it satisfies {\displaystyle \sup _{\theta }R(\theta ,{\tilde {\delta }})=\inf _{\delta }\sup _{\theta }R(\theta ,\delta ).} An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior distribution Π. An estimator is Bayes if it minimizes the average risk {\displaystyle \int _{\Theta }R(\theta ,\delta )\,d\Pi (\theta ).} A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of what the possible outcomes are. It is thus robust to changes in the assumptions, in contrast to these other decision techniques. Various extensions of this non-probabilistic approach exist, notably minimax regret and Info-gap decision theory. Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not interval measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled outcomes: the conclusion of a minimax analysis is "this strategy is minimax, as the worst case is (outcome), which is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form "this strategy yields ℰ(X) = n". Minimax thus can be used on ordinal data, and can be more transparent. The concept of "lesser evil" voting (LEV) can be seen as a form of the minimax strategy where voters, when faced with two or more candidates, choose the one they perceive as the least harmful or the "lesser evil". To do so, "voting should not be viewed as a form of personal self-expression or moral judgement directed in retaliation towards major party candidates who fail to reflect our values, or of a corrupt system designed to limit choices to those acceptable to corporate elites," but rather as an opportunity to reduce harm or loss.[7] In philosophy, the term "maximin" is often used in the context of John Rawls's A Theory of Justice, where he refers to it in the context of The Difference Principle.[8] Rawls defined this principle as the rule which states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the least-advantaged members of society".[9][10]
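The article refers to pseudocode for the depth-limited minimax algorithm; the following is a minimal Python rendering of that procedure. The game interface used here (is_terminal, evaluate, moves, result) is an assumed set of callbacks for illustration, not something defined in the article.

```python
import math

def minimax(state, depth, maximizing, game):
    """Depth-limited minimax. `game` supplies the game-specific callbacks:
    game.is_terminal(state), game.evaluate(state) -> heuristic score for the
    maximizing player, game.moves(state), and game.result(state, move)."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)                 # leaf: heuristic or terminal score
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            value = max(value, minimax(game.result(state, move), depth - 1, False, game))
        return value
    else:
        value = math.inf
        for move in game.moves(state):
            value = min(value, minimax(game.result(state, move), depth - 1, True, game))
        return value

def best_move(state, depth, game):
    """Move for the maximizing player that maximizes the minimax value."""
    return max(game.moves(state),
               key=lambda m: minimax(game.result(state, m), depth - 1, False, game))
```

The look-ahead is controlled by `depth`, measured in plies, exactly as described above.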
https://en.wikipedia.org/wiki/Minimax_algorithm
Alpha–beta pruningis asearch algorithmthat seeks to decrease the number of nodes that are evaluated by theminimax algorithmin itssearch tree. It is an adversarial search algorithm used commonly for machine playing of two-playercombinatorial games(Tic-tac-toe,Chess,Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.[1] John McCarthy during theDartmouth Workshopmet Alex Bernstein ofIBM, who was writing a chess program. McCarthy invented alpha–beta search and recommended it to him, but Bernstein was "unconvinced".[2] Allen NewellandHerbert A. Simonwho used whatJohn McCarthycalls an "approximation"[3]in 1958 wrote that alpha–beta "appears to have been reinvented a number of times".[4]Arthur Samuelhad an early version for a checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel Edwards also invented alpha–beta independently in theUnited States.[5]McCarthy proposed similar ideas during theDartmouth workshopin 1956 and suggested it to a group of his students includingAlan Kotokat MIT in 1961.[6]Alexander Brudnoindependently conceived the alpha–beta algorithm, publishing his results in 1963.[7]Donald Knuthand Ronald W. Moore refined the algorithm in 1975.[8][9]Judea Pearlproved its optimality in terms of the expected running time for trees with randomly assigned leaf values in two papers.[10][11]The optimality of the randomized version of alpha–beta was shown by Michael Saks and Avi Wigderson in 1986.[12] Agame treecan represent many two-playerzero-sum games, such as chess, checkers, and reversi. Each node in the tree represents a possible situation in the game. Each terminal node (outcome) of a branch is assigned a numeric score that determines the value of the outcome to the player with the next move.[13] The algorithm maintains two values, alpha and beta, which respectively represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of. Initially, alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. Whenever the maximum score that the minimizing player (i.e. the "beta" player) is assured of becomes less than the minimum score that the maximizing player (i.e., the "alpha" player) is assured of (i.e. beta < alpha), the maximizing player need not consider further descendants of this node, as they will never be reached in the actual play. To illustrate this with a real-life example, suppose somebody is playing chess, and it is their turn. Move "A" will improve the player's position. The player continues to look for moves to make sure a better one hasn't been missed. Move "B" is also a good move, but the player then realizes that it will allow the opponent to force checkmate in two moves. Thus, other outcomes from playing move B no longer need to be considered since the opponent can force a win. The maximum score that the opponent could force after move "B" is negative infinity: a loss for the player. This is less than the minimum position that was previously found; move "A" does not result in a forced loss in two moves. 
The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be eliminated.[13] This way, the search time can be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near optimal order (best choice for side on move ordered first at each node). With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b×1×b×1×...×b) for odd depth and O(b×1×b×1×...×1) for even depth, or {\displaystyle O(b^{d/2})=O({\sqrt {b^{d}}})}. In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.[14] The explanation of b×1×b×1×... is that all the first player's moves must be studied to find the best one, but for each, only the second player's best move is needed to refute all but the first (and best) first player move – alpha–beta ensures no other second player moves need be considered. When nodes are considered in a random order (i.e., the algorithm randomizes), asymptotically, the expected number of nodes evaluated in uniform trees with binary leaf-values is {\displaystyle \Theta (((b-1+{\sqrt {b^{2}+14b+1}})/4)^{d})}.[12] For the same trees, when the values are assigned to the leaf values independently of each other and say zero and one are both equally probable, the expected number of nodes evaluated is {\displaystyle \Theta ((b/2)^{d})}, which is much smaller than the work done by the randomized algorithm mentioned above, and is again optimal for such random trees.[10] When the leaf values are chosen independently of each other but from the [0,1] interval uniformly at random, the expected number of nodes evaluated increases to {\displaystyle \Theta (b^{d/\log(d)})} in the {\displaystyle d\to \infty } limit,[11] which is again optimal for this kind of random tree. Note that the actual work for "small" values of d is better approximated using {\displaystyle 0.925\,d^{0.747}}.[11][10] A chess program that searches four plies with an average of 36 branches per node evaluates more than one million terminal nodes. An optimal alpha–beta prune would eliminate all but about 2,000 terminal nodes, a reduction of 99.8%.[13] Normally during alpha–beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves.
An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as throughiterative deepening. Additionally, this algorithm can be trivially modified to return an entireprincipal variationin addition to the score. Some more aggressive algorithms such asMTD(f)do not easily permit such a modification. The pseudo-code for depth limited minimax with alpha–beta pruning is as follows:[15] Implementations of alpha–beta pruning can often be delineated by whether they are "fail-soft," or "fail-hard". The pseudo-code illustrates the fail-soft variation. With fail-soft alpha–beta, the alphabeta function may return values (v) that exceed (v < α or v > β) the α and β bounds set by its function call arguments. In comparison, fail-hard alpha–beta limits its function return value into the inclusive range of α and β. Further improvement can be achieved without sacrificing accuracy by using orderingheuristicsto search earlier parts of the tree that are likely to force alpha–beta cutoffs. For example, in chess, moves that capture pieces may be examined before moves that do not, and moves that have scored highly inearlier passesthrough the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is thekiller heuristic, where the last move that caused a beta-cutoff at the same tree level in the tree search is always examined first. This idea can also be generalized into a set ofrefutation tables. Alpha–beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as anaspiration window. In the extreme case, the search is performed with alpha and beta equal; a technique known aszero-window search,null-window search, orscout search. This is particularly useful for win/loss searches near the end of a game where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failedhigh(high edge of window was too low) orlow(lower edge of window was too high). This gives information about what window values might be useful in a re-search of the position. Over time, other improvements have been suggested, and indeed the Falphabeta (fail-soft alpha–beta) idea of John Fishburn is nearly universal and is already incorporated above in a slightly modified form. Fishburn also suggested a combination of the killer heuristic and zero-window search under the name Lalphabeta ("last move with minimal window alpha–beta search"). Since theminimaxalgorithm and its variants are inherentlydepth-first, a strategy such asiterative deepeningis usually used in conjunction with alpha–beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints, as well as shallow alpha and beta estimates, that both can help produce cutoffs for higher depth searches much earlier than would otherwise be possible. Algorithms likeSSS*, on the other hand, use thebest-firststrategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency.[16]
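The fail-soft, depth-limited pseudocode referred to above can be rendered in Python roughly as follows, using the same assumed game interface (is_terminal, evaluate, moves, result) as the minimax sketch earlier; it is a sketch, not a tuned implementation.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Fail-soft alpha-beta: the returned value may lie outside [alpha, beta]."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            value = max(value, alphabeta(game.result(state, move),
                                         depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if beta <= alpha:          # beta cutoff: the minimizer will avoid this node
                break
        return value
    else:
        value = math.inf
        for move in game.moves(state):
            value = min(value, alphabeta(game.result(state, move),
                                         depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cutoff: the maximizer will avoid this node
                break
        return value

# Initial call for the player to move (the maximizing player):
# alphabeta(root_state, depth, -math.inf, math.inf, True, game)
```

Clamping the returned value into [alpha, beta] before returning would give the fail-hard variant described above.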
https://en.wikipedia.org/wiki/Alpha-beta_pruning
Zobrist hashing(also referred to asZobrist keysorZobrist signatures[1]) is ahash functionconstruction used incomputer programsthat playabstract board games, such aschessandGo, to implementtransposition tables, a special kind ofhash tablethat is indexed by a board position and used to avoid analyzing the same position more than once. Zobrist hashing is named for its inventor,Albert Lindsey Zobrist.[2]It has also been applied as a method for recognizing substitutional alloy configurations in simulations of crystalline materials.[3]Zobrist hashing is the first known instance of the generally useful underlying technique calledtabulation hashing. Zobrist hashing starts byrandomly generatingbitstringsfor each possible element of a board game, i.e. for each combination of a piece and a position (in the game of chess, that's 6 pieces × 2 colors × 64 board positions, with a constant number of additional bitstrings forcastling rights, pawns that may captureen passant, and which player moves next).[4]Now any board configuration can be broken up into independent piece/position components, which are mapped to the random bitstrings generated earlier. The final Zobrist hash is computed by combining those bitstrings using bitwiseXOR. Example pseudocode for the game of chess:[citation needed] If the bitstrings are long enough, different board positions will almost certainly hash to different values; however longer bitstrings require proportionally more computer resources to manipulate. The most commonly used bitstring (key) length is 64 bits.[1]Many game engines store only the hash values in the transposition table, omitting the position information itself entirely to reduce memory usage, and assuming thathash collisionswill not occur, or will not greatly influence the results of the table if they do. Zobrist hashing is the first known instance oftabulation hashing. The result is a3-wise independent hash family. In particular, it is stronglyuniversal. As an example, inchess, at any one time each of the 64 squares can at any time be empty, or contain one of the 6 game pieces, which are either black or white. Also, it can be either black's turn to play or white's turn to play. Thus one needs to generate 6 x 2 x 64 + 1 = 769 random bitstrings. Given a position, one obtains its Zobrist hash by finding out which pieces are on which squares, and combining the relevant bitstrings together. If the position is black to move, the black-to-move bitstring is also included in the Zobrist hash.[1] Rather than computing the hash for the entire board every time, as the pseudocode above does, the hash value of a board can be incrementally updated simply by XORing out the bitstring(s) for positions that have changed, and XORing in the bitstrings for the new positions.[1]For instance, if a pawn on a chessboard square is replaced by arookfrom another square, the resulting position would be produced by XORing the existing hash with the bitstrings for: This makes Zobrist hashing very efficient for traversing agame tree. Incomputer Go, this technique is also used forsuperkodetection. More generically, Zobrist hashing can be applied over finitesetsof elements (in the chess example, these elements are(piece,position){\displaystyle (piece,position)}tuples), as long as a random bitstring can be assigned to each possible element. This can be either done with arandom number generatorfor smaller element spaces, or with ahash functionfor larger ones. 
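The chess pseudocode referred to above can be sketched in Python as follows: a table of 64-bit random bitstrings for the 12 piece types on 64 squares plus a side-to-move key (the article's 6 × 2 × 64 + 1 = 769 bitstrings), a full hash, and an incremental XOR update. Castling and en-passant keys are omitted for brevity, and the toy position is made up.

```python
import random

random.seed(2024)                       # fixed seed so the keys are reproducible
PIECES = "PNBRQKpnbrqk"                 # 6 white + 6 black piece types
ZOBRIST = {(p, sq): random.getrandbits(64) for p in PIECES for sq in range(64)}
BLACK_TO_MOVE = random.getrandbits(64)

def zobrist_hash(board, black_to_move):
    """board: dict mapping square index (0-63) -> piece letter."""
    h = BLACK_TO_MOVE if black_to_move else 0
    for square, piece in board.items():
        h ^= ZOBRIST[(piece, square)]
    return h

def update_after_move(h, piece, from_sq, to_sq, captured=None):
    """Incremental update: XOR out what changed, XOR in the new placement."""
    h ^= ZOBRIST[(piece, from_sq)]          # piece leaves its old square
    if captured is not None:
        h ^= ZOBRIST[(captured, to_sq)]     # captured piece disappears
    h ^= ZOBRIST[(piece, to_sq)]            # piece appears on the new square
    h ^= BLACK_TO_MOVE                      # side to move flips
    return h

start = {0: "R", 4: "K", 60: "k"}           # a toy position
h0 = zobrist_hash(start, black_to_move=False)
h1 = update_after_move(h0, "R", 0, 60, captured="k")
```

The incremental update touches only the bitstrings that changed, which is what makes the scheme cheap to maintain while traversing a game tree.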
This method has been used to recognizesubstitutional alloyconfigurations duringMonte Carlo simulationsin order to prevent wasting computational effort on states that have already been calculated.[3]
https://en.wikipedia.org/wiki/Zobrist_hashing
Incryptography, theGeneralized DES Scheme(GDESorG-DES) is a variant of theDESsymmetric-keyblock cipherdesigned with the intention of speeding up theencryptionprocess while improving its security. The scheme was proposed by Ingrid Schaumuller-Bichl in 1981. In 1990,Eli BihamandAdi Shamirshowed that GDES was vulnerable todifferential cryptanalysis, and that any GDES variant faster than DES is also less secure than DES. GDES generalizes theFeistel networkstructure of DES to largerblock sizes. In each round, the DES round function is applied to the rightmost 32-bit subblock, and the result isXORedwith all the other parts. Then the block is rotated 32 bits to the right.
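A toy sketch of the round structure described above: the round function is applied to the rightmost 32-bit subblock, its output is XORed into all the other subblocks, and the block is then rotated 32 bits to the right. The mixing function toy_f, the sample values, and the round keys are placeholders, not the real DES f-function or key schedule.

```python
def gdes_round(subblocks, round_key, f):
    """One GDES-style round on a list of 32-bit subblocks: apply f to the rightmost
    subblock, XOR its output into every other subblock, then rotate the whole block
    32 bits (one subblock) to the right."""
    right = subblocks[-1]
    mixed = f(right, round_key)
    others = [b ^ mixed for b in subblocks[:-1]]
    return [right] + others               # rotation: the old rightmost part moves to the front

def toy_f(x, k):
    """Placeholder 32-bit mixing function, standing in for the DES round function."""
    x = (x ^ k) & 0xFFFFFFFF
    return ((x * 0x9E3779B1) ^ (x >> 15)) & 0xFFFFFFFF

block = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE]   # q = 4 subblocks (128-bit block)
for round_key in (0x0F0F0F0F, 0x33333333, 0x55555555):     # made-up round keys
    block = gdes_round(block, round_key, toy_f)
```

With q = 2 subblocks this degenerates to an ordinary Feistel round, which is the sense in which GDES generalizes the DES structure to larger block sizes.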
https://en.wikipedia.org/wiki/G-DES
Thexor–encrypt–xor(XEX) is a (tweakable)mode of operation of a block cipher. In tweaked-codebook mode withciphertext stealing(XTS mode), it is one of the more popular modes of operation forwhole-disk encryption. XEX is also a common form ofkey whitening, and part of somesmart cardproposals.[1][2] In 1984, to protect DES against exhaustive search attacks,Ron RivestproposedDESX: XOR a pre-whiteningkey to the plaintext, encrypt the result with DES using a secret key, and then XOR a postwhitening key to the encrypted result to produce the final ciphertext.[3] In 1991, motivated by Rivest's DESX construction, Even and Mansour proposed a much simpler scheme (the "two-key Even–Mansour scheme"), which they suggested was perhaps the simplest possible block cipher: XOR the plaintext with a prewhitening key, apply a publicly known unkeyed permutation (in practice, apseudorandom permutation) to the result, and then XOR a postwhitening key to the permuted result to produce the final ciphertext.[3][4] Studying simple Even–Mansour style block ciphers gives insight into the security ofFeistel ciphers(DES-like ciphers) and helps understandblock cipherdesign in general.[5] Orr Dunkelman, Nathan Keller, and Adi Shamir later proved it was possible to simplify the Even–Mansour scheme even further and still retain the same provable security, producing the "single-key Even–Mansour scheme": XOR the plaintext with the key, apply a publicly known unkeyed permutation to the result, and then XOR the same key to the permuted result to produce the final ciphertext.[3][6] In 2004, Rogaway presented the XEX scheme with key and location-dependent "tweaks":[7] Rogaway used XEX to allow efficient processing of consecutive blocks (with respect to the cipher used) within one data unit (e.g., a disk sector) forwhole-disk encryption.[7] Many whole-disk encryption systems –BestCrypt,dm-crypt,FreeOTFE,TrueCrypt,DiskCryptor, FreeBSD'sgeli, OpenBSD softraid disk encryption software, and Mac OS X Lion'sFileVault2 – support XEX-based tweaked-codebook mode with ciphertext stealing (XTS mode). This cryptography-related article is astub. You can help Wikipedia byexpanding it.
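The xor-encrypt-xor pattern (shown here in its two-key Even–Mansour form) can be illustrated with a toy 8-byte permutation; the permutation and key handling below are stand-ins for demonstration only and provide no real security.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def public_permutation(block):
    """Stand-in for a publicly known, unkeyed permutation of an 8-byte block.
    (A real construction would use a cryptographic permutation or block cipher.)"""
    rotated = block[3:] + block[:3]                        # rotate the bytes
    return bytes((b * 167 + i * 29) % 256 for i, b in enumerate(rotated))

def xex_encrypt(block, k1, k2):
    """XOR a pre-whitening key, apply the public permutation, XOR a post-whitening key."""
    return xor_bytes(public_permutation(xor_bytes(block, k1)), k2)

k1, k2 = os.urandom(8), os.urandom(8)      # illustrative whitening keys
ciphertext = xex_encrypt(b"8byteblk", k1, k2)
```

Setting k1 = k2 gives the single-key Even–Mansour variant mentioned above; DESX follows the same pattern with DES as the keyed middle layer.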
https://en.wikipedia.org/wiki/Xor%E2%80%93encrypt%E2%80%93xor
Horst Feistel(January 30, 1915[1]– November 14, 1990) was a German-Americancryptographerwho worked on the design ofciphersatIBM, initiating research that culminated in the development of theData Encryption Standard(DES) in the 1970s. The structure used in DES, called aFeistel network, is commonly used in manyblock ciphers.[2][3][4] Feistel was born inBerlin,Germanyin 1915, and moved to theUnited Statesin 1934. DuringWorld War II, he was placed under house arrest, but gained US citizenship on 31 January 1944. The following day he was granted a security clearance and began work for the USAir Force Cambridge Research Center(AFCRC) onIdentification Friend or Foe(IFF) devices until the 1950s. He was subsequently employed atMIT'sLincoln Laboratory, then theMITREcorporation. In 1968, Feistel became a Research Staff Member at theIBMT.J Watson Center.[5]During his time there he received an award for his cryptographic work. In 1971, he patented theblock ciphercryptographic systemat IBM.[5]His research atIBMled to the development of theLuciferandData Encryption Standard(DES) ciphers. Feistel was one of the earliest non-government researchers to study the design and theory ofblock ciphers. Feistel lent his name to theFeistel networkconstruction, a common method for constructing block ciphers (for example DES). Feistel obtained abachelor's degreeatMIT, and hismaster'satHarvard, both inphysics. He married Leona (Gage) in 1945, with whom he had a daughter, Peggy.
https://en.wikipedia.org/wiki/Horst_Feistel
Identity-based cryptographyis a type ofpublic-key cryptographyin which a publicly known string representing an individual or organization is used as apublic key. The public string could include an email address, domain name, or a physical IP address. The first implementation of identity-based signatures and an email-address basedpublic-key infrastructure(PKI) was developed byAdi Shamirin 1984,[1]which allowed users to verifydigital signaturesusing only public information such as the user's identifier. Under Shamir's scheme, a trusted third party would deliver the private key to the user after verification of the user's identity, with verification essentially the same as that required for issuing acertificatein a typical PKI. Shamir similarly proposedidentity-based encryption, which appeared particularly attractive since there was no need to acquire an identity's public key prior to encryption. However, he was unable to come up with a concrete solution, and identity-based encryption remained an open problem for many years. The first practical implementations were finally devised by Sakai in 2000,[2]and Boneh and Franklin in 2001.[3]These solutions were based onbilinear pairings. Also in 2001, a solution was developed independently byClifford Cocks.[4][5] Closely related to various identity-based encryption schemes are identity based key agreement schemes. One of the first identity based key agreement algorithms was published in 1986, just two years after Shamir's identity based signature. The author was E. Okamoto.[6]Identity based key agreement schemes also allow for "escrow free" identity based cryptography. A notable example of such an escrow free identity based key agreement is the McCullagh-Barreto's "Authenticated Key Agreement without Escrow" found in section 4 of their 2004 paper, "A New Two-Party Identity-Based Authenticated Key Agreement".[7]A variant of this escrow free key exchange is standardized as the identity based key agreement in the Chinese identity based standardSM9. Identity-based systems allow any party to generate a public key from a known identity value, such as an ASCII string. A trusted third party, called the private key generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key, and retains the correspondingmaster private key(referred to asmaster key). Given the master public key, any party can compute a public key corresponding to the identityIDby combining the master public key with the identity value. To obtain a corresponding private key, the party authorized to use the identityIDcontacts the PKG, which uses the master private key to generate the private key for the identityID. Identity-based systems have a characteristic problem in operation. Suppose Alice and Bob are users of such a system. Since the information needed to find Alice's public key is completely determined by Alice's ID and the master public key, it is not possible to revoke Alice's credentials and issue new credentials without either (a) changing Alice's ID (usually a phone number or an email address which will appear in a corporate directory); or (b) changing the master public key and re-issuing private keys to all users, including Bob.[8] This limitation may be overcome by including a time component (e.g. the current month) in the identity.[8]
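The issuance workflow described above (a PKG holding a master secret and handing out per-identity private keys after verifying the claimed identity) can be sketched as below. This illustrates only the trust model and data flow; it is not an identity-based encryption scheme, which requires pairing-based or comparable mathematics, and the HMAC-based derivation is purely a placeholder.

```python
import hashlib
import hmac

class PrivateKeyGenerator:
    """Toy PKG: holds a master secret and issues per-identity secrets on request.

    Illustrates only the issuance workflow; real identity-based encryption derives
    keys with pairing-based mathematics, not an HMAC."""

    def __init__(self, master_secret: bytes):
        self._master_secret = master_secret            # kept secret by the PKG

    def extract(self, identity: str) -> bytes:
        # Returned to the user only after the PKG has verified the claimed identity.
        return hmac.new(self._master_secret, identity.encode(), hashlib.sha256).digest()

pkg = PrivateKeyGenerator(master_secret=b"example master secret (illustrative)")
alice_private = pkg.extract("alice@example.com")   # the e-mail address is the public identity
```

The revocation problem noted above is visible even in this toy: as long as the master secret and the identity string stay the same, the derived key cannot change.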
https://en.wikipedia.org/wiki/Identity-based_cryptography
Identity-based conditional proxy re-encryption(IBCPRE) is a type ofproxy re-encryption(PRE) scheme in theidentity-based public key cryptographic setting.[1]An IBCPRE scheme is a natural extension of proxy re-encryption on two aspects. The first aspect is to extend the proxy re-encryption notion to the identity-based public key cryptographic setting. The second aspect is to extend the feature set of proxy re-encryption to support conditional proxy re-encryption. By conditional proxy re-encryption, a proxy can use an IBCPRE scheme to re-encrypt aciphertextbut the ciphertext would only be well-formed for decryption if a condition applied onto the ciphertext together with the re-encryption key is satisfied. This allows fine-grained proxy re-encryption and can be useful for applications such as secure sharing over encrypted cloud data storage. Apublic-key encryptionscheme allows anyone who has the public key of a receiver to encrypt messages to the receiver using the public key in such a way that only the corresponding private key known only to the receiver can decrypt and recover the messages. The public key of a user, therefore, can be published for allowing everyone to use it for encrypting messages to the user while the private key of the user has to be kept secret for the decryption purpose. Both the public key and the corresponding private key of the user are generated by the user in general.[2] Under the identity-based cryptographic setting, the public key of the user can be an arbitrary string of bits, provided that the string can uniquely identify the user in the system. The unique string, for example, can be anemail address, a phone number, and a staff ID (if used only internally within an organization). However, the corresponding private key is no longer generated by the user. From the public key, which is a unique binary string, there is a key generation center (KGC), which generates and issues the private key to the user. The KGC has a public key, which is assumed to be publicly known, and the encryption and decryption then work under the unique binary string defined public key and the corresponding private key, respectively, with respect to the KGC’s public key. Proxy re-encryption allows a ciphertext, which originally can only be decrypted by a user, to be transformed by a public entity, called proxy, to another ciphertext so that another user can also decrypt. Suppose the two users areAlice and Bob. Alice has some messages:M1,M2, …Mn. She intends to encrypt them under her public key, and then upload the encrypted messages to some server. Now when Alice wants to share these n encrypted messages with Bob, Alice can use a proxy re-encryption scheme to allow the server to re-encrypt these n encrypted messages so that Bob can decrypt these re-encrypted messages directly using his own private key. To do so in the proxy re-encryption scheme, Alice uses her private key and the public key of Bob to generate a re-encryption key. Alice then sends the re-encryption key to the server. Upon receiving this re-encryption key, the server uses the key to transform all the n encrypted messagesC1,C2, …,Cnto a new form denoted asD1,D2, …,Dn. Bob can then downloadD1,D2, …,Dn, decrypt them, and recover the messagesM1,M2, …Mnusing his private key. In an identity-based conditional proxy re-encryption (IBCPRE) system, users set their public keys as unique identities of the users. 
One of the main advantages of using identity-based cryptographic algorithms is the elimination ofpublic key certificates, which can help enhance the usability of the target security applications. The term ‘Conditional’ in IBCPRE refers to an additional feature, which allows each encrypted message to have a ‘tag’ associated with. In addition to the tag, each re-encryption key also has a ‘tag’ attached. The IBCPRE is designed so that only if the tag of an encrypted message matches with the tag of a re-encryption key can the encrypted message be re-encrypted. One of the key features of IBCPRE is that when a data owner encrypts messages, the encryption is done for themselves and only they themselves can decrypt the encrypted messages using their secret key. There is no need for them to know in advance about who that they would like to share the encrypted messages with. In other words, picking the friends to share with by them can be done after they encrypt the messages and uploads them to the server. Another feature of IBCPRE is that it supportsend-to-end encryption. The server which stores the encrypted messages cannot decrypt the messages both before and after the re-encryption. IBCPRE supports one-to-many encryption. The data owner can choose multiple friends to share their data with. For multiple friends to share the encrypted messages with, the owner simply needs to generate a re-encryption key for each of their friends and send all the re-encryption keys to the server for carrying out the re-encryption. The number of re-encryption keys that they need to generate depends on the number of friends that they want to share the encrypted messages with. It does not depend on the number of encrypted messages. One re-encryption key will allow the server to convert all the encrypted messages, provided the tag of the encrypted messages and the tag of the re-encryption key matches. The conditional ‘tag’ of the IBCPRE facilitates the fine-grained access of encrypted messages. By setting different tag values onto different encrypted messages, the data owner can control the exact set of encrypted messages that they want to share with any particular friends of theirs, with great flexibility. Consider a user Alice who encrypts some messages M1, M2, …, Mtwith a tag ‘Private’, Mt+1, Mt+2, …, Mmwith a tag ‘toShareWithFamily’, Mm+1, Mm+2, …, Mnwith a tag ‘toShareWithFriend’, using IBCPRE under her unique identity, which is considered as the public key of Alice. Alice then uploads the corresponding encrypted messages C1, C2, …, Ct, Ct+1, …, Cm, Cm+1, …, Cnto a server. When Alice is about to share Mm+1, Mm+2, …, Mnwith another user Bob, who becomes her friend recently, Alice generates a re-encryption key using IBCPRE with an associated tag ‘toShareWithFriend’. This generation is done by taking as input Alice’s private key and Bob’s identity. Then Alice sends the re-encryption key to the server. By using the re-encryption key, the server runs the IBCPRE re-encryption function on Cm+1, Cm+2, …, Cnfor transforming them into another form, Dm+1, Dm+2, …, Dnso that Bob can decrypt them directly using his private key. This transformation can be done as the tag associated with the encrypted messages, namely ‘toShareWithFriend’, matches with the tag associated with the re-encryption key. 
Note that the server cannot transform C1, C2, …, Ct, Ct+1, …, Cmto another form for Bob to decrypt using the re-encryption key because the tag of these m encrypted messages, namely ‘Private’ or 'toShareWithFamily', does not match with the tag of the re-encryption key. Also note that the server cannot retrieve any of the messages at any time. IBCPRE has been used for secure cloud data sharing and relatedkey managementsolutions in products ofAtCipher Limited. A related concept to proxy re-encryption called decrypt right delegation was introduced by Mambo and Okamoto[3]in 1997. Then in 1998, Blaze, Bleumer and Strauss[4]formalized the notion of proxy re-encryption by giving a definition to the set of algorithms of a proxy re-encryption scheme. The authors also proposed a scheme for achievingchosen-plaintext security (CPA-security). Later on, various PRE schemes have been proposed.[5][6][7][8][9][10][11][12] In 2007, Green and Ateniese[13]and Ivan and Dodis[9]independently proposed several proxy re-encryption schemes in the identity-based cryptographic setting. This type of scheme is usually called identity-based proxy re-encryption (IBPRE). The schemes are unidirectional, namely, the re-encryption key is for one party to re-encrypt cipher-texts to another party, but not vice versa. A new re-encryption key has to be generated for the other direction of re-encryption. In terms of security, the security analyses of the schemes have been done in therandom oracle model. One isCPA-secure, multi-hop and the other ischosen-ciphertext-attack-secure (CCA-secure), single-hop. The schemes, however, are notcollusion resistant. This means that if a proxy colludes with the corresponding delegatee, the private key of the delegator will be compromised. CPA-secure IBPRE schemes secure without random oracles were subsequently proposed by Matsuo[14]and Mizuno and Doi.[15] Type-based PRE[16]and conditional PRE (CPRE)[17]are designed to ensure that the proxy can re-encrypt a ciphertext tagged with a specific condition only if the re-encryption key given by the delegator is tagged with the same condition. Two identity-based CPRE (IBCPRE) schemes were proposed to achieve conditional control in both re-encryption and identity-based re-encryption by Liang et al.,[18]and achieved CCA security in thestandard model, and the other by Shao et al.[19]and achieved CCA security in the random oracle model.
https://en.wikipedia.org/wiki/Identity-based_conditional_proxy_re-encryption
Attribute-based encryption is a generalisation of public-key encryption which enables fine-grained access control of encrypted data using authorisation policies. The secret key of a user and the ciphertext are dependent upon attributes (e.g. their email address, the country in which they live, or the kind of subscription they have). In such a system, the decryption of a ciphertext is possible only if the set of attributes of the user key matches the attributes of the ciphertext.[1] A crucial security aspect of attribute-based encryption is collusion-resistance: an adversary that holds multiple keys should only be able to access data if at least one individual key grants access. Attribute-based encryption is provably[2] a generalisation of identity-based encryption. Identity-based encryption was first proposed in 1984 by Adi Shamir,[3] without a specific solution or proof. In 2004 Amit Sahai and Brent Waters[4] published a solution, improved in 2006 by Vipul Goyal, Omkant Pandey, Amit Sahai and Brent Waters.[5] Melissa Chase and other researchers have further proposed attribute-based encryption with multiple authorities who jointly generate users' private keys.[6][7][8][9][10][11] There are mainly two types of attribute-based encryption schemes: key-policy attribute-based encryption (KP-ABE)[5] and ciphertext-policy attribute-based encryption (CP-ABE).[12] In KP-ABE, users' secret keys are generated based on an access tree that defines the privilege scope of the concerned user, and data are encrypted over a set of attributes. CP-ABE, by contrast, uses access trees to encrypt data, and users' secret keys are generated over a set of attributes. The related concept of role-based encryption[13] refers exclusively to access keys having roles that can be validated against an authoritative store of roles. In this sense, role-based encryption can be expressed by attribute-based encryption, and within that limited context the two terms can be used interchangeably; role-based encryption cannot express attribute-based encryption. Attribute-based encryption (ABE) can be used for log encryption.[14] Instead of encrypting each part of a log with the keys of all recipients, it is possible to encrypt the log only with attributes which match recipients' attributes. This primitive can also be used for broadcast encryption in order to decrease the number of keys used.[15] Attribute-based encryption methods are also widely employed in vector-driven search engine interfaces.[16] Although the ABE concept is very powerful and a promising mechanism, ABE systems suffer mainly from two drawbacks: inefficiency and the lack of a straightforward attribute revocation mechanism. Revocation of users in cryptosystems is a well-studied but nontrivial problem. Revocation is even more challenging in attribute-based systems, given that each attribute possibly belongs to multiple different users, whereas in traditional PKI systems public/private key pairs are uniquely associated with a single user. In principle, in an ABE system, attributes, not users or keys, are revoked. The following paragraph discusses how the revocation feature can be incorporated. A simple but constrained solution is to include a time attribute. This solution would require each message to be encrypted with a modified access tree T0, which is constructed by augmenting the original access tree T with an additional time attribute. The time attribute, ζ, represents the current 'time period'. Formally, the new access structure T0 requires the time attribute to be satisfied in addition to the original policy, i.e. T0 = T ∧ ζ.
For example, ζ can be the 'date' attribute whose value changes once every day. It is assumed that each non-revoked user receives his fresh private keys corresponding to the 'date' attribute once each day, directly from the mobile key server MKS (which is the central authority) or via the regional delegates. With a hierarchical access structure, the key delegation property of CP-ABE can be exploited to reduce the dependency on the central authority for issuing the new private keys to all users every time interval. There are significant trade-offs between the extra load incurred by the authority for generating and communicating the new keys to the users and the amount of time that can elapse before a revoked user can be effectively purged. This solution nevertheless has problems of its own. A manuscript of Ari Juels and Michael Szydlo[17] dated 2004 proposed a different, non-collusion-resistant notion of attribute-based encryption.
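The following minimal sketch illustrates only the access-structure logic behind the time-attribute revocation idea above, i.e. the augmented policy T0 = T ∧ ζ. It is not a real ABE implementation (there is no pairing-based cryptography here), and all policy and attribute names are illustrative.

```python
from datetime import date

def satisfies(policy, attributes):
    """Evaluate a tiny AND/OR policy tree against a set of attributes."""
    op, arg = policy
    if op == "ATTR":
        return arg in attributes
    results = [satisfies(child, attributes) for child in arg]
    return all(results) if op == "AND" else any(results)

# Original policy T: (doctor AND cardiology) OR auditor
T = ("OR", [("AND", [("ATTR", "doctor"), ("ATTR", "cardiology")]),
            ("ATTR", "auditor")])

# Augmented policy T0 = T AND <current date>: only freshly issued keys can decrypt.
today = f"date={date.today().isoformat()}"
T0 = ("AND", [T, ("ATTR", today)])

fresh_user   = {"doctor", "cardiology", today}               # received today's key material
revoked_user = {"doctor", "cardiology", "date=2020-01-01"}   # never re-issued

print(satisfies(T0, fresh_user))    # True
print(satisfies(T0, revoked_user))  # False: the stale time attribute blocks access
```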
https://en.wikipedia.org/wiki/Attribute-based_encryption
A key in cryptography is a piece of information, usually a string of numbers or letters stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, the key can be of different sizes and varieties, but in all cases the strength of the encryption relies on the security of the key being maintained. A key's security strength depends on its algorithm, the size of the key, the generation of the key, and the process of key exchange. The key is what is used to encrypt data from plaintext to ciphertext.[1] There are different methods for utilizing keys and encryption. Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption.[2] Asymmetric cryptography has separate keys for encrypting and decrypting.[3][4] These keys are known as the public and private keys, respectively.[5] Since the key protects the confidentiality and integrity of the system, it must be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, it is important to maintain the confidentiality of the key. Kerckhoffs's principle states that the entire security of the cryptographic system should rely on the secrecy of the key.[6] Key size is the number of bits in the key defined by the algorithm. This size defines the upper bound of the cryptographic algorithm's security.[7] The larger the key size, the longer it will take before the key is compromised by a brute-force attack. Since perfect secrecy is not feasible for key algorithms, research is now focused on computational security. In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were being broken ever more quickly. As a response, minimum sizes for symmetric keys were increased. Currently, 2048-bit RSA[8] is commonly used, which is sufficient for current systems. However, current RSA key sizes would all be cracked quickly with a powerful quantum computer.[9] "The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher."[10] To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can be generated directly by using the output of a random bit generator (RBG), a system that generates a sequence of unpredictable and unbiased bits.[11] An RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be created indirectly during a key-agreement transaction, from another key, or from a password.[12] Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness. The security of a key is dependent on how it is exchanged between parties.
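As a minimal sketch of the key-generation idea above, the snippet below draws a symmetric key directly from the operating system's cryptographically secure random number generator, which plays the role of the random bit generator (RBG); the 256-bit length is an illustrative choice.

```python
import secrets

# 32 random bytes = a 256-bit symmetric key drawn from the OS CSPRNG (the RBG).
key = secrets.token_bytes(32)
print(key.hex())

# Key size bounds brute-force effort: a 256-bit key has 2**256 possible values.
print(2**256)
```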
Establishing a secured communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities. Key agreement and key transport are the two types of key exchange scheme used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up to be sent indirectly: all parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material that is chosen by the sender is transported to the receiver. Either symmetric key or asymmetric key techniques can be used in both schemes.[12] The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms.[13] In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. On the other hand, RSA is a form of the asymmetric key system which consists of three steps: key generation, encryption, and decryption.[13] Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends that key confirmation be integrated into a key establishment scheme to validate its implementations.[12] Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically includes three steps of establishing, storing and using keys. The base of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols.[14] A password is a memorized series of characters including letters, digits, and other special symbols that is used to verify identity. It is often produced by a human user or password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words.[12] On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm which is difficult to guess, or replace the password altogether. A key is generated based on random or pseudo-random data and can often be unreadable to humans.[15] A password is less safe than a cryptographic key due to its low entropy, randomness, and human-readable properties. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications, such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password's weakness. Various methods such as adding a salt or key stretching may be used in the generation.[12]
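A minimal sketch of deriving keying material from a password with a salted, iterated KDF, using PBKDF2 from Python's standard library; the password, iteration count and output length are illustrative assumptions rather than recommendations.

```python
import hashlib
import secrets

password = b"correct horse battery staple"   # illustrative low-entropy secret

# A random salt ensures two users with the same password derive different keys
# and defeats precomputed-table attacks.
salt = secrets.token_bytes(16)

# PBKDF2-HMAC-SHA256 stretches the password into 256-bit keying material;
# the iteration count (600,000 here) is only an example of key stretching.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(key.hex())
```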
https://en.wikipedia.org/wiki/Key_(cryptography)
In cryptography, a key encapsulation mechanism (KEM) is a public-key cryptosystem that allows a sender to generate a short secret key and transmit it to a receiver securely, in spite of eavesdropping and intercepting adversaries.[1][2][3] Modern standards for public-key encryption of arbitrary messages are usually based on KEMs.[4][5] A KEM allows a sender who knows a public key to simultaneously generate a short random secret key and an encapsulation or ciphertext of the secret key by the KEM's encapsulation algorithm. The receiver who knows the private key corresponding to the public key can recover the same random secret key from the encapsulation by the KEM's decapsulation algorithm.[1][2][3] The security goal of a KEM is to prevent anyone who does not know the private key from recovering any information about the encapsulated secret keys, even after eavesdropping or submitting other encapsulations to the receiver to study how the receiver reacts.[1][2][3] The difference between a public-key encryption scheme and a KEM is that a public-key encryption scheme allows a sender to choose an arbitrary message from some space of possible messages, while a KEM chooses a short secret key at random for the sender.[1][2][3] The sender may take the random secret key produced by a KEM and use it as a symmetric key for an authenticated cipher whose ciphertext is sent alongside the encapsulation to the receiver. This serves to compose a public-key encryption scheme out of a KEM and a symmetric-key authenticated cipher in a hybrid cryptosystem.[1][2][3][5] Most public-key encryption schemes such as RSAES-PKCS1-v1_5, RSAES-OAEP, and Elgamal encryption are limited to small messages[6][7] and are almost always used to encrypt a short random secret key in a hybrid cryptosystem anyway.[8][9][5] And although a public-key encryption scheme can conversely be converted to a KEM by choosing a random secret key and encrypting it as a message, it is easier to design and analyze a secure KEM than to design a secure public-key encryption scheme as a basis. So most modern public-key encryption schemes are based on KEMs rather than the other way around.[10][5] A KEM consists of three algorithms:[1][2][3][11][12] a key-generation algorithm Gen, which returns a public/private key pair (pk, sk); an encapsulation algorithm Encap, which takes pk and returns a secret key together with its encapsulation, (k, c) := Encap(pk); and a decapsulation algorithm Decap, which takes sk and an encapsulation c and returns a secret key Decap(sk, c). A KEM is correct if, for any key pair (pk, sk) generated by Gen, decapsulating an encapsulation c returned by (k, c) := Encap(pk) with high probability yields the same key k, that is, Decap(sk, c) = k.[2][3][11][12] Security of a KEM is quantified by its indistinguishability against chosen-ciphertext attack, IND-CCA, which is loosely how much better an adversary can do than a coin toss to tell whether, given a random key and an encapsulation, the key is encapsulated by that encapsulation or is an independent random key.[2][3][11][12] In the IND-CCA game, the challenger generates a key pair, gives the public key and access to a decapsulation oracle to the adversary, computes (k0, c) := Encap(pk) together with an independent uniformly random key k1, and hands the adversary (kb, c) for a secret fair coin toss b; the adversary may query the oracle on any encapsulation other than c and finally outputs a guess b′. The IND-CCA advantage of the adversary is |Pr[b′ = b] − 1/2|, that is, the probability beyond a fair coin toss at correctly distinguishing an encapsulated key from an independently randomly chosen key. Traditional RSA encryption, with t-bit moduli and public exponent e, is defined roughly as follows:[13][14][15] the key generator picks two large primes p and q with n := pq of t bits, and a private exponent d := e^{−1} mod lcm(p − 1, q − 1); encryption of a message m, encoded as an integer in Z/nZ, is the ciphertext c := m^e mod n; and decryption recovers m := c^d mod n. This naive approach is totally insecure.
For example, since it is nonrandomized, it cannot be secure against even known-plaintext attack—an adversary can tell whether the sender is sending the message ATTACK AT DAWN versus the message ATTACK AT DUSK simply by encrypting those messages and comparing the ciphertext. Even if m is always a random secret key, such as a 256-bit AES key, when e is chosen to optimize efficiency as e = 3, the message m can be computed from the ciphertext c simply by taking real-number cube roots, and there are many other attacks against plain RSA.[13][14] Various randomized padding schemes have been devised in attempts—sometimes failed, like RSAES-PKCS1-v1_5[13][17][18]—to make it secure for arbitrary short messages m.[13][14] Since the message m is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach called RSA-KEM is to choose an element of Z/nZ at random and use that to derive a secret key using a key derivation function H, roughly as follows:[19][8] the sender picks a uniformly random integer m in Z/nZ, transmits the encapsulation c := m^e mod n, and uses the derived key k := H(m); the receiver recovers m := c^d mod n and recomputes the same key k := H(m). This approach is simpler to implement, and provides a tighter reduction to the RSA problem, than padding schemes like RSAES-OAEP.[19] Traditional Elgamal encryption is defined over a multiplicative subgroup of the finite field Z/pZ with generator g of order q, roughly as follows:[20][21] the receiver's private key is a random exponent x with public key y := g^x; to encrypt a message m, the sender picks a random exponent r, computes the shared element t := y^r, and transmits the ciphertext (c1, c2) := (g^r, t·m); the receiver recomputes t := c1^x and recovers m := c2·t^{−1}. This meets the syntax of a public-key encryption scheme, restricted to messages in the space Z/pZ (which limits it to messages of a few hundred bytes for typical values of p). By validating ciphertexts in decryption, it avoids leaking bits of the private key x through maliciously chosen ciphertexts outside the group generated by g. However, this fails to achieve indistinguishability against chosen-ciphertext attack. For example, an adversary having a ciphertext c = (c1, c2) for an unknown message m can trivially decrypt it by querying the decryption oracle for the distinct ciphertext c′ := (c1, c2·g), yielding the related plaintext m′ := m·g mod p, from which m can be recovered by m = m′·g^{−1} mod p.[20] Traditional Elgamal encryption can be adapted to the elliptic-curve setting, but it requires some way to reversibly encode messages as points on the curve, which is less trivial than encoding messages as integers mod p.[22] Since the message m is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach is to derive the secret key from t and dispense with m and c2 altogether, as a KEM, using a key derivation function H:[1] the sender picks a random exponent r, transmits c := g^r, and uses the key k := H(t) with t := y^r; the receiver recomputes t := c^x and the same key k := H(t). When combined with an authenticated cipher to encrypt arbitrary bit string messages, the combination is essentially the Integrated Encryption Scheme.
Since this KEM only requires a one-way key derivation function to hash random elements of the group it is defined over, Z/pZ in this case, and not a reversible encoding of messages, it is easy to extend to more compact and efficient elliptic-curve groups for the same security, as in ECIES, the Elliptic Curve Integrated Encryption Scheme.
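A toy sketch of the RSA-KEM flow described above, assuming deliberately tiny (completely insecure) primes so the numbers stay readable, with SHA-256 standing in for the key derivation function H. It illustrates only the encapsulate/decapsulate pattern and the correctness property Decap(sk, c) = k, not a production implementation.

```python
import hashlib
import secrets

# Toy parameters: far too small to be secure, chosen only for readability.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def encapsulate(n, e):
    m = secrets.randbelow(n)                              # random element of Z/nZ
    c = pow(m, e, n)                                      # encapsulation c = m^e mod n
    k = hashlib.sha256(m.to_bytes(16, "big")).digest()    # derived secret key k = H(m)
    return k, c

def decapsulate(n, d, c):
    m = pow(c, d, n)                                      # recover m = c^d mod n
    return hashlib.sha256(m.to_bytes(16, "big")).digest()

k_sender, c = encapsulate(n, e)
k_receiver = decapsulate(n, d, c)
assert k_sender == k_receiver     # correctness: both sides derive the same key
print(k_sender.hex())
```

In a hybrid cryptosystem, k_sender would then key an authenticated cipher for the actual message, with c sent alongside that ciphertext.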
https://en.wikipedia.org/wiki/Key_encapsulation_mechanism
Neural cryptography is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially artificial neural network algorithms, for use in encryption and cryptanalysis. Artificial neural networks are well known for their ability to selectively explore the solution space of a given problem. This feature finds a natural niche of application in the field of cryptanalysis. At the same time, neural networks offer a new approach to attacking ciphering algorithms, based on the principle that any function can be reproduced by a neural network, a powerful proven computational tool that can be used to find the inverse function of a cryptographic algorithm. The ideas of mutual learning, self-learning, and stochastic behavior of neural networks and similar algorithms can be used for different aspects of cryptography, like public-key cryptography, solving the key distribution problem using neural network mutual synchronization, hashing or generation of pseudo-random numbers. Another idea is the ability of a neural network to separate space into non-linear pieces using a "bias", which gives different probabilities of activating the neural network or not. This is very useful in cryptanalysis. Two names are used to designate the same domain of research: neuro-cryptography and neural cryptography. The first known work on this topic can be traced back to 1995, in an IT master's thesis. In 1995, Sebastien Dourlens applied neural networks to cryptanalyze DES by allowing the networks to learn how to invert the S-tables of the DES. The bias in DES studied through differential cryptanalysis by Adi Shamir is highlighted. The experiment shows that about 50% of the key bits can be found, allowing the complete key to be found in a short time. Hardware applications with multiple microcontrollers have been proposed due to the easy implementation of multilayer neural networks in hardware.[citation needed] One example of a public-key protocol is given by Khalil Shihab.[citation needed] He describes the decryption scheme and the public key creation that are based on a backpropagation neural network. The encryption scheme and the private key creation process are based on Boolean algebra. This technique has the advantage of small time and memory complexities. A disadvantage is a property of backpropagation algorithms: because of huge training sets, the learning phase of a neural network is very long. Therefore, the use of this protocol is only theoretical so far. The protocol most used in practice for key exchange between two parties A and B is the Diffie–Hellman key exchange protocol. Neural key exchange, which is based on the synchronization of two tree parity machines, should be a secure replacement for this method. Synchronizing these two machines is similar to synchronizing two chaotic oscillators in chaos communications. The tree parity machine is a special type of multi-layer feedforward neural network. It consists of one output neuron, K hidden neurons and K×N input neurons. Inputs to the network take one of three values, −1, 0 or +1. The weights between input and hidden neurons take integer values bounded by a parameter L, i.e. values in {−L, ..., 0, ..., +L}. The output value of each hidden neuron is calculated as the sign of the sum of all products of input neurons and these weights, σ_i = sgn(Σ_j w_ij·x_ij), where the signum function sgn returns −1, 0 or 1 according to the sign of its argument. If the scalar product is 0, the output of the hidden neuron is mapped to −1 in order to ensure a binary output value.
The output of the neural network is then computed as the product of all values produced by the hidden elements, τ = Π_i σ_i, so the output of the tree parity machine is binary. Each party (A and B) uses its own tree parity machine. Synchronization of the tree parity machines is achieved in these steps: the weights are initialized randomly; then, repeatedly, a common public random input vector is generated, both machines compute and exchange their outputs τ_A and τ_B, and, if the outputs agree, both update their weights according to a learning rule (otherwise nothing is updated). After the full synchronization is achieved (the weights w_ij of both tree parity machines are the same), A and B can use their weights as keys. This method is known as bidirectional learning. One of several learning rules, such as the Hebbian, anti-Hebbian or random-walk rule, can be used for the synchronization;[1] in each of them, only the hidden units whose output agrees with the machine's overall output update their weights, and updated weights are clipped back to the range {−L, ..., +L}. In every attack it is assumed that the attacker E can eavesdrop on messages between the parties A and B, but does not have an opportunity to change them. To mount a brute force attack, an attacker has to test all possible keys (all possible values of the weights w_ij). With K hidden neurons, K×N input neurons and a weight boundary L, this gives (2L+1)^(K·N) possibilities. For example, the configuration K = 3, L = 3 and N = 100 gives about 3×10^253 key possibilities, making the attack impossible with today's computer power. One of the basic attacks can be mounted by an attacker who owns the same tree parity machine as the parties A and B and wants to synchronize his tree parity machine with these two parties. In each step there are three possible situations: if the outputs of A and B disagree, no machine updates its weights; if they agree but differ from E's output, A and B update while the attacker cannot; and only when all three outputs agree can E update along with them. It has been proven that the synchronization of two parties is faster than the learning of an attacker. It can be improved by increasing the synaptic depth L of the neural network. That gives this protocol enough security, and an attacker can find out the key only with small probability. For conventional cryptographic systems, we can improve the security of the protocol by increasing the key length. In the case of neural cryptography, we improve it by increasing the synaptic depth L of the neural networks. Changing this parameter increases the cost of a successful attack exponentially, while the effort for the users grows polynomially. Therefore, breaking the security of neural key exchange belongs to the complexity class NP. Alexander Klimov, Anton Mityaguine, and Adi Shamir say that the original neural synchronization scheme can be broken by at least three different attacks—geometric, probabilistic analysis, and using genetic algorithms. Even though this particular implementation is insecure, the ideas behind chaotic synchronization could potentially lead to a secure implementation.[2] The permutation parity machine is a binary variant of the tree parity machine.[3] It consists of one input layer, one hidden layer and one output layer. The number of neurons in the output layer depends on the number of hidden units K. Each hidden neuron has N binary input neurons, and the weights between input and hidden neurons are also binary, i.e. take values in {0, 1}. The output value of each hidden neuron is calculated as the sum of all exclusive disjunctions (exclusive or) of input neurons and these weights (⊕ means XOR). The function θ_N(x) is a threshold function which returns 0 or 1. The output of a neural network with two or more hidden neurons can be computed as the exclusive or of the values produced by the hidden elements. Other configurations of the output layer for K > 2 are also possible.[3] This machine has proven to be robust enough against some attacks,[4] so it could be used as a cryptographic means, but it has been shown to be vulnerable to a probabilistic attack.[5] A quantum computer is a device that uses quantum mechanisms for computation. In this device the data are stored as qubits (quantum binary digits).
That gives a quantum computer, in comparison with a conventional computer, the opportunity to solve complicated problems in a short time, e.g. the discrete logarithm problem or factorization. Algorithms that are not based on any of these number-theory problems are therefore being sought. The neural key exchange protocol is not based on any number theory; it is based on the difference between unidirectional and bidirectional synchronization of neural networks. Therefore, something like the neural key exchange protocol could give rise to potentially faster key exchange schemes.[2]
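The following is a rough simulation of the neural key exchange described above, using two tree parity machines and a Hebbian-style update, assuming ±1 inputs and the illustrative parameters K = 3, N = 100, L = 3. It sketches only the synchronization dynamics; it is not a secure key-exchange implementation.

```python
import random

K, N, L = 3, 100, 3   # hidden units, inputs per unit, weight bound (illustrative)

def new_weights():
    # Integer weights in [-L, L], as in the tree parity machine description above.
    return [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]

def output(weights, x):
    # sigma_k = sign of the local field (sign(0) mapped to -1); tau = product of sigmas.
    sigmas, tau = [], 1
    for k in range(K):
        field = sum(w * xi for w, xi in zip(weights[k], x[k]))
        sigma = 1 if field > 0 else -1
        sigmas.append(sigma)
        tau *= sigma
    return tau, sigmas

def hebbian_update(weights, x, tau, sigmas):
    # Only hidden units that agreed with the overall output move their weights.
    for k in range(K):
        if sigmas[k] == tau:
            for i in range(N):
                w = weights[k][i] + x[k][i] * tau
                weights[k][i] = max(-L, min(L, w))   # clip back to [-L, L]

wA, wB = new_weights(), new_weights()
steps = 0
while wA != wB:
    # Public random input vector shared by both parties (here restricted to +/-1).
    x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    tauA, sA = output(wA, x)
    tauB, sB = output(wB, x)
    if tauA == tauB:                # update only when the exchanged outputs agree
        hebbian_update(wA, x, tauA, sA)
        hebbian_update(wB, x, tauB, sB)
    steps += 1

print(f"synchronised after {steps} steps; shared key material: {wA[0][:10]} ...")
```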
https://en.wikipedia.org/wiki/Neural_cryptography#Neural_key_exchange_protocol
Quantum key distribution(QKD) is asecure communicationmethod that implements acryptographic protocolinvolving components ofquantum mechanics. It enables two parties to produce a sharedrandomsecretkeyknown only to them, which then can be used to encrypt and decryptmessages. The process of quantum key distribution is not to be confused withquantum cryptography, as it is the best-known example of a quantum-cryptographic task. An important and unique property of quantum key distribution is the ability of the two communicating users to detect the presence of any third party trying to gainknowledgeof the key. This results from a fundamental aspect of quantum mechanics: the process of measuring aquantum systemin general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By usingquantum superpositionsorquantum entanglementand transmitting information inquantum states, a communication system can be implemented that detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e., the eavesdropper has no information about it). Otherwise no secure key is possible, and communication is aborted. The security of encryption that uses quantum key distribution relies on the foundations of quantum mechanics, in contrast to traditionalpublic key cryptography, which relies on the computational difficulty ofcertain mathematical functions, and cannot provide any mathematical proof as to the actual complexity of reversing the one-way functions used. QKD has provable security based oninformation theory, andforward secrecy. The main drawback of quantum-key distribution is that it usually relies on having anauthenticated classical channelof communication.[citation needed]In modern cryptography, having an authenticated classical channel means that one already has exchanged either asymmetric keyof sufficient length or public keys of sufficient security level. With such information already available, in practice one can achieve authenticated and sufficiently secure communication without using QKD, such as by using theGalois/Counter Modeof theAdvanced Encryption Standard. Thus QKD does the work of astream cipherat many times the cost. Quantum key distribution is used to produce and distribute only a key, not to transmit any message data. This key can then be used with any chosenencryption algorithmto encrypt (and decrypt) a message, which can then be transmitted over a standardcommunication channel. The algorithm most commonly associated with QKD is theone-time pad, as it isprovably securewhen used with a secret, random key.[1]In real-world situations, it is often also used with encryption usingsymmetric key algorithmslike theAdvanced Encryption Standardalgorithm. Quantum communication involves encoding information in quantum states, orqubits, as opposed to classical communication's use ofbits. Usually,photonsare used for these quantum states. Quantum key distribution exploits certain properties of these quantum states to ensure its security. There are several different approaches to quantum key distribution, but they can be divided into two main categories depending on which property they exploit. These two approaches can each be further divided into three families of protocols: discrete variable, continuous variable and distributed phase reference coding. Discrete variable protocols were the first to be invented, and they remain the most widely implemented. 
The other two families are mainly concerned with overcoming practical limitations of experiments. The two protocols described below both use discrete variable coding. This protocol, known asBB84after its inventors and year of publication, was originally described usingphoton polarizationstates to transmit the information.[2]However, any two pairs ofconjugatestates can be used for the protocol, and manyoptical-fibre-based implementations described as BB84 use phase encoded states. The sender (traditionally referred to asAlice) and the receiver (Bob) are connected by aquantum communication channelwhich allowsquantum statesto be transmitted. In the case of photons this channel is generally either an optical fibre or simplyfree space. In addition they communicate via a public classical channel, for example using broadcast radio or the internet. The protocol is designed with the assumption that aneavesdropper(referred to as Eve) can interfere in any way with the quantum channel, while the classical channel needs to beauthenticated.[3][4] The security of the protocol comes from encoding the information innon-orthogonal states.Quantum indeterminacymeans that these states cannot in general be measured without disturbing the original state (seeNo-cloning theorem). BB84 uses two pairs of states, with each pairconjugateto the other pair, and the two states within a pair orthogonal to each other. Pairs of orthogonal states are referred to as abasis. The usual polarization state pairs used are either therectilinear basisof vertical (0°) and horizontal (90°), thediagonal basisof 45° and 135° or thecircular basisof left- and right-handedness. Any two of these bases are conjugate to each other, and so any two can be used in the protocol. Below the rectilinear and diagonal bases are used. The first step in BB84 is quantum transmission. Alice creates a randombit(0 or 1) and then randomly selects one of her two bases (rectilinear or diagonal in this case) to transmit it in. She then prepares a photon polarization state depending both on the bit value and basis, as shown in the adjacent table. So for example a 0 is encoded in the rectilinear basis (+) as a vertical polarization state, and a 1 is encoded in the diagonal basis (x) as a 135° state. Alice then transmits a single photon in the state specified to Bob, using the quantum channel. This process is then repeated from the random bit stage, with Alice recording the state, basis and time of each photon sent. According to quantum mechanics (particularly quantum indeterminacy), no possible measurement distinguishes between the 4 different polarization states, as they are not all orthogonal. The only possible measurement is between any two orthogonal states (an orthonormal basis). So, for example, measuring in the rectilinear basis gives a result of horizontal or vertical. If the photon was created as horizontal or vertical (as a rectilineareigenstate) then this measures the correct state, but if it was created as 45° or 135° (diagonal eigenstates) then the rectilinear measurement instead returns either horizontal or vertical at random. Furthermore, after this measurement the photon is polarized in the state it was measured in (horizontal or vertical), with all information about its initial polarization lost. As Bob does not know the basis the photons were encoded in, all he can do is to select a basis at random to measure in, either rectilinear or diagonal. 
He does this for each photon he receives, recording the time, measurement basis used and measurement result. After Bob has measured all the photons, he communicates with Alice over the public classical channel. Alice broadcasts the basis each photon was sent in, and Bob the basis each was measured in. They both discard photon measurements (bits) where Bob used a different basis, which is half on average, leaving half the bits as a shared key. To check for the presence of an eavesdropper, Alice and Bob now compare a predetermined subset of their remaining bit strings. If a third party (usually referred to as Eve, for "eavesdropper") has gained any information about the photons' polarization, this introduces errors in Bob's measurements. Other environmental conditions can cause errors in a similar fashion. If more than p bits differ they abort the key and try again, possibly with a different quantum channel, as the security of the key cannot be guaranteed. p is chosen so that if the number of bits known to Eve is less than this, privacy amplification can be used to reduce Eve's knowledge of the key to an arbitrarily small amount at the cost of reducing the length of the key. Artur Ekert's scheme[5] uses entangled pairs of photons. These can be created by Alice, by Bob, or by some source separate from both of them, including eavesdropper Eve. The photons are distributed so that Alice and Bob each end up with one photon from each pair. The scheme relies on two properties of entanglement. First, the entangled states are perfectly correlated in the sense that if Alice and Bob both measure whether their particles have vertical or horizontal polarizations, they always get the same answer with 100% probability. The same is true if they both measure any other pair of complementary (orthogonal) polarizations. This necessitates that the two distant parties have exact directionality synchronization. However, the particular results are completely random; it is impossible for Alice to predict if she (and thus Bob) will get vertical polarization or horizontal polarization. Second, any attempt at eavesdropping by Eve destroys these correlations in a way that Alice and Bob can detect. Similarly to BB84, the protocol involves a private measurement protocol before detecting the presence of Eve. The measurement stage involves Alice measuring each photon she receives using some basis from the set Z_0, Z_{π/8}, Z_{π/4}, while Bob chooses from Z_0, Z_{π/8}, Z_{−π/8}, where Z_θ is the {|↑⟩, |→⟩} basis rotated by θ. They keep their series of basis choices private until measurements are completed. Two groups of photons are made: the first consists of photons measured using the same basis by Alice and Bob, while the second contains all other photons. To detect eavesdropping, they can compute the test statistic S using the correlation coefficients between Alice's bases and Bob's, similar to that shown in the Bell test experiments. Maximally entangled photons would result in |S| = 2√2. If this were not the case, then Alice and Bob can conclude Eve has introduced local realism to the system, violating Bell's theorem.
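A small classical simulation of the BB84 transmission and sifting steps described earlier, with no eavesdropper and no real quantum channel; photon states are modelled as plain bits, so this only illustrates the protocol's bookkeeping, not its physics.

```python
import random

n = 32  # number of photons sent (illustrative)

# Alice chooses random bits and random bases ('+' rectilinear, 'x' diagonal).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]

# Bob measures each photon in a randomly chosen basis.
bob_bases = [random.choice("+x") for _ in range(n)]

bob_results = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_results.append(bit)                   # matching basis: correct outcome
    else:
        bob_results.append(random.randint(0, 1))  # wrong basis: random outcome

# Sifting over the public channel: keep only positions where the bases match,
# which is about half of them on average.
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
key_alice = [alice_bits[i] for i in keep]
key_bob   = [bob_results[i] for i in keep]

assert key_alice == key_bob        # with no eavesdropper the sifted keys agree
print("sifted key:", "".join(map(str, key_alice)))
```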
If the protocol is successful, the first group can be used to generate keys, since those photons are completely anti-aligned between Alice and Bob. In traditional QKD, the quantum devices used must be perfectly calibrated, trustworthy, and working exactly as they are expected to.[6] Deviations from expected measurements can be extremely hard to detect, which leaves the entire system vulnerable. A new protocol called device-independent QKD (DIQKD) or measurement-device-independent QKD (MDIQKD) allows for the use of uncharacterized or untrusted devices, and for deviations from expected measurements to be included in the overall system.[6][7] These deviations will cause the protocol to abort when detected, rather than resulting in incorrect data.[6] DIQKD was first proposed by Mayers and Yao,[8] building off of the BB84 protocol. They proposed that in DIQKD the quantum device, which they refer to as the photon source, be manufactured to come with tests that can be run by Alice and Bob to "self-check" whether their device is working properly. Such a test would only need to consider the classical inputs and outputs in order to determine how much information is at risk of being intercepted by Eve. A self-checking, or "ideal", source would not have to be characterized,[7][9] and would therefore not be susceptible to implementation flaws.[7] Recent research has proposed using a Bell test to check that a device is working properly.[6] Bell's theorem ensures that a device can create two outcomes that are exclusively correlated, meaning that Eve could not intercept the results, without making any assumptions about said device. This requires highly entangled states and a low quantum bit error rate.[7] DIQKD presents difficulties in creating qubits that are in such high-quality entangled states, which makes it a challenge to realize experimentally.[6] Twin-field quantum key distribution (TFQKD) was introduced in 2018, and is a version of DIQKD designed to overcome the fundamental rate-distance limit of traditional quantum key distribution.[10] The rate-distance limit, also known as the rate-loss trade-off, describes how, as the distance between Alice and Bob increases, the rate of key generation decreases exponentially.[11] In traditional QKD protocols, this decay has been eliminated via the addition of physically secured relay nodes, which can be placed along the quantum link with the intention of dividing it up into several low-loss sections. Researchers have also recommended the use of quantum repeaters, which when added to the relay nodes make it so that they no longer need to be physically secured.[11] Quantum repeaters, however, are difficult to create and have yet to be implemented on a useful scale.[10] TFQKD aims to bypass the rate-distance limit without the use of quantum repeaters or relay nodes, creating manageable levels of noise and a process that can be repeated much more easily with today's existing technology.[10] The original protocol for TFQKD is as follows: Alice and Bob each have a light source and one arm of an interferometer in their laboratories. The light sources create two dim optical pulses with a random phase p_a or p_b in the interval [0, 2π) and an encoding phase γ_a or γ_b. The pulses are sent along a quantum channel to Charlie, a third party who can be malicious or not. Charlie uses a beam splitter to overlap the two pulses and perform a measurement. He has two detectors in his own lab, one of which will light up if the bits are equal (00 or 11), and the other when they are different (10, 01).
Charlie will announce to Alice and Bob which of the detectors lit up, at which point they publicly reveal the phasespandγ.[10]This is different from traditional QKD, in which the phases used are never revealed.[12] The quantum key distribution protocols described above provide Alice and Bob with nearly identical shared keys, and also with an estimate of the discrepancy between the keys. These differences can be caused by eavesdropping, but also by imperfections in the transmission line and detectors. As it is impossible to distinguish between these two types of errors, guaranteed security requires the assumption that all errors are due to eavesdropping. Provided the error rate between the keys is lower than a certain threshold (27.6% as of 2002[13]), two steps can be performed to first remove the erroneous bits and then reduce Eve's knowledge of the key to an arbitrary small value. These two steps are known asinformation reconciliationandprivacy amplificationrespectively, and were first described in 1988.[14] Information reconciliationis a form of error correction carried out between Alice and Bob's keys, in order to ensure both keys are identical. It is conducted over the public channel and as such it is vital to minimise the information sent about each key, as this can be read by Eve. A common protocol used for information reconciliation is thecascade protocol, proposed in 1994.[15]This operates in several rounds, with both keys divided into blocks in each round and theparityof those blocks compared. If a difference in parity is found then abinary searchis performed to find and correct the error. If an error is found in a block from a previous round that had correct parity then another error must be contained in that block; this error is found and corrected as before. This process is repeated recursively, which is the source of the cascade name. After all blocks have been compared, Alice and Bob both reorder their keys in the same random way, and a new round begins. At the end of multiple rounds Alice and Bob have identical keys with high probability; however, Eve has additional information about the key from the parity information exchanged. However, from acoding theorypoint of view information reconciliation is essentially source coding with side information. In consequence any coding scheme that works for this problem can be used for information reconciliation. Lately turbocodes,[16]LDPC codes[17]and polar codes[18]have been used for this purpose improving the efficiency of the cascade protocol. Privacy amplificationis a method for reducing (and effectively eliminating) Eve's partial information about Alice and Bob's key. This partial information could have been gained both by eavesdropping on the quantum channel during key transmission (thus introducing detectable errors), and on the public channel during information reconciliation (where it is assumed Eve gains all possible parity information). Privacy amplification uses Alice and Bob's key to produce a new, shorter key, in such a way that Eve has only negligible information about the new key. This is performed using arandomness extractor, for example, by applying auniversal hash function, chosen at random from a publicly known set of such functions, which takes as its input a binary string of length equal to the key and outputs a binary string of a chosen shorter length. 
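A minimal sketch of the parity-comparison and binary-search step at the core of the cascade protocol described above, assuming a single error in one block; the public channel is modelled by direct access to both keys, and the multi-round block shuffling of the full protocol is omitted.

```python
import random

def parity(bits):
    return sum(bits) % 2

def locate_error(alice_block, bob_block):
    """Binary search for a bit where the two blocks differ, exchanging only
    block parities (as would happen over the public channel)."""
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid     # the odd number of differences lies in the left half
        else:
            lo = mid     # otherwise it lies in the right half
    return lo

alice = [random.randint(0, 1) for _ in range(16)]
bob = alice.copy()
bob[11] ^= 1                         # one transmission error

if parity(alice) != parity(bob):     # block parities differ, so search for the error
    i = locate_error(alice, bob)
    bob[i] ^= 1                      # correct the located bit
assert alice == bob
```

Each exchanged parity leaks one bit of information about the key, which is why privacy amplification (discussed next) must shorten the reconciled key accordingly.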
The amount by which this new key is shortened is calculated, based on how much information Eve could have gained about the old key (which is known due to the errors this would introduce), in order to reduce the probability of Eve having any knowledge of the new key to a very low value. In 1991,John Rarity,Paul TapsterandArtur Ekert, researchers from the UK Defence Research Agency in Malvern and Oxford University, demonstrated quantum key distribution protected by the violation of the Bell inequalities. In 2008, exchange of secure keys at 1 Mbit/s (over 20 km of optical fibre) and 10 kbit/s (over 100 km of fibre), was achieved by a collaboration between theUniversity of CambridgeandToshibausing the BB84 protocol withdecoy statepulses.[19] In 2007,Los Alamos National Laboratory/NISTachieved quantum key distribution over a 148.7 km of optic fibre using the BB84 protocol.[20]Significantly, this distance is long enough for almost all the spans found in today's fibre networks. A European collaboration achieved free space QKD over 144 km between two of theCanary Islandsusing entangled photons (the Ekert scheme) in 2006,[21]and using BB84 enhanced withdecoy states[22][23][24][25][26]in 2007.[27] As of August 2015[update]the longest distance for optical fiber (307 km)[28]was achieved byUniversity of GenevaandCorning Inc.In the same experiment, a secret key rate of 12.7 kbit/s was generated, making it the highest bit rate system over distances of 100 km. In 2016 a team from Corning and various institutions in China achieved a distance of 404 km, but at a bit rate too slow to be practical.[29] In June 2017, physicists led byThomas Jenneweinat theInstitute for Quantum Computingand theUniversity of WaterlooinWaterloo, Canadaachieved the first demonstration of quantum key distribution from a ground transmitter to a moving aircraft. They reported optical links with distances between 3–10 km and generated secure keys up to 868 kilobytes in length.[30] Also in June 2017, as part of theQuantum Experiments at Space Scaleproject, Chinese physicists led byPan Jianweiat theUniversity of Science and Technology of Chinameasured entangled photons over a distance of 1203 km between two ground stations, laying the groundwork for future intercontinental quantum key distribution experiments.[31]Photons were sent from one ground station to the satellite they had namedMiciusand back down to another ground station, where they "observed a survival of two-photon entanglement and a violation of Bell inequality by 2.37 ± 0.09 under strict Einstein locality conditions" along a "summed length varying from 1600 to 2400 kilometers."[32]Later that year BB84 was successfully implemented over satellite links fromMiciusto ground stations in China and Austria. The keys were combined and the result was used to transmit images and video between Beijing, China, and Vienna, Austria.[33] In August 2017, a group at Shanghai Jiaotong University experimentally demonstrate that polarization quantum states including general qubits of single photon and entangled states can survive well after travelling through seawater,[34]representing the first step towards underwater quantum communication. 
In May 2019 a group led by Hong Guo at Peking University and Beijing University of Posts and Telecommunications reported field tests of a continuous-variable QKD system through commercial fiber networks in Xi'an and Guangzhou over distances of 30.02 km (12.48 dB) and 49.85 km (11.62 dB) respectively.[35] In December 2020, IndianDefence Research and Development Organisationtested a QKD between two of its laboratories in Hyderabad facility. The setup also demonstrated the validation of detection of a third party trying to gain knowledge of the communication. Quantum based security against eavesdropping was validated for the deployed system at over 12 km (7.5 mi) range and 10 dB attenuation over fibre optic channel. Acontinuous wavelaser source was used to generate photons without depolarization effect and timing accuracy employed in the setup was of the order of picoseconds. TheSingle photon avalanche detector(SPAD) recorded arrival of photons and key rate was achieved in the range of kbps with low Quantum bit error rate.[36] In March 2021,Indian Space Research Organisationalso demonstrated a free-space Quantum Communication over a distance of 300 meters. A free-space QKD was demonstrated atSpace Applications Centre(SAC), Ahmedabad, between two line-of-sight buildings within the campus for video conferencing by quantum-key encrypted signals. The experiment utilised aNAVICreceiver for time synchronization between the transmitter and receiver modules. Later in January 2022, Indian scientists were able to successfully create an atmospheric channel for exchange of crypted messages and images. After demonstrating quantum communication between two ground stations, India has plans to develop Satellite Based Quantum Communication (SBQC).[37][38] In July 2022, researchers published their work experimentally implementing a device-independent quantum key distribution (DIQKD) protocol that uses quantum entanglement (as suggested by Ekert)[5]to insure resistance to quantum hacking attacks.[6]They were able to create two ions, about two meters apart that were in a high quality entangled state using the following process: Alice and Bob each have ion trap nodes with an88Sr+qubit inside. Initially, they excite the ions to an electronic state, which creates an entangled state. This process also creates two photons, which are then captured and transported using an optical fiber, at which point a Bell-basis measurement is performed and the ions are projected to a highly entangled state. Finally the qubits are returned to new locations in the ion traps disconnected from the optical link so that no information can be leaked. This is repeated many times before the key distribution proceeds.[6] A separate experiment published in July 2022 demonstrated implementation of DIQKD that also uses a Bell inequality test to ensure that the quantum device is functioning, this time at a much larger distance of about 400m, using an optical fiber 700m long.[7]The set up for the experiment was similar to the one in the paragraph above, with some key differences. Entanglement was generated in a quantum network link (QNL) between two87Rb atoms in separate laboratories located 400m apart, connected by the 700m channel. The atoms are entangled by electronic excitation, at which point two photons are generated and collected, to be sent to the bell state measurement (BSM) setup. The photons are projected onto a |ψ+state, indicating maximum entanglement. 
The rest of the key exchange protocol used is similar to the original QKD protocol, with the only difference being that keys are generated with two measurement settings instead of one.[7] Since the proposal of Twin Field Quantum Key Distribution in 2018, a myriad of experiments have been performed with the goal of increasing the distance in a QKD system. The most successful of which was able to distribute key information across a distance of 833.8 km.[12] In 2023, scientists at Indian Institute of Technology (IIT) Delhi have achieved a trusted-node-free quantum key distribution (QKD) up to380 kmin standard telecom fiber with a very low quantum bit error rate (QBER).[39] In 2024 scientists in South Africa and China achieved quantum key distribution in the atmosphere with a record breaking distance of 12,900 km, using lasers and amicrosatelliteinlow Earth orbit. They transferred over a million quantum-secure bits between South Africa and China during one orbit of the satellite.[40] Many companies around the world offer commercial quantum key distribution, for example:ID Quantique(Geneva),Toshiba,[41]MagiQ Technologies, Inc.(New York), QNu Labs (Bengaluru,India),QuintessenceLabs(Australia),QRate(Russia), SeQureNet (Paris), Quantum Optics Jena (Germany) andKEEQuant(Germany). Several other companies also have active research programs, includingKETS Quantum Security(UK),HP,IBM,Mitsubishi,NECandNTT(SeeExternal linksfor direct research links). In 2004, the world's first bank transfer using quantum key distribution was carried out inVienna,Austria.[42]Quantum encryption technology provided by the Swiss companyId Quantiquewas used in the Swiss canton (state) of Geneva to transmit ballot results to the capital in the national election occurring on 21 October 2007.[43]In 2013,Battelle Memorial Instituteinstalled a QKD system built by ID Quantique between their main campus in Columbus, Ohio and their manufacturing facility in nearby Dublin.[44]Field tests of Tokyo QKD network have been underway for some time.[45] TheDARPA Quantum Network,[46]was a 10-node quantum key distribution network, which ran continuously for four years, 24 hours a day, from 2004 to 2007 in Massachusetts in the United States. It was developed byBBN Technologies,Harvard University,Boston University, with collaboration fromIBM Research, theNational Institute of Standards and Technology, andQinetiQ. It supported a standards-based Internetcomputer networkprotected by quantum key distribution. The world's firstcomputer networkprotected by quantum key distribution was implemented in October 2008, at a scientific conference in Vienna. The name of this network isSECOQC(SecureCommunication Based onQuantumCryptography) and theEUfunded this project. The network used 200 km of standardfibre-optic cableto interconnect six locations across Vienna and the town ofSt Poeltenlocated 69 km to the west.[47] Id Quantique has successfully completed the longest running project for testing Quantum Key Distribution (QKD) in a field environment. The main goal of the SwissQuantum network project installed in the Geneva metropolitan area in March 2009, was to validate the reliability and robustness of QKD in continuous operation over a long time period in a field environment. The quantum layer operated for nearly 2 years until the project was shut down in January 2011 shortly after the initially planned duration of the test. In May 2009, a hierarchical quantum network was demonstrated inWuhu,China. 
The hierarchical network consisted of a backbone network of four nodes connecting a number of subnets. The backbone nodes were connected through an optical switching quantum router. Nodes within each subnet were also connected through an optical switch, which were connected to the backbone network through a trusted relay.[48] Launched in August 2016, theQUESSspace mission created an international QKD channel between China and theInstitute for Quantum Optics and Quantum InformationinVienna,Austria− a ground distance of 7,500 km (4,700 mi), enabling the first intercontinental secure quantum video call.[49][50][51]By October 2017, a 2,000-km fiber line was operational betweenBeijing,Jinan,HefeiandShanghai.[52]Together they constitute the world's first space-ground quantum network.[53]Up to 10 Micius/QUESS satellites are expected,[54]allowing a European–Asianquantum-encrypted networkby 2020, and a global network by 2030.[55][56] The Tokyo QKD Network[57]was inaugurated on the first day of the UQCC2010 conference. The network involves an international collaboration between 7 partners;NEC,Mitsubishi Electric,NTTandNICTfrom Japan, and participation from Europe by Toshiba Research Europe Ltd. (UK), Id Quantique (Switzerland) and All Vienna (Austria). "All Vienna" is represented by researchers from theAustrian Institute of Technology(AIT), theInstitute for Quantum Optics and Quantum Information(IQOQI) and theUniversity of Vienna. A hub-and-spoke network has been operated by Los Alamos National Laboratory since 2011. All messages are routed via the hub. The system equips each node in the network with quantum transmitters—i.e., lasers—but not with expensive and bulky photon detectors. Only the hub receives quantum messages. To communicate, each node sends a one-time pad to the hub, which it then uses to communicate securely over a classical link. The hub can route this message to another node using another one time pad from the second node. The entire network is secure only if the central hub is secure. Individual nodes require little more than a laser: Prototype nodes are around the size of a box of matches.[58] Following the successfulNational Quantum-Safe NetworkTestbed trials, National Quantum-Safe Network Plus (NQSN+) was launched by IMDA in 2023 and is part of Singapore's Digital Connectivity Blueprint, which outlines the next bound of Singapore's digital connectivity to 2030. NQSN+ will support network operators to deploy quantum-safe networks nationwide, granting businesses easy access to quantum-safe solutions that safeguard their critical data. The NQSN+ will start with two network operators, Singtel and SPTel, together with SpeQtral. Each will build a nationwide, interoperable quantum-safe network that can serve all businesses. Businesses can work with NQSN+ operators to integrate quantum-safe solutions such as Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC) and be secure in the quantum age.[59] In late 2025 or 2026, theESAplans to launch the satellite Eagle-1, an experimental space-based quantum key distribution system.[60] The simplest type of possible attack is the intercept-resend attack, where Eve measures the quantum states (photons) sent by Alice and then sends replacement states to Bob, prepared in the state she measures. In the BB84 protocol, this produces errors in the key Alice and Bob share. As Eve has no knowledge of the basis a state sent by Alice is encoded in, she can only guess which basis to measure in, in the same way as Bob. 
If she chooses correctly, she measures the correct photon polarization state as sent by Alice, and resends the correct state to Bob. However, if she chooses incorrectly, the state she measures is random, and the state sent to Bob cannot be the same as the state sent by Alice. If Bob then measures this state in the same basis Alice sent, he too gets a random result—as Eve has sent him a state in the opposite basis—with a 50% chance of an erroneous result (instead of the correct result he would get without the presence of Eve). The probability Eve chooses the incorrect basis is 50% (assuming Alice chooses randomly), and if Bob measures this intercepted photon in the basis Alice sent he gets a random result, i.e., an incorrect result with probability of 50%. The probability an intercepted photon generates an error in the key string is then 50% × 50% = 25%. If Alice and Bob publicly compare n of their key bits (thus discarding them as key bits, as they are no longer secret) the probability they find disagreement and identify the presence of Eve is P_d = 1 − (3/4)^n. So to detect an eavesdropper with probability P_d = 0.999999999, Alice and Bob need to compare n = 72 key bits. Quantum key distribution is vulnerable to a man-in-the-middle attack when used without authentication to the same extent as any classical protocol, since no known principle of quantum mechanics can distinguish friend from foe. As in the classical case, Alice and Bob cannot authenticate each other and establish a secure connection without some means of verifying each other's identities (such as an initial shared secret). If Alice and Bob have an initial shared secret then they can use an unconditionally secure authentication scheme (such as Carter–Wegman[61]) along with quantum key distribution to exponentially expand this key, using a small amount of the new key to authenticate the next session.[62] Several methods to create this initial shared secret have been proposed, for example using a third party[63] or chaos theory.[64] Nevertheless, only an "almost strongly universal" family of hash functions can be used for unconditionally secure authentication.[65] In the BB84 protocol Alice sends quantum states to Bob using single photons. In practice many implementations use laser pulses attenuated to a very low level to send the quantum states. These laser pulses contain a very small number of photons, for example 0.2 photons per pulse, which are distributed according to a Poisson distribution. This means most pulses actually contain no photons (no pulse is sent), some pulses contain 1 photon (which is desired) and a few pulses contain 2 or more photons. If the pulse contains more than one photon, then Eve can split off the extra photons and transmit the remaining single photon to Bob. This is the basis of the photon number splitting attack,[66] where Eve stores these extra photons in a quantum memory until Bob detects the remaining single photon and Alice reveals the encoding basis. Eve can then measure her photons in the correct basis and obtain information on the key without introducing detectable errors.
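A quick arithmetic check of the eavesdropper-detection probability above, under the stated assumption that each compared bit independently reveals Eve with probability 1/4.

```python
# Each compared bit independently reveals Eve with probability 1/4,
# so n compared bits fail to reveal her with probability (3/4)**n.
def detection_probability(n):
    return 1 - (3 / 4) ** n

# Comparing 72 bits, as quoted above, already gives ~0.999999999.
print(f"{detection_probability(72):.9f}")
```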
Even with the possibility of a PNS attack a secure key can still be generated, as shown in the GLLP security proof;[67] however, a much higher amount of privacy amplification is needed, reducing the secure key rate significantly (with PNS the rate scales as t², as compared to t for single-photon sources, where t is the transmittance of the quantum channel). There are several solutions to this problem. The most obvious is to use a true single-photon source instead of an attenuated laser. While such sources are still at a developmental stage, QKD has been carried out successfully with them.[68] However, as current sources operate at a low efficiency and frequency, key rates and transmission distances are limited. Another solution is to modify the BB84 protocol, as is done for example in the SARG04 protocol,[69] in which the secure key rate scales as t^(3/2). The most promising solution is the decoy-state method,[22][23][24][25][26] in which Alice randomly sends some of her laser pulses with a lower average photon number. These decoy states can be used to detect a PNS attack, as Eve has no way to tell which pulses are signal and which decoy. Using this idea the secure key rate scales as t, the same as for a single-photon source. This idea was first implemented successfully at the University of Toronto,[70][71] and in several follow-up QKD experiments,[72] allowing for high key rates secure against all known attacks. Because currently a dedicated fibre optic line (or line of sight in free space) is required between the two points linked by quantum key distribution, a denial of service attack can be mounted by simply cutting or blocking the line. This is one of the motivations for the development of quantum key distribution networks, which would route communication via alternate links in case of disruption. A quantum key distribution system may be probed by Eve by sending bright light into the quantum channel and analyzing the back-reflections in a Trojan-horse attack. In a recent research study it has been shown that Eve can discern Bob's secret basis choice with higher than 90% probability, breaching the security of the system.[73] If Eve is assumed to have unlimited resources, for example both classical and quantum computing power, there are many more attacks possible. BB84 has been proven secure against any attacks allowed by quantum mechanics, both for sending information using an ideal photon source which only ever emits a single photon at a time,[74] and also using practical photon sources which sometimes emit multiphoton pulses.[67] These proofs are unconditionally secure in the sense that no conditions are imposed on the resources available to the eavesdropper; however, there are other conditions required. Hacking attacks target vulnerabilities in the operation of a QKD protocol or deficiencies in the components of the physical devices used in construction of the QKD system. If the equipment used in quantum key distribution can be tampered with, it could be made to generate keys that were not secure using a random number generator attack. Another common class of attacks is the Trojan horse attack,[75] which does not require physical access to the endpoints: rather than attempt to read Alice and Bob's single photons, Eve sends a large pulse of light back to Alice in between transmitted photons. Alice's equipment reflects some of Eve's light, revealing the state of Alice's basis (e.g., a polarizer).
This attack can be detected, e.g. by using a classical detector to check the non-legitimate signals (i.e. light from Eve) entering Alice's system. It is also conjectured[by whom?]that most hacking attacks can similarly be defeated by modifying the implementation, though there is no formal proof. Several other attacks including faked-state attacks,[76]phase remapping attacks,[77]and time-shift attacks[78]are now known. The time-shift attack has even been demonstrated on a commercial quantum cryptosystem.[79]This is the first demonstration of quantum hacking against a non-homemade quantum key distribution system. Later on, the phase-remapping attack was also demonstrated on a specially configured, research oriented open QKD system (made and provided by the Swiss company Id Quantique under their Quantum Hacking program).[80]It is one of the first 'intercept-and-resend' attacks on top of a widely used QKD implementation in commercial QKD systems. This work has been widely reported in media.[81][82][83][84] The first attack that claimed to be able to eavesdrop the whole key[85]without leaving any trace was demonstrated in 2010. It was experimentally shown that the single-photon detectors in two commercial devices could be fully remote-controlled using specially tailored bright illumination. In a spree of publications[86][87][88]thereafter, the collaboration between theNorwegian University of Science and Technologyin Norway andMax Planck Institute for the Science of Lightin Germany, has now demonstrated several methods to successfully eavesdrop on commercial QKD systems based on weaknesses ofavalanche photodiodes(APDs) operating in gated mode. This has sparked research on new approaches to securing communications networks.[89] The task of distributing a secret key could be achieved even when the particle (on which the secret information, e.g. polarization, has been encoded) does not traverse through the quantum channel using a protocol developed by Tae-Gon Noh.[90]Here Alice generates a photon which, by not taking a measurement until later, exists in a superposition of being in paths (a) and (b) simultaneously. Path (a) stays inside Alice's secure device and path (b) goes to Bob. By rejecting the photons that Bob receives and only accepting the ones he doesn't receive, Bob & Alice can set up a secure channel, i.e. Eve's attempts to read thecounterfactualphotons would still be detected. This protocol uses the quantum phenomenon whereby the possibility that a photon can be sent has an effect even when it is not sent. So-calledinteraction-free measurementalso uses this quantum effect, as for example in thebomb testing problem, whereby an experimenter can conceptually determine which bombs are not duds without setting them off, except in acounterfactualsense. Quantum cryptography was proposed first byStephen Wiesner, then at Columbia University in New York, who, in the early 1970s, introduced the concept of quantum conjugate coding. His seminal paper titled "Conjugate Coding" was rejected by IEEE Information Theory but was eventually published in 1983 in SIGACT News (15:1 pp. 78–88, 1983). In this paper he showed how to store or transmit two messages by encoding them in two "conjugate observables", such as linear and circular polarization of light, so that either, but not both, of which may be received and decoded. He illustrated his idea with a design of unforgeable bank notes. A decade later, building upon this work,Charles H. Bennett, of the IBMThomas J. 
Watson Research Center, and Gilles Brassard, of the University of Montreal, proposed a method for secure communication based on Wiesner's "conjugate observables".[91] In 1990, Artur Ekert, then a PhD student at Wolfson College, University of Oxford, developed a different approach to quantum key distribution based on quantum entanglement. The current commercial systems are aimed mainly at governments and corporations with high security requirements. Key distribution by courier is typically used in such cases, where traditional key distribution schemes are not believed to offer enough guarantee. This has the advantage of not being intrinsically distance-limited, and despite long travel times the transfer rate can be high due to the availability of large-capacity portable storage devices. The major difference of quantum key distribution is the ability to detect any interception of the key, whereas with a courier the key security cannot be proven or tested. QKD (quantum key distribution) systems also have the advantage of being automatic, with greater reliability and lower operating costs than a secure human courier network. Kak's three-stage protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms.[92] Factors preventing wide adoption of quantum key distribution outside high-security areas include the cost of equipment and the lack of a demonstrated threat to existing key exchange protocols. However, with optic fibre networks already present in many countries, the infrastructure is in place for more widespread use. An Industry Specification Group (ISG) of the European Telecommunications Standards Institute (ETSI) has been set up to address standardisation issues in quantum cryptography.[93] European Metrology Institutes, in the context of dedicated projects,[94][95] are developing the measurements required to characterise components of QKD systems. Toshiba Europe has been awarded a prestigious Institute of Physics Award for Business Innovation. This recognises Toshiba's pioneering QKD[96] technology developed over two decades of research, protecting communication infrastructure from present and future cyber-threats, and commercialising UK-manufactured products which pave the road to the quantum internet. Toshiba also took the Semi Grand Prix award in the Solutions Category for QKD, which won the Minister of Economy, Trade and Industry Award at CEATEC AWARD 2021, the prestigious awards presented at CEATEC, Japan's premier electronics industry trade show.[97] Because of the practical problems with quantum key distribution, some governmental organizations recommend the use of post-quantum cryptography (quantum-resistant cryptography) instead. For example, the US National Security Agency,[98] the European Union Agency for Cybersecurity (ENISA),[99] the UK's National Cyber Security Centre,[100] the French Secretariat for Defense and Security (ANSSI),[101] and the German Federal Office for Information Security (BSI)[102] recommend post-quantum cryptography. For example, the US National Security Agency addresses five such issues.[98] In response to the first of these (the need to authenticate the QKD transmission, which itself requires pre-shared keys or other cryptography), attempts to deliver authentication keys using post-quantum cryptography (or quantum-resistant cryptography) have been proposed worldwide. On the other hand, quantum-resistant cryptography is cryptography belonging to the class of computational security.
A research result published in 2015 noted that "sufficient care must be taken in implementation to achieve information-theoretic security for the system as a whole when authentication keys that are not information-theoretic secure are used" (if the authentication key is not information-theoretically secure, an attacker can break it to bring all classical and quantum communications under control and relay them to launch a man-in-the-middle attack).[104] Ericsson, a private company, also cites these problems and presents a report suggesting that QKD may not be able to support the zero trust security model, a recent trend in network security technology.[105]
https://en.wikipedia.org/wiki/Quantum_key_distribution
Post-Quantum Cryptography Standardization[1]is a program and competition byNISTto update their standards to includepost-quantum cryptography.[2]It was announced at PQCrypto 2016.[3]23 signature schemes and 59 encryption/KEMschemes were submitted by the initial submission deadline at the end of 2017[4]of which 69 total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, have advanced to the third round, which was announced on July 22, 2020.[citation needed] On August 13, 2024, NIST released final versions of the first three Post Quantum Crypto Standards: FIPS 203, FIPS 204, and FIPS 205.[5] Academic research on the potential impact of quantum computing dates back to at least 2001.[6]A NIST published report from April 2016 cites experts that acknowledge the possibility of quantum technology to render the commonly usedRSAalgorithm insecure by 2030.[7]As a result, a need to standardizequantum-securecryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namelydigital signaturesandkey encapsulation mechanisms. In December 2016 NIST initiated a standardization process by announcing a call for proposals.[8] The competition is now in its third round out of expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs inquantum computingare made. It is currently undecided whether the future standards will be published asFIPSor as NIST Special Publication (SP). Under consideration were:[9](strikethroughmeans it had been withdrawn) Candidates moving on to the second round were announced on January 30, 2019. They are:[33] On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard, after the third round ends.[53]NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new schemes proposals in the future.[54] On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually.[55]The conference included candidates' updates and discussions on implementations, on performances, and on security issues of the candidates. A small amount of focus was spent on intellectual property concerns. AfterNIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surroundinglattice-based schemessuch asKyberandNewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.[56] During this round, some candidates have shown to be vulnerable to some attack vectors. 
This has forced these candidates to adapt accordingly. On July 5, 2022, NIST announced the first group of winners from its six-year competition.[60][61] On the same date, NIST announced four candidates for PQC Standardization Round 4.[62] On August 13, 2024, NIST released final versions of its first three Post-Quantum Crypto Standards.[5] According to the release announcement, no substantive changes have been made to the standards since the draft versions, but NIST has changed the algorithms' names to specify the versions that appear in the three finalized standards: ML-KEM (FIPS 203, derived from CRYSTALS-Kyber), ML-DSA (FIPS 204, derived from CRYSTALS-Dilithium) and SLH-DSA (FIPS 205, derived from SPHINCS+). On March 11, 2025, NIST selected HQC as the fifth algorithm for post-quantum asymmetric encryption, as used for key encapsulation/exchange.[66] The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different mathematics than ML-KEM, mitigating the risk should a weakness be found in ML-KEM.[67] The draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027. For the additional digital signature round, NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements.[68] Under consideration are:[69] (strikethrough means it has been withdrawn) NIST deemed 14 submissions to pass to the second round.[127]
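For readers who want to see what the standardized KEM interface looks like in code, here is a hedged sketch of a key-encapsulation round trip using the open-source liboqs-python binding. It assumes that binding is installed and that the underlying liboqs build exposes the parameter set under the name "ML-KEM-768" (older releases use "Kyber768"); the snippet is illustrative rather than a reference implementation of the FIPS 203 standard.

```python
# Sketch: a key-encapsulation round trip with the liboqs-python binding
# (module name `oqs`).  Assumes the installed liboqs build includes the
# ML-KEM-768 parameter set; on older releases the name is "Kyber768".
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()           # receiver publishes this
    ciphertext, secret_sender = sender.encap_secret(public_key)
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_sender == secret_receiver            # both sides share a key
```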
https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography_Standardization
Crypto-shredding or crypto erase (cryptographic erasure) is the practice of rendering encrypted data unusable by deliberately deleting or overwriting the encryption keys: assuming the key is not later recovered and the encryption is not broken, the data should become irrecoverable, effectively permanently deleted or "shredded".[1] This requires that the data have been encrypted. Data may be considered to exist in three states: data at rest, data in transit and data in use. General data security principles, such as in the CIA triad of confidentiality, integrity, and availability, require that all three states be adequately protected. Deleting data at rest on storage media such as backup tapes, data stored in the cloud, computers, phones, or multi-function printers can present challenges when confidentiality of information is of concern. When encryption is in place, data disposal is more secure, as less data (only the key material) needs to be destroyed. There are various reasons for using crypto-shredding, including when the data is contained in defective or out-of-date systems, when there is no further use for the data, when the circumstances are such that there are no [longer] legal rights to use or retain the data, and other similar motivations. Legal obligations may also come from regulations such as the right to be forgotten, the General Data Protection Regulation, and others. Data security is largely influenced by confidentiality and privacy concerns. In some cases all data storage is encrypted, such as encrypting entire hard disks, computer files, or databases. Alternatively only specific data may be encrypted, such as passport numbers, social security numbers, bank account numbers, personal names, or records in a database. Additionally, data in one system may be encrypted with separate keys when that same data is contained in multiple systems. When specific pieces of data are encrypted (possibly with different keys), this allows for more targeted data shredding: there is no need to have access to the data (like an encrypted backup tape), as only the encryption keys need to be shredded.[2] iOS devices and Macintosh computers with an Apple silicon chip use crypto-shredding when performing the "Erase all content and settings" action by discarding all the keys in 'effaceable storage'. This renders all user data on the device cryptographically inaccessible in a very short amount of time.[3] There are many security issues that should be considered when securing data; these issues are not specific to crypto-shredding, and in general they may apply to all types of data encryption. In addition to crypto-shredding, data erasure, degaussing and physically shredding the physical device (disk) can mitigate the risk further.
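A minimal sketch of the idea, using the Fernet recipe from the Python cryptography package: the ciphertext can live on any medium, and "shredding" consists of destroying every copy of the key, after which decryption is no longer possible. The plaintext and key handling here are purely illustrative; real systems typically keep keys in a hardware security module or key-management service.

```python
# Sketch: crypto-shredding with a symmetric key (Fernet, from the
# `cryptography` package).  Destroying the key renders the ciphertext
# permanently unreadable, which is the point of the technique.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # the only secret that must be destroyed
ciphertext = Fernet(key).encrypt(b"account number 12345")  # may be stored anywhere

# ... later: "shred" the data by discarding every copy of the key ...
key = None

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)      # any other key fails
except InvalidToken:
    print("ciphertext is now cryptographically inaccessible")
```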
https://en.wikipedia.org/wiki/Crypto-shredding
Bernstein v. United Stateswas a series of court cases filed byDaniel J. Bernstein, then a mathematics Ph.D. student at theUniversity of California, Berkeley, challenging U.S. government restrictions on theexport of cryptographic software. In the early 1990s, the U.S. government classified encryption software as a "munition," imposing strict export controls. As a result, Bernstein was required to register as an arms dealer and obtain an export license before he could publish his encryption software online. With the support of theElectronic Frontier Foundation (EFF), Bernstein filed a lawsuit against the U.S. government, arguing that the export controls violated hisFirst Amendmentrights. The case ultimately led to a relaxation of export restrictions on cryptography, which facilitated the development of secure international e-commerce. The decision has been recognized by First Amendment and technology advocacy groups for affirming a "right to code" and applying First Amendment protections to code as aform of expression.[1][2] The case was first brought in 1995, when Bernstein was a student atUniversity of California, Berkeley, and wanted to publish a paper and associatedsource codeon hisSnuffleencryption system. Bernstein was represented by theElectronic Frontier Foundation, who hired outside lawyerCindy Cohnand also obtainedpro bono publicoassistance from Lee Tien of Berkeley; M. Edward Ross of the San Francisco law firm of Steefel, Levitt & Weiss; James Wheaton and Elizabeth Pritzker of the First Amendment Project in Oakland; and Robert Corn-Revere, Julia Kogan, and Jeremy Miller of the Washington, DC, law firm of Hogan & Hartson. After four years and one regulatory change, theNinth Circuit Court of Appealsruled thatsoftwaresource codewas speech protected by theFirst Amendmentand that the government's regulations preventing its publication were unconstitutional.[3]Regarding those regulations, theEFFstates: Years before, the government had placed encryption, a method for scrambling messages so they can only be understood by their intended recipients, on theUnited States Munitions List, alongside bombs andflamethrowers, as a weapon to be regulated for national security purposes. Companies and individuals exporting items on the munitions list, including software with encryption capabilities, had to obtain priorState Departmentapproval. The government requesteden bancreview.[5]InBernstein v. U.S. Dept. of Justice, 192 F.3d 1308 (9th Cir. 1999), the Ninth Circuit ordered that this case be reheard by theen banccourt, and withdrew the three-judge panel opinion,Bernstein v. U.S. Dept. of Justice, 176 F.3d 1132 (9th Cir. 1999).[6] The government modified the regulations again[when?], substantially loosening them, and Bernstein, then a professor at theUniversity of Illinois at Chicago, challenged them again. This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it and asked Bernstein to come back when the government made a "concrete threat".[7] Apple citedBernstein v. USin itsrefusal to hack the San Bernardino shooter's iPhone, saying that they could not be compelled to "speak" (write code).[8] This cryptography-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Bernstein_v._United_States
Email encryption is encryption of email messages to protect the content from being read by entities other than the intended recipients. Email encryption may also include authentication. Email is prone to the disclosure of information. Although many emails are encrypted during transmission, they are frequently stored in plaintext, potentially exposing them to unauthorized access by third parties, including email service providers.[1] By default, popular email services such as Gmail and Outlook do not enable end-to-end encryption.[2] Utilizing certain available tools, unauthorized individuals may access and read the email content.[3] Email encryption can rely on public-key cryptography, in which users can each publish a public key that others can use to encrypt messages to them, while keeping secret a private key they can use to decrypt such messages or to digitally sign messages they send. With the original design of the email protocol, the communication between email servers was in plain text, which posed a huge security risk. Over the years, various mechanisms have been proposed to encrypt the communication between email servers. Encryption may occur at the transport level (aka "hop by hop") or end-to-end. Transport layer encryption is often easier to set up and use; end-to-end encryption provides stronger defenses, but can be more difficult to set up and use. One of the most commonly used email encryption extensions is STARTTLS. It is a TLS (SSL) layer over the plaintext communication, allowing email servers to upgrade their plaintext communication to encrypted communication. Assuming that the email servers on both the sender and the recipient side support encrypted communication, an eavesdropper monitoring the communication between the mail servers cannot use packet sniffing tools to view the email contents. Similar STARTTLS extensions exist for the communication between an email client and the email server (see IMAP4 and POP3, as stated by RFC 2595). STARTTLS may be used regardless of whether the email's contents are encrypted using another protocol. The encrypted message is revealed to, and can be altered by, intermediate email relays. In other words, the encryption takes place between individual SMTP relays, not between the sender and the recipient. This has both good and bad consequences. A key positive trait of transport layer encryption is that users do not need to do or change anything; the encryption automatically occurs when they send email. In addition, since receiving organizations can decrypt the email without cooperation of the end user, receiving organizations can run virus scanners and spam filters before delivering the email to the recipient. However, it also means that the receiving organization, and anyone who breaks into that organization's email system (unless further steps are taken), can easily read or modify the email. If the receiving organization is considered a threat, then end-to-end encryption is necessary.
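As a concrete illustration of the hop-by-hop model described above, the following sketch uses Python's standard smtplib to upgrade a plaintext SMTP session with STARTTLS before submitting a message. The host name, port and credentials are hypothetical placeholders; actual settings depend on the provider.

```python
# Sketch: upgrading a plaintext SMTP session to TLS with STARTTLS.
# Host, port and credentials are hypothetical placeholders.
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Hello"
msg.set_content("This hop is encrypted in transit, but relays can still read it.")

context = ssl.create_default_context()       # verify the server certificate
with smtplib.SMTP("mail.example.org", 587) as server:
    server.starttls(context=context)         # upgrade the plaintext session
    server.login("alice@example.org", "app-specific-password")
    server.send_message(msg)
```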
The Electronic Frontier Foundation encourages the use of STARTTLS, and has launched the 'STARTTLS Everywhere' initiative to "make it simple and easy for everyone to help ensure their communications (over email) aren't vulnerable to mass surveillance."[4] Support for STARTTLS has become quite common; Google reports that on Gmail, 90% of incoming email and 90% of outgoing email was encrypted using STARTTLS by July 24, 2018.[5] Mandatory certificate verification is historically not viable for Internet mail delivery without additional information, because many certificates are not verifiable and few want email delivery to fail in that case.[6] As a result, most email that is delivered over TLS uses only opportunistic encryption. DANE is a proposed standard that makes an incremental transition to verified encryption for Internet mail delivery possible.[7] The STARTTLS Everywhere project uses an alternative approach: it supports a "preload list" of email servers that have promised to support STARTTLS, which can help detect and prevent downgrade attacks. In end-to-end encryption, the data is encrypted and decrypted only at the end points. In other words, an email sent with end-to-end encryption would be encrypted at the source, unreadable to service providers like Gmail in transit, and then decrypted at its endpoint. Crucially, the email would only be decrypted for the end user on their computer and would remain in encrypted, unreadable form to an email service like Gmail, which wouldn't have the keys available to decrypt it.[8] Some email services integrate end-to-end encryption automatically. Notable protocols for end-to-end email encryption include OpenPGP and S/MIME. OpenPGP is a data encryption standard that allows end users to encrypt the email contents. There are various software packages and email-client plugins that allow users to encrypt a message using the recipient's public key before sending it. At its core, OpenPGP uses a public-key cryptography scheme where each email address is associated with a public/private key pair. OpenPGP provides a way for end users to encrypt the email without any support from the server and be sure that only the intended recipient can read it. However, there are usability issues with OpenPGP — it requires users to set up public/private key pairs and make the public keys available widely. Also, it protects only the content of the email, and not metadata — an untrusted party can still observe who sent an email to whom. A general downside of end-to-end encryption schemes, where the server does not have decryption keys, is that they make server-side search almost impossible, thus impacting usability. The content of an email can also be end-to-end encrypted by putting it in an encrypted file (using any kind of file encryption tool[9]) and sending that encrypted file as an email attachment.[10] The Signed and Encrypted Email Over The Internet demonstration has shown that organizations can collaborate effectively using secure email. Previous barriers to adoption were overcome, including the use of a PKI bridge to provide a scalable public key infrastructure (PKI) and the use of network security guards checking encrypted content passing in and out of corporate network boundaries to avoid encryption being used to hide malware introduction and information leakage. Transport layer encryption using STARTTLS must be set up by the receiving organization. This is typically straightforward; a valid certificate must be obtained and STARTTLS must be enabled on the receiving organization's email server.
To help prevent downgrade attacks, organizations can submit their domain to the 'STARTTLS Policy List'.[11] Most full-featured email clients provide native support for S/MIME secure email (digital signing and message encryption using certificates). Other encryption options include PGP and GNU Privacy Guard (GnuPG). Free and commercial software (desktop applications, webmail and add-ons) is available as well.[12] While PGP can protect messages, it can also be hard to use in the correct way. Researchers at Carnegie Mellon University published a paper in 1999 showing that most people couldn't figure out how to sign and encrypt messages using the then-current version of PGP.[13] Eight years later, another group of Carnegie Mellon researchers published a follow-up paper saying that, although a newer version of PGP made it easy to decrypt messages, most people still struggled with encrypting and signing messages, finding and verifying other people's public encryption keys, and sharing their own keys.[14] Because encryption can be difficult for users, security and compliance managers at companies and government agencies automate the process for employees and executives by using encryption appliances and services that automate encryption. Instead of relying on voluntary co-operation, automated encryption, based on defined policies, takes the decision and the process out of the users' hands. Emails are routed through a gateway appliance that has been configured to ensure compliance with regulatory and security policies. Emails that require it are automatically encrypted and sent.[15] If the recipient works at an organization that uses the same encryption gateway appliance, emails are automatically decrypted, making the process transparent to the user. Recipients who are not behind an encryption gateway then need to take an extra step, either procuring the public key or logging into an online portal to retrieve the message.[15][16] Since 2000, the number of available encrypted email providers[17] has increased significantly.[18]
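To illustrate the OpenPGP workflow discussed above, here is a minimal sketch using the third-party python-gnupg wrapper, which drives a local GnuPG installation. It assumes GnuPG is installed and that the recipient's public key has already been imported into the keyring under the (hypothetical) address bob@example.net.

```python
# Sketch: end-to-end encrypting a message body with OpenPGP via the
# python-gnupg wrapper.  Assumes GnuPG is installed and that the
# recipient's public key is already in the local keyring.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

encrypted = gpg.encrypt("Meet at noon.", recipients=["bob@example.net"])
if not encrypted.ok:
    raise RuntimeError(f"encryption failed: {encrypted.status}")

armored = str(encrypted)  # ASCII-armoured ciphertext, safe to paste into a mail body
# Only the holder of Bob's private key can decrypt `armored`; relaying mail
# providers see only the ciphertext and the message metadata.
```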
https://en.wikipedia.org/wiki/Email_encryption
Email privacy[1]is a broad topic dealing with issues of unauthorized access to, and inspection of,electronic mail, or unauthorized trackingwhen a user reads an email. This unauthorized access can happen while an email is in transit, as well as when it is stored onemail serversor on a user's computer, or when the user reads the message. In countries with a constitutional guarantee of thesecrecy of correspondence, whether email can be equated withletters—therefore having legal protection from all forms ofeavesdropping—is disputed because of the very nature of email.[2] In 2022,[1]a lookback at an 1890 law review article about personal privacy (the "right to be left alone”)[3]noted how "digital technology has been allowed to invade our lives" both by personal choice and behavior, and also by various forms of ongoing monitoring.[4] An email has to go through potentially untrustworthy intermediate computers (email servers,ISPs) before reaching its destination, and there is no way to verify if it was accessed by an unauthorized entity.[5]Through the process of information being sent from the user's computer to the email service provider, data acquisition is taking place, most of the time without the user knowing. There are certain data collection methods (routers) that are used for data privacy concerns, but there are others that can be harmful to the user.[6]This is different from a letter sealed in an envelope, where, by close inspection of the envelope, it might be possible to determine if it had been previously opened. In that sense, an email is much like a postcard, the contents of which are visible to anyone who handles it. There are certain technological workarounds that make unauthorized access to email difficult, if not impossible. However, since email messages frequently cross national boundaries, and different countries have different rules and regulations governing who can access an email, email privacy is a complicated issue. Companies may have email policies requiring employees to refrain from sending proprietary information and company classified information through personal emails or sometimes even work emails.[7]Co-workers are restricted from sending private information such as company reports,slide show presentationswith confidential information, or email memos.[8][9] In 2004, consumer privacy advocates and civil rights organizations urged Google to suspend Gmail over privacy rights concerns.[10]The 31 organizations signed a letter calling upon Google to be more transparent about its information handling practices regarding data retention and sharing within its business units. They voiced concerns about Google’s plan to scan the text of all incoming messages with the information to be used for ad placement. They noted specific concerns regarding the scanning confidential email for inserting third party ad content, which violates the implicit trust of email service providers, possibly establishing a dangerous precedent.[11] There are some technical workarounds to ensure better privacy of email communication. Although it is possible to secure the content of thecommunicationbetween emails, protecting themetadata, for instance who sent email to whom, is fundamentally difficult.[12]Even though certain technological measures exist, the widespread adoption is another issue because of reduced usability. 
According to Hilarie Orman, mail encryption was first developed in the mid-1980s.[13]She states that mail encryption is a powerful tool that protects one's email privacy.[13]Although it is widely available, it is rarely used, with the majority of email sent at risk of being read by third parties.[13]In general, encryption provides protection against malicious entities. However, a court order might force the responsible parties to hand over decryption keys, with a notable example beingLavabit.[14]Encryption can be performed at different levels of the email protocol. With the original design ofemail protocol, the communication between email servers was plain text, which posed a huge security risk. Over the years, various mechanisms have been proposed to encrypt the communication between email servers. One of the most commonly used extension isSTARTTLS. It is aTLS (SSL)layer over the plaintext communication, allowing email servers to upgrade their plaintext communication to encrypted communication. Assuming that the email servers on both the sender and the recipient side support encrypted communication, an eavesdropper snooping on the communication between the mail servers cannot see the email contents. Similar extensions exist for the communication between an email client and the email server. Inend-to-end encryption, the data is encrypted and decrypted only at the end points. In other words, an email sent with end-to-end encryption would be encrypted at the source, unreadable to email service providers in transit, and then decrypted at its endpoint. Crucially, the email would only be decrypted for the end user on their computer and would remain in the encrypted, unreadable form to an email service, which would not have the keys available to decrypt it.[15]Some email services integrate end-to-end encryption automatically. OpenPGPis a data encryption standard that allows end-users to encrypt the email contents. There are various software and email-client plugins that allow users to encrypt the message using the recipient's public key before sending it. At its core, OpenPGP uses aPublic Key Cryptographyscheme where each email address is associated with a public/private key pair.[16] OpenPGP provides a way for the end users to encrypt the email without any support from the server and be sure that only the intended recipient can read it. However, there are usability issues with OpenPGP—it requires users to set up public/private key pairs and make the public keys available widely. Also, it protects only the content of the email, and not metadata—an untrusted party can still observe who sent an email to whom. A general downside of end-to-end encryption schemes—where the server does not have decryption keys—is that it makes server side search almost impossible, thus impacting usability. The architecture of the system also affects the privacy guarantees and potential venues forinformation leakage. The email protocol was originally designed for email clients—programs that periodically download email from a server and store it on the user's computer. However, in recent years,[when?]webmailusage has increased due to the simplicity of usage and no need for the end users to install a program.Secure messagingis in use where an entity (hospitals, banks, etc.) wishes to control the dissemination of sensitive information. Secure messaging functions similarly to webmail, in that the user must log on to a website—operated by the company or entity in question—to read received messages. 
With both secure messaging and webmail, all email data is stored on the email provider's servers and thus subject to unauthorized access, or access by government agencies. However, in the case of email clients, it is possible to configure the client such that it downloads a copy of the message as it arrives, which is then deleted from the server. Although there is no way to guarantee whether a server has deleted its copy of an email, this approach still provides protection against situations where a benign email server operator is served with a court order. Although encryption provides a way to protect the contents of the message, it still fails to protect the metadata. Theoretically, mix networks can be used to protect the anonymity of communication (who contacted whom). Another workaround that has been used[17] is to save a message as a draft in a webmail system and share the webmail login credentials with an intended recipient. As an example of a dead drop, this method defeats any kind of monitoring based on the actual email sent. However, this method infamously failed to protect the privacy of the participants in the Petraeus scandal; after coming under investigation for unrelated activities, communication between the parties was accessed by the FBI.[18][19] Another aspect of email privacy is the privacy risk that arises from embedded file metadata in email attachments. Such metadata can divulge privacy-compromising data, both to unauthorized parties that gain access to the email message and to the intended recipient of the email message. This problem can be mitigated by using metadata removal software. There are solutions that integrate with email clients and remove metadata from outgoing email attachments. There are also server-based solutions that automatically remove metadata from outgoing email messages at the organization's network gateway. The Fourth Amendment to the United States Constitution provides that "[T]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated." This Amendment guarantees the privacy, dignity, and security of persons against certain arbitrary and invasive acts by officers of the government or those acting at their direction. The Fourth Amendment is often invoked to protect individual privacy rights against government activities. In the case of employer emails, although the words "the people" may appear to be broad and to include any employee, this amendment (or any other part of the United States Constitution) has not been interpreted to protect the privacy interest of private-sector employees. By contrast, public-sector employees of federal, state, and local governments usually have privacy protection under the United States Constitution. The protection under the Fourth Amendment is not unlimited. For example, in O'Connor v. Ortega, the officials at a state hospital, after placing Dr. Magno Ortega on administrative leave pending an investigation into possible workplace improprieties, searched his office.[20] Dr. Ortega filed an action against the hospital alleging that the search violated his Fourth Amendment rights. The district court found that the search was proper, but on appeal the circuit court found that the search did violate Dr. Ortega's Fourth Amendment rights. The Supreme Court disagreed with both the lower courts. The Court's decision was based on consideration of two factors: (i) whether Dr.
Ortega had a reasonableexpectation of privacy, and (ii) whether the search of Dr. Ortega's office was reasonable. The Court held that because Dr. Ortega had a private office, he had a reasonable expectation of privacy. However, the Court also found the search of his office to be reasonable because it was work-related. It considered the government's need to ensure efficient operation of the workplace as outweighing an employee's expectation of privacy, even if the privacy expectation is reasonable. Since work environments vary, a public-sector employee's expectation of privacy must be determined on a case-by-case basis. Factors the Court considered included (i) notice to employees, (ii) exclusive possession by an employee of keys to a desk or file cabinet, (iii) the government's need for access to documents, and (iv) the government's need to protect records and property. In view of the Ortega decision, the extent of constitutional protection with respect to emails is unclear. Unlike a locked desk or file cabinet, emails are not locked; the employer has access to all messages on the system. Thus, it may be argued that with respect to email, the public-sector employee's legitimate expectations of privacy are diminished. In some cases, the US constitutional protection can also extend to private-sector employees. This is possible when a private-sector employee can demonstrate "involved sufficient government action".[21] State constitutions in at least 10 states (Alaska, Arizona, California, Florida, Hawaii, Illinois, Louisiana, Montana, South Carolina and Washington) grant individuals an explicit right to privacy. The privacy protections afforded by some of these states mirrors theFourth Amendmentof the US Constitution but often add more specific references to privacy. Further, general constitutional provisions in other states have also been interpreted by courts to have established privacy rights of various types. Like the rights under the US constitution, the privacy rights under state constitutions also usually extend to protection from the actions of state governments, not private organizations. In 1972,California amended Article I, Section 1of its state constitution to include privacy protections.[22]A California appellate court then held that the state's right of privacy applied to both public and private sector interests.[23]Further, inSoroka v. Dayton Hudson Corp., the California Court of Appeals reaffirmed this view and held that an employer may not invade the privacy of its employees absent a "compelling interest".[24] In August 2014, Missouri became the first state to provide explicit constitutional (art. I, § 15) protection from unreasonable searches and seizures for electronic communications or data, such as that found on cell phones and other electronic devices.[25] The real-time interception of the contents of electronic communication is prohibited under the wiretap act,[26]while thePen Register Act[27]provides protection from the interception of the non-content part of the electronic communication. The "From" and "To" fields along with theIP addressof the sender/receiver have been considered as non-content information,[28]while the subject has been considered as part of the content.[29] Unlike the European Union, which provides the General Data Protection Regulation (GDPR), the United States lacks an overall data privacy protection law. 
Once the email is stored on a computer (email server/user computer), it is protected from unauthorized access under theStored Communications Act(Title II ofElectronic Communications Privacy Act).[30] After 180 days in the US, email messages stored on a third party server lose their status as a protected communication under theElectronic Communications Privacy Act, and become just another database record.[31][32]After this time has passed, a government agency needs only asubpoena—instead of awarrant—in order to access email from a provider. However, if the emails are stored on a user's personal computer instead of a server, then that would require the police to obtain a warrant first to seize the contents. This has been criticized to be an obsolete law; at the time this law was written, extremely high-capacity storage on webmail servers was not available. In 2013, members of the US Congress proposed to reform this procedure.[33] An exception to these laws, however, is for email service providers.[34]Under the provider exception, the laws do not apply to "the person or entity providing a wire or electronic communications service."[35]This exception, for example, allows various free of charge email providers (Gmail,Yahoo Mail, etc.) to process user emails to displaycontextual advertising. Another implication of the provider exception is access by employers. Email sent by employees through their employer's equipment has no expectation of privacy, as the employer may monitor all communications through their equipment.[citation needed]According to a 2005 survey by theAmerican Management Association, about 55% of US employers monitor and read their employees' email.[36]Attorney–client privilegeis not guaranteed through an employer's email system, with US courts rendering contradictory verdicts on this issue.[37]Generally speaking, the factors courts use to determine whether companies can monitor and read personal emails in the workplace include: (i) the use of a company email account versus a personal email account and (ii) the presence of a clear company policy notifying employees that they should have no expectation of privacy when sending or reading emails at work, using company equipment, or when accessing personal accounts at work or on work equipment.[38] Privacy protections of electronic communications vary from state to state. Most states address these issues through either wiretapping legislation or electronic monitoring legislation or both.[39] Unlike the EPCA, most state statutes do not explicitly cover email communications. In these states a plaintiff may argue that the courts should interpret these statutes to extend protection to email communications. A plaintiff can argue that the wiretapping statutes reflect the general intent of the legislature to protect the privacy of all communications that travel across the telephone line (including emails). Further, the plaintiff may argue that email communications may be analogized to telegraphic communications, which are explicitly protected under most state statutes.[39] Generally, such efforts are not effective in protecting email privacy. For example, inShoars vs. Epson America, Inc.case (Cal. Sup. Ct. 
filed July 30, 1990), a California superior court refused to find employee email privacy protection in California's criminal code.[clarification needed] California Penal Code Section 631 prohibits wire-tapping without the consent of all parties involved, adding that a person may not "read or attempt to read, learn the contents or meaning of any message, report, or communication while the same is in transit or passing over any such wire, line, or cable, or is being sent from, or received at any place within the state."[40] The court dismissed the lawsuit, ruling that Section 631 did not apply since the legislation did not specifically refer to email communication. The protection of email privacy under state common law is evolving[timeframe?] through state court decisions. Under the common law, email privacy is protected under the tort of invasion of privacy and the causes of action related to this tort.[39] Four distinct torts protect the right of privacy. These are (i) unreasonable intrusion upon the seclusion of another, (ii) misappropriation of another's name and likeness, (iii) unreasonable publicity given to another's private life, and (iv) publicity that unreasonably places another in a false light before the public. Of these, the tort of "unreasonable intrusion upon the seclusion of another" is most relevant to the protection of email privacy.[39] "Unreasonable intrusion upon seclusion of another" requires that the matter intruded upon was intended to be private and that the intrusion was offensive to an individual.[41] The fifty-five-article-long Charter of Fundamental Rights of the European Union grants certain fundamental rights, such as the "right to be left alone" and "respect for private life", to both European Union citizens and residents.[42] According to article 7 of the charter, everyone has the right to respect for his or her private and family life, home, and communications. The charter came into full legal effect when the Lisbon Treaty entered into force on 1 December 2009. The individual member states cannot enforce local laws that are contradictory to what they have already agreed upon as a European Union member. It was established in Costa v ENEL that European Union law is placed above the laws of its individual member states. Most employers make employees sign an agreement that grants the right to monitor their email and computer usage. Signing this agreement normally deprives an employee of any reasonable expectation of privacy, which means that the employer can legally search through employee emails. Even without an agreement, courts have rarely found that the employee had a reasonable expectation of privacy in their email at work, for a variety of reasons. For example, one court held that emails used in a business context are simply a part of the office environment, the same as a fax or copy machine, in which one does not have a reasonable expectation of privacy. Another court found that by corresponding with other people at work, work email was inherently work-related, and thus there could be no reasonable expectation of privacy. Employers usually do not have very many obstacles preventing them from searching employee emails. Employers may take the position that employees are sending communications from their equipment that could affect their business; this is usually considered to be a sufficient justification to search through employee emails.[citation needed][43] Employers may also monitor work emails to ensure the email system is being used appropriately for work purposes.
Furthermore, asworkplace harassmentlawsuits are prevalent, one way for employers to protect themselves from liability is to monitor and attempt to prevent any harassment in the first place. Many employers run software that searches for offensive words and highlights problematic emails.[citation needed]The other main concern with liability is that old emails may be used against the employer in a lawsuit.[44]Many employers consider themonitoringof emails to be a right, as well as a necessity, because they take ownership of the resources. The justifications that employers use to reason their monitoring appears to belegal, like preventing misuse of resources that they own.[45] Beyond the lack of privacy for employee email in a work setting, there is the concern that a company's proprietary information, patents, and documents could be leaked, intentionally or unintentionally. This concern is seen in for-profit businesses, non-profit firms, government agencies, and other sorts of start-ups and community organizations. Firms usually ask employees or interns to not send work-related material to personal emails or through social media accounts, for example. Even within the firm's email network and circle of connections, important information could still be leaked or stolen by competitors.[46]In order to remedy this, many firms hold training sessions for employees that go over common unethical[according to whom?]practices, what employees should do in order to share files/send emails, and how employees can report incidences where company information is in jeopardy. This way of training employees enables employees to understand email privacy and know what type of information can be shared and what documents and information cannot be shared with others. The information privacy agreement that states an employee cannot send proprietary information to others applies not just to people outside the firm but also other employees in the firm. Most firms, for example, do not allow employees to exchange slide show presentations or slide decks that contain proprietary information through personal emails. Government employees have further reduced privacy than the private sector employees. Under various public records acts and theFreedom of Information Act(FOIA), the public can gain access to almost anything a government employee writes down. Government employees may also have their personal emails subject to disclosure if the email pertains to government business.[47]Due to the nature of their job, courts are typically unwilling to find that government employees had a reasonable right to privacy in the first place.[44] Unlike work emails, personal email from one's personal email account and computer is more likely to be protected as there is a much more reasonable expectation of privacy, but even personal emails may not be fully protected. Because emails are stored locally, at the ISP, and on the receiving end, there are multiple points at whichsecurity breakersor law enforcement can gain access to them. While it may be difficult for law enforcement to legally gain access to an individual's personal computer, they may be able to gain access to the person's emails easily from the ISP. ISPs are also increasingly creating End User Service Agreements that users must agree to abide by. 
These agreements reduce any expectation of privacy and often include terms that grant the ISP the right to monitor the network traffic or turn over records at the request of a government agency.[44] Mental healthcare professionals frequently use email for scheduling appointments and delivering treatments, offering benefits such as permanence and spontaneity compared with oral conversations. However, communicating Protected Health Information (PHI) via email poses risks due to vulnerabilities in email systems and the potential for unintended breaches. Providers have less control over third-party email systems, increasing the likelihood of confidentiality breaches through human error, malicious acts, or phishing attacks.[48] From the documents leaked by ex-NSA contractor Edward Snowden, it became well known that various governments have been running programs to tap all kinds of communication at massive scale, including email. While the legality of this is still under question,[timeframe?] it is clear that the emails of citizens with no ties to a terrorist organization have been intercepted and stored. Whistleblower and former National Security Agency (NSA) employee William Binney has reported that the NSA has collected over 20 trillion communications via interception,[49] including many email communications, representing one aspect of the NSA warrantless surveillance controversy. A lawsuit filed by the American Civil Liberties Union and other organizations alleges that Verizon unlawfully gave the US government unrestricted access to its entire Internet traffic without a warrant and that AT&T had a similar arrangement with the National Security Agency.[50] While the FBI and NSA maintain that all their activities were and are legal, Congress passed the FISA Amendments Act of 2008 (FAA), granting AT&T and Verizon immunity from prosecution.[51] Spy pixels, which report private details (IP address, time of reading the email, the fact that the email was read) to the sender of the email without the recipient's conscious approval, were described as "endemic" in February 2021. The "Hey" email service, contacted by BBC News, estimated that it blocked spy pixels in about 600,000 out of 1,000,000 messages per day.[52][53]
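The tracking mechanism itself is simple: the message's HTML part references a tiny remote image whose URL identifies the recipient, so fetching it tells the sender when and from where the mail was opened. Blocking therefore amounts to refusing to fetch remote images. The sketch below is a minimal illustration of that idea in Python; the sample HTML and the tracker domain are invented for the example and do not come from any particular mail service.

    # A minimal sketch of spy-pixel blocking: remove remote <img> references
    # from an email's HTML body before rendering it. The sample HTML and the
    # tracker domain are made up for illustration.
    import re

    def strip_remote_images(html: str) -> str:
        # Drop any <img> tag whose src points at an http(s) URL.
        # Real mail clients usually rewrite or proxy these instead of deleting them.
        return re.sub(r'<img\b[^>]*\bsrc\s*=\s*["\']https?://[^>]*>', "", html,
                      flags=re.IGNORECASE)

    message_html = ('<p>Hello!</p>'
                    '<img src="https://tracker.example.com/pixel.gif?id=42" width="1" height="1">')
    print(strip_remote_images(message_html))   # -> <p>Hello!</p>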
https://en.wikipedia.org/wiki/Email_privacy
Gpg4winis an email and file encryption package for most versions ofMicrosoft WindowsandMicrosoft Outlook, which utilises theGnuPGframework forsymmetricandpublic-key cryptography, such as data encryption,digital signatures,hash calculationsetc. The original creation of Gpg4win was initiated and funded by Germany'sFederal Office for Information Security(BSI) in 2005,[3][4]resulting in the release of Gpg4win 1.0.0 on 6 April 2006;[5]however Gpg4win and all included tools arefree and open source software, and it is typically the non-proprietary option for privacy recommended[6][7]to Windows users. As Gpg4win v1 was a much overhauled derivate of GnuPP,[8]both were usingGnuPG v1for cryptographic operations and thus only supportedOpenPGPas cryptography standard. Hence in 2007 the development of a fundamentally enhanced version was started, also with support from the German BSI (Federal Office for Information Security); this effort culminated in the release of Gpg4win 2.0.0 on 7 August 2009 after a protracted beta testing phase,[9]which was based on GnuPG 2.0, includedS/MIMEsupport, Kleopatra as a new certificate manager, theExplorerplug-in GpgEX for cryptography operations on files, basic support ofsmart cards, a full set of German dialogue texts in addition to the English ones, new manuals in English and German, plus many other enhancements.[10] In contrast to Gpg4win v2, which focused on new features and software components, the development of Gpg4win v3 focused on usability, plus consolidation of code and features:[11]This resulted in the release of Gpg4win 3.0.0 on 19 September 2017 with proper support forElliptic Curve Cryptography (ECC)by utilising GnuPG 2.2 (instead of 2.0), broadened, stabilised and enhanced smart card support, a fundamentally overhauledOutlookplug-in GpgOL for Outlook 2010 and newer, support of 64-bit versions of Outlook 2010 and newer, supporting dialogues in all languages which KDE supports etc.[12]It is also distributed as GnuPG VS-Desktop with commercial support and approval for handlingNATO RESTRICTED,RESTREINT UE/EU RESTRICTEDandGerman VS-NfDdocuments, which in turn has become the major source of revenue for maintaining and further developing the GnuPG framework and Gpg4win.[13] Gpg4win 4.0.0, released on 21 December 2021,[14]switched to using GnuPG 2.3 (from 2.2) and continued to refine and enhance the feature set of Gpg4win v3.[15]
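Gpg4win is a packaging of GnuPG and graphical front-ends rather than a programming library, but the operations it exposes (OpenPGP encryption and digital signatures) can be scripted against the bundled GnuPG engine. The sketch below is one possible illustration using the third-party python-gnupg wrapper, which is not part of Gpg4win; it assumes a gpg binary on the PATH, an existing secret key, and that the public key for the hypothetical address alice@example.org has already been imported.

    # A minimal sketch of OpenPGP encryption and detached signing via GnuPG,
    # the engine that Gpg4win packages for Windows. Assumes the third-party
    # python-gnupg module and an existing keyring; alice@example.org is a
    # hypothetical recipient.
    import gnupg

    gpg = gnupg.GPG()  # uses the default GnuPG home directory

    # Encrypt a short message to Alice's public key (ASCII-armoured output).
    encrypted = gpg.encrypt("meeting at noon", ["alice@example.org"])
    print(encrypted.ok, str(encrypted)[:40])

    # Create a detached signature over a file with the default secret key.
    with open("report.txt", "rb") as f:
        signature = gpg.sign_file(f, detach=True)
    print(bool(signature))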
https://en.wikipedia.org/wiki/Gpg4win
Incomputer security, akey serveris a computer that receives and then serves existing cryptographic keys to users or other programs. The users' programs can be running on the same network as the key server or on another networked computer. The keys distributed by the key server are almost always provided as part of a cryptographically protected public key certificates containing not only the key but also 'entity' information about the owner of the key. The certificate is usually in a standard format, such as the OpenPGP public key format, theX.509certificate format, or the PKCS format. Further, the key is almost always a public key for use with an asymmetric key encryption algorithm. Key servers play an important role inpublic key cryptography. In public key cryptography an individual is able to generate akey pair, where one of the keys is kept private while the other is distributed publicly. Knowledge of the public key does not compromise the security of public key cryptography. An individual holding the public key of a key pair can use that key to carry out cryptographic operations that allow secret communications with strong authentication of the holder of the matching private key. The need to have the public key of a key pair in order to start communication or verify signatures is a bootstrapping problem. Locating keys on the web or writing to the individual asking them to transmit their public keys can be time consuming and unsecure. Key servers act as central repositories to alleviate the need to individually transmit public keys and can act as the root of achain of trust. The first web-basedPGPkeyserver was written for a thesis by Marc Horowitz,[1]while he was studying atMIT. Horowitz's keyserver was called the HKP Keyserver after a web-based OpenPGP HTTP Keyserver Protocol (HKP),[2]used to allow people to interact with the keyserver. Users were able to upload, download, and search keys either through HKP on TCP port 11371, or through web pages which ran CGI scripts. Before the creation of the HKP Keyserver, keyservers relied on email processing scripts for interaction. A separate key server, known as the PGP Certificate Server, was developed byPGP, Inc.and was used as the software (through version 2.5.x for the server) for the default key server in PGP through version 8.x (for the client software), keyserver.pgp.com.Network Associateswas granted apatentco-authored byJon Callas(United States Patent 6336186)[3]on the key server concept. To replace the aging Certificate Server, anLDAP-based key server was redesigned atNetwork Associatesin part byRandy HarmonandLen Sassaman, called PGP Keyserver 7. With the release of PGP 6.0, LDAP was the preferred key server interface for Network Associates’ PGP versions. This LDAP and LDAPS key server (which also spoke HKP for backwards compatibility, though the protocol was (arguably correctly) referred to as “HTTP” or “HTTPS”) also formed the basis for the PGP Administration tools for private key servers in corporate settings, along with aschemaforNetscape Directory Server. PGP Keyserver 7 was later replaced by the newPGP CorporationPGP Global Directory of 2011 which allows PGP keys to be published and downloaded using HTTPS or LDAP.[4] The OpenPGP world largely used its own development of keyserver software independent from the PGP Corporation suite. 
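Because HKP is a plain HTTP interface, a key lookup against such a server can be sketched in a few lines. The example below assumes the conventional /pks/lookup endpoint described above and uses keys.openpgp.org as the server; the email address searched for is hypothetical.

    # A rough sketch of an OpenPGP HTTP Keyserver Protocol (HKP) lookup.
    # HKP servers expose /pks/lookup; op=get retrieves an ASCII-armoured key,
    # op=index lists matches, and options=mr requests machine-readable output.
    # The server name is real (keys.openpgp.org); the address is hypothetical.
    import requests

    def fetch_key(email: str, server: str = "https://keys.openpgp.org") -> str:
        resp = requests.get(
            f"{server}/pks/lookup",
            params={"op": "get", "options": "mr", "search": email},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.text   # "-----BEGIN PGP PUBLIC KEY BLOCK-----", etc.

    print(fetch_key("alice@example.org")[:40])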
The main software used until the 2019 spamming attack was "SKS" (Synchronizing Key Server), written by Yaron Minsky.[5] The public SKS pool (consisting of many interconnected SKS instances) provided access via HKPS (HKP with TLS) and HTTPS. It finally shut down in 2021 following a number of GDPR requests that it was unable to process effectively.[6] A number of newer pools using other software have been made available following the shutdown of the SKS pool; see § Keyserver examples. Many publicly accessible key servers, located around the world, are computers which store and provide OpenPGP keys over the Internet for users of that cryptosystem. In this instance, the computers can be, and mostly are, run by individuals as a pro bono service, facilitating the web of trust model PGP uses. Several publicly accessible S/MIME key servers are available to publish or retrieve certificates used with the S/MIME cryptosystem. There are also multiple proprietary public key infrastructure systems which maintain key servers for their users; those may be private or public, and only the participating users are likely to be aware of those keyservers at all. The OpenPGP keyservers have suffered from a few problems since their development in the 1990s. Once a public key had been uploaded, it was purposely difficult to remove, as servers auto-synchronize with each other (this was done in order to resist government censorship). Some users stop using their public keys for various reasons, such as when they forget their passphrase, or if their private key is compromised or lost. In those cases, it was hard to delete a public key from the server, and even if it were deleted, someone else could upload a fresh copy of the same public key. This leads to an accumulation of old fossil public keys that never go away, a form of "keyserver plaque". The lack of a retraction mechanism also breached the European General Data Protection Regulation, which was cited as a reason for the closure of the SKS pool.[6] Modern PGP keyservers allow deletion of keys. Because only the owner of a key's e-mail address can upload a key to such servers (see next section), the key stays deleted unless the owner decides otherwise. Older keyservers also had no way to check whether a key was legitimate (that is, whether it belonged to its true owner). As a consequence, anyone could upload a bogus public key to the keyserver bearing the name of a person who in fact does not own that key, or, even worse, exploit this as a vulnerability: the Certificate Spamming Attack.[5][7]: §2.2 Modern keyservers, starting with the PGP Global Directory, use the e-mail address for confirmation. Such a keyserver sends an email confirmation request to the putative key owner, asking that person to confirm that the key in question is theirs. If they confirm it, the PGP Global Directory accepts the key. The confirmation can be renewed periodically, to prevent the accumulation of keyserver plaque. The result is a higher-quality collection of public keys, each of which has been vetted by email with the key's apparent owner. As a consequence, however, another problem arises: because the PGP Global Directory allows key account maintenance and verifies only by email, not cryptographically, anybody with access to the email account could, for example, delete a key and upload a bogus one. The last Internet Engineering Task Force draft for HKP also defines a distributed key server network based on DNS SRV records: to find the key of someone@example.com, one can ask for it by requesting example.com's key server.
For many individuals, the purpose of using cryptography is to obtain a higher level ofprivacyin personal interactions and relationships. It has been pointed out that allowing a public key to be uploaded in a key server when using decentralized web of trust based cryptographic systems, like PGP, may reveal a good deal of information that an individual may wish to have kept private. Since PGP relies on signatures on an individual's public key to determine the authenticity of that key, potential relationships can be revealed by analyzing the signers of a given key. In this way, models of entire social networks can be developed. (Mike Perry's 2013 criticism of the Web of Trust mentions the issue as already been "discussed at length".)[8] A number of modern key servers remove third-party signatures from the uploaded key. Doing so removes all personal connections into the Web of Trust, thus preventing any leakage from happening. The main goal, however, was to minimize the storage space required, as "signature spamming" can easily add megabytes to a key.[9][7]: §2.1 These are some keyservers that are often used for looking up keys withgpg --recv-keys.[10]These can be queried viahttps://(HTTPS) orhkps://(HKP overTLS) respectively.
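As a minimal illustration of such a lookup from GnuPG itself, the snippet below simply shells out to gpg with a keyserver override; the fingerprint shown is a placeholder rather than a real key.

    # A minimal sketch of fetching a key from an HKPS keyserver with GnuPG.
    # The fingerprint below is a placeholder; substitute a real one.
    import subprocess

    fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"   # hypothetical
    subprocess.run(
        ["gpg", "--keyserver", "hkps://keys.openpgp.org",
         "--recv-keys", fingerprint],
        check=True,
    )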
https://en.wikipedia.org/wiki/Key_server_(cryptographic)
PGP Virtual Diskis adisk encryptionsystem that allows one to create a virtualencrypted diskwithin a file. Older versions for Windows NT were freeware (for example, bundled withPGPv6.0.2i; and with some of the CKT builds of PGP). These are still available for download, but no longer maintained. Today, PGP Virtual Disk is available as part of thePGP Desktopproduct family, running onWindows 2000/XP/Vista, andMac OS X. This cryptography-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/PGPDisk
pretty Easy privacy (p≡p or pEp) was a pluggable data encryption and verification system that provided automatic cryptographic key management through a set of libraries for written digital communications. It existed as a plugin for Microsoft Outlook[1] and Mozilla Thunderbird[2] as well as a mobile app for Android[3][4] and iOS.[5] p≡p also worked under Microsoft Windows, Unix-like and Mac OS X operating systems. Its cryptographic functionality was handled by an open-source p≡p engine relying on already existing cryptographic implementations in software like GnuPG, a modified version of netpgp (used only in iOS), and (as of p≡p v2.0) GNUnet. pretty Easy privacy was first released in 2016.[6] It was free and open-source software. p≡p was advertised as being easy to install, use, and understand. p≡p did not depend on any specific platform, message transport system (SMS, email, XMPP, etc.), or centrally provided client–server or "cloud" infrastructures; p≡p was fully peer-to-peer by design.[7] Keys were exchanged opportunistically by transferring them via email.[8] Enigmail announced its support for the new "pretty Easy privacy" (p≡p) encryption in a joint Thunderbird extension to be released in December 2015.[9] Patrick Brunschwig, the head of Enigmail, announced that p≡p core functionality was implemented in Enigmail in October 2016, ready for the Mozilla Festival then taking place in London.[10] In July 2020, Thunderbird 78 dropped support for the Enigmail add-on.[11] Thunderbird 78 includes OpenPGP functionality and no longer requires the installation of external software.[12] The Internet Society Switzerland Chapter (ISOC-CH) and the Swiss p≡p foundation teamed up[13] to implement privacy-enhancing standards at the basic level of internet protocols and to document them in the work of the Internet Engineering Task Force (IETF). In March 2021, reports surfaced that p≡p had paid for fake reviews of its apps.[14] As of January 2024, the company overseeing p≡p is not operational. Its website no longer functions, and development of the system has ceased.
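The opportunistic key exchange described above can be summarised schematically: attach your own public key to every outgoing message, and encrypt only once the peer's key has been learned from earlier mail. The sketch below is my own illustration of that general idea, not the actual p≡p engine API; all names, the key store, and the stand-in encrypt helper are invented.

    # Schematic of opportunistic email encryption in the spirit of p≡p:
    # always attach your own public key; encrypt only when the peer's key is
    # already known from earlier mail. Names are illustrative, not the p≡p API.
    known_keys = {"bob@example.org": "BOB-PUBLIC-KEY"}   # learned from received mail

    def encrypt(plaintext, key):
        # Stand-in for a real OpenPGP encryption call (e.g. via GnuPG).
        return f"<ciphertext for {key}>"

    def prepare_outgoing(own_public_key, recipient, body):
        message = {"to": recipient, "attached_key": own_public_key}
        if recipient in known_keys:                       # peer key known: protect the body
            message["body"] = encrypt(body, known_keys[recipient])
            message["protection"] = "encrypted"
        else:                                             # first contact: send in the clear
            message["body"] = body
            message["protection"] = "unencrypted"
        return message

    print(prepare_outgoing("ALICE-PUBLIC-KEY", "bob@example.org", "hello")["protection"])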
https://en.wikipedia.org/wiki/Pretty_Easy_privacy
S/MIME(Secure/Multipurpose Internet Mail Extensions) is a standard forpublic-key encryptionand signing ofMIMEdata. S/MIME is on an IETF standards track and defined in a number of documents, most importantlyRFC8551. It was originally developed by RSA Data Security, and the original specification used the IETF MIME specification[1]with the de facto industry standardPKCS #7secure message format. Change control to S/MIME has since been vested in the IETF, and the specification is now layered onCryptographic Message Syntax(CMS), an IETF specification that is identical in most respects with PKCS #7. S/MIME functionality is built into the majority of modern email software and interoperates between them. Since it is built on CMS, MIME can also hold an advanced digital signature. S/MIME provides the following cryptographic security services for electronic messaging applications: S/MIME specifies the MIME typeapplication/pkcs7-mime[2](smime-type "enveloped-data") for data enveloping (encrypting) where the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object which subsequently is inserted into an application/pkcs7-mime MIME entity. Before S/MIME can be used in any of the above applications, one must obtain and install an individual key/certificate either from one's in-housecertificate authority(CA) or from a public CA. The acceptedbest practiceis to use separate private keys (and associated certificates) for signature and for encryption, as this permitsescrowof the encryption key without compromise to thenon-repudiationproperty of the signature key. Encryption requires having the destination party's certificate on store (which is typically automatic upon receiving a message from the party with a valid signing certificate). While it is technically possible to send a message encrypted (using the destination party certificate) without having one's own certificate to digitally sign, in practice, the S/MIME clients will require the user to install their own certificate before they allow encrypting to others. This is necessary so the message can be encrypted for both, recipient and sender, and a copy of the message can be kept (in the sent folder) and be readable for the sender. A typicalbasic("class 1") personal certificate verifies the owner's "identity" only insofar as it declares that the sender is the owner of the "From:" email address in the sense that the sender can receive email sent to that address, and so merely proves that an email received really did come from the "From:" address given. It does not verify the person's name or business name. If a sender wishes to enable email recipients to verify the sender's identity in the sense that a received certificate name carries the sender's name or an organization's name, the sender needs to obtain a certificate ("class 2") from a CA, who carries out a more in-depth identity verification process, and this involves making inquiries about the would-be certificate holder. For more detail on authentication, seedigital signature. Depending on the policy of the CA, the certificate and all its contents may be posted publicly for reference and verification. This makes the name and email address available for all to see and possibly search for. Other CAs only post serial numbers and revocation status, which does not include any of the personal information. The latter, at a minimum, is mandatory to uphold the integrity of the public key infrastructure. 
In 2020, the S/MIME Certificate Working Group[3] of the CA/Browser Forum was chartered to create a baseline requirement applicable to CAs that issue S/MIME certificates used to sign, verify, encrypt, and decrypt email. That effort is intended to produce a set of standards governing such certificates. Version 1 of the Baseline Requirements for the Issuance and Management of Publicly-Trusted S/MIME Certificates was published on January 1, 2023 by the CA/Browser Forum. It defined four types of S/MIME certificate: Mailbox-validated, Organization-validated, Sponsor-validated, and Individual-validated.[4] Any message that an S/MIME email client stores encrypted cannot be decrypted if the applicable key pair's private key is unavailable or otherwise unusable (e.g., the certificate has been deleted or lost or the private key's password has been forgotten). However, an expired, revoked, or untrusted certificate will remain usable for cryptographic purposes. Indexing of encrypted messages' clear text may not be possible with all email clients. Neither of these potential dilemmas is specific to S/MIME; they apply to cipher text in general, and they do not apply to S/MIME messages that are only signed and not encrypted. S/MIME signatures are usually "detached signatures": the signature information is separate from the text being signed. The MIME type for this is multipart/signed, with the second part having a MIME subtype of application/(x-)pkcs7-signature. Mailing list software is notorious for changing the textual part of a message and thereby invalidating the signature; however, this problem is not specific to S/MIME, and a digital signature only reveals that the signed content has been changed. On May 13, 2018, the Electronic Frontier Foundation (EFF) announced critical vulnerabilities in S/MIME, together with an obsolete form of PGP that is still used, in many email clients.[5] Dubbed EFAIL, the bug required significant coordinated effort by many email client vendors to fix.[6] Mitigations for both Efail vulnerabilities have since been addressed in the security considerations section of RFC 8551.
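As a concrete illustration of the detached-signature form described above, the sketch below uses the PKCS#7 builder from the pyca/cryptography package to produce a multipart/signed S/MIME payload. This is an assumption-laden example rather than a reference implementation: it presumes a reasonably recent version of that library, and the certificate and key file names are placeholders.

    # A minimal sketch of producing an S/MIME detached signature with the
    # pyca/cryptography package. "signer.pem" and "signer.key" are placeholder
    # file names for the signer's certificate and private key.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    cert = x509.load_pem_x509_certificate(open("signer.pem", "rb").read())
    key = serialization.load_pem_private_key(open("signer.key", "rb").read(), password=None)

    smime_message = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(b"Content-Type: text/plain\r\n\r\nHello, world\r\n")
        .add_signer(cert, key, hashes.SHA256())
        # DetachedSignature yields a multipart/signed structure whose second
        # part carries the application/pkcs7-signature payload, as described above.
        .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
    )
    print(smime_message.decode()[:120])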
https://en.wikipedia.org/wiki/S/MIME
ZRTP(composed of Z andReal-time Transport Protocol) is a cryptographickey-agreement protocolto negotiate thekeysforencryptionbetween two end points in aVoice over IP(VoIP) phone telephony call based on theReal-time Transport Protocol. It usesDiffie–Hellman key exchangeand theSecure Real-time Transport Protocol(SRTP) for encryption. ZRTP was developed byPhil Zimmermann, with help fromBryce Wilcox-O'Hearn, Colin Plumb,Jon Callasand Alan Johnston and was submitted to theInternet Engineering Task Force(IETF) by Zimmermann, Callas and Johnston on March 5, 2006 and published on April 11, 2011 asRFC6189. ZRTP ("Z" is a reference to its inventor, Zimmermann; "RTP" stands for Real-time Transport Protocol)[1]is described in theInternet Draftas a"key agreement protocol which performs Diffie–Hellman key exchange during call setup in-band in the Real-time Transport Protocol (RTP) media stream which has been established using some other signaling protocol such asSession Initiation Protocol(SIP). This generates a shared secret which is then used to generate keys and salt for a Secure RTP (SRTP) session."One of ZRTP's features is that it does not rely on SIP signaling for the key management, or on any servers at all. It supportsopportunistic encryptionby auto-sensing if the other VoIP client supports ZRTP. This protocol does not require prior shared secrets or rely on aPublic key infrastructure(PKI) or on certification authorities, in fact ephemeral Diffie–Hellman keys are generated on each session establishment: this allows the complexity of creating and maintaining a trusted third-party to be bypassed. These keys contribute to the generation of the session secret, from which the session key and parameters for SRTP sessions are derived, along with previously shared secrets (if any): this gives protection againstman-in-the-middle (MiTM) attacks, so long as the attacker was not present in the first session between the two endpoints. ZRTP can be used with any signaling protocol, including SIP,H.323,Jingle, anddistributed hash tablesystems. ZRTP is independent of the signaling layer, because all its key negotiations occur via the RTP media stream. ZRTP/S, a ZRTP protocol extension, can run on any kind of legacy telephony networks including GSM, UMTS, ISDN, PSTN,SATCOM,UHF/VHFradio, because it is a narrow-band bitstream-oriented protocol and performs all key negotiations inside the bitstream between two endpoints. Alan Johnston named the protocol ZRTP because in its earliest Internet drafts it was based on adding header extensions to RTP packets, which made ZRTP a variant of RTP. In later drafts the packet format changed to make it syntactically distinguishable from RTP. In view of that change, ZRTP is now apseudo-acronym. TheDiffie–Hellman key exchangeby itself does not provide protection against a man-in-the-middle attack. To ensure that the attacker is indeed not present in the first session (when no shared secrets exist), theShort Authentication String(SAS) method is used: the communicating parties verbally cross-check a shared value displayed at both endpoints. If the values do not match, a man-in-the-middle attack is indicated. A specific attack theorized against the ZRTP protocol involves creating a synthetic voice of both parties to read a bogus SAS which is known as a "Rich Littleattack", but this class of attack is not believed to be a serious risk to the protocol's security.[2]The SAS is used to authenticate the key exchange, which is essentially acryptographic hashof the two Diffie–Hellman values. 
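The short-authentication-string idea can be illustrated with a toy computation: hash the agreed key material and render a few bits of the digest in a form that is easy to read aloud. This is only a schematic of the concept, not the actual SAS derivation specified in RFC 6189.

    # Toy illustration of a Short Authentication String (SAS): both endpoints
    # hash the shared Diffie-Hellman result and read a short rendering of it
    # aloud. This is a schematic only, not the RFC 6189 derivation.
    import hashlib

    def short_auth_string(shared_secret: bytes, bits: int = 16) -> str:
        digest = hashlib.sha256(shared_secret).digest()
        value = int.from_bytes(digest[:4], "big") >> (32 - bits)
        return f"{value:0{bits // 4}X}"          # e.g. a 4-hex-digit string

    # Both parties compute this from the same DH output; an attacker who
    # substituted keys has roughly a 1-in-2**bits chance of matching it.
    print(short_auth_string(b"example shared secret"))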
The SAS value is rendered to both ZRTP endpoints. To carry out authentication, this SAS value is read aloud to the communication partner over the voice connection. If the values on both ends do not match, a man-in-the-middle attack is indicated; if they do match, a man-in-the-middle attack is highly unlikely. The use of hash commitment in the DH exchange constrains the attacker to only one guess at generating the correct SAS in the attack, which means the SAS may be quite short. A 16-bit SAS, for example, gives the attacker only one chance out of 65536 of not being detected. ZRTP provides a second layer of authentication against a MitM attack, based on a form of key continuity. It does this by caching some hashed key information for use in the next call, to be mixed in with the next call's DH shared secret, giving it key continuity properties analogous to SSH. If the MitM is not present in the first call, he is locked out of subsequent calls. Thus, even if the SAS is never used, most MitM attacks are stopped because the MitM was not present in the first call. ZRTP has been implemented in a number of clients. Commercial implementations of ZRTP are available in RokaCom from RokaCom,[13] in PrivateWave Professional from PrivateWave,[14] and more recently in Silent Phone from Silent Circle, a company founded by Zimmermann.[15] There is also Softphone from Acrobits.[16] Draytek supports ZRTP in some of their VoIP hardware and software.[17][18] A list of free SIP providers with ZRTP support has been published.[11]
https://en.wikipedia.org/wiki/ZRTP
Analter ego(Latinfor "other I") means an alternateself, which is believed to be distinct from a person's normal or true originalpersonality. Finding one's alter ego will require finding one's other self, one with a different personality. Additionally, the altered states of the ego may themselves be referred to asalterations. A distinct meaning ofalter egois found in theliterary analysisused when referring to fictional literature and other narrative forms, describing a keycharacterin a story who is perceived to be intentionally representative of the work's author (or creator), by oblique similarities, in terms ofpsychology, behavior speech, or thoughts, often used to convey the author's thoughts. The term is also sometimes, but less frequently, used to designate ahypothetical"twin" or "best friend" to a character in a story. Similarly, the termalter egomay be applied to the role or persona taken on by an actor[1]or by other types of performers. Cicerocoined the term as part of his philosophical construct in 1st-centuryRome, but he described it as "a second self, a trusted friend".[2][citation needed] The existence of "another self" was first fully recognized in the 18th century, whenAnton Mesmerand his followers usedhypnosisto separate the alter ego.[3]These experiments showed a behavior pattern that was distinct from the personality of the individual when he was in thewaking statecompared with when he was under hypnosis. Another character had developed in the altered state ofconsciousnessbut in the same body.[4] Sigmund Freud, throughout his career, would appeal to such instances of dual consciousness to support his thesis of the unconscious.[5]He considered that "We may most aptly describe them as cases of a splitting of the mental activities into two groups, and say that the same consciousness turns to one or the other of these groups alternately".[6]Freud considered the roots of the phenomenon of the alter ego to be in thenarcissistic stageof early childhood.[7]Heinz Kohutwould identify a specific need in that early phase for mirroring, by another which resulted later in what he called the "twinship or alter ego transference".[8]
https://en.wikipedia.org/wiki/Alter_ego
Anonymity[a]describes situations where the acting person's identity is unknown. Anonymity may be created unintentionally through the loss of identifying information due to the passage of time or a destructive event, or intentionally if a person chooses to withhold their identity. There are various situations in which a person might choose to remain anonymous. Acts ofcharityhave been performed anonymously when benefactors do not wish to be acknowledged. A person who feels threatened might attempt to mitigate that threat through anonymity. A witness to a crime might seek to avoid retribution, for example, by anonymously calling a crime tipline. In many other situations (like conversation between strangers, or buying some product or service in a shop), anonymity is traditionally accepted as natural. Some writers have argued that the term "namelessness", though technically correct, does not capture what is more centrally at stake in contexts of anonymity. The important idea here is that a person benon-identifiable, unreachable, or untrackable.[1]Anonymity is also seen as a way to realize certain other values, such asprivacyor liberty. An important example of anonymity being not only protected, but enforced, by law is in voting infree elections. Criminals might proceed anonymously to conceal their participation in a crime. In certain situations, however, it may be illegal to remain anonymous. For example,24 of the U.S. states have "stop and identify" statutesthat require persons detained to self-identify when requested by a law enforcement officer, when the person is reasonably suspected of committing a crime. Over the past few years, anonymity tools used on thedark webby criminals and malicious users have drastically altered the ability of law enforcement to use conventional surveillance techniques.[2][3] The term "anonymous message" typically refers to a message that does not reveal its sender. In many countries, anonymous letters are protected by law and must be delivered as regular letters. Inmathematics, in reference to an arbitrary element (e.g., a human, an object, acomputer), within a well-definedset(called the "anonymity set"), "anonymity" of that element refers to the property of that element of not being identifiable within this set. If it is not identifiable, then the element is said to be "anonymous". The word anonymous was borrowed into English around 1600 from the Late Latin word "anonymus", from Ancient Greek ᾰ̓νώνῠμος (anṓnumos, "without name"), from ᾰ̓ν- (an-, "un-") with ὄνῠμᾰ (ónuma), Aeolic and Doric dialectal form of ὄνομᾰ (ónoma, "name"). Sometimes a person may desire a long-term relationship (such as a reputation) with another party without necessarily disclosingpersonally identifying informationto that party. In this case, it may be useful for the person to establish a unique identifier, called apseudonym. Examples of pseudonyms arepen names,nicknames,credit card numbers, student numbers,bank accountnumbers, etc. A pseudonym enables the other party to link different messages from the same person and, thereby, to establish a long-term relationship. Pseudonyms are widely used insocial networksand other virtual communication, although recently some important service providers like Google try to discourage pseudonymity.[4][circular reference]Someone using a pseudonym would be strictly considered to be using "pseudonymity" not "anonymity", but sometimes the latter is used to refer to both (in general, a situation where the legal identity of the person is disguised). 
Anonymity may reduce the accountability one perceives to have for their actions, and removes the impact these actions might otherwise have on their reputation. This can have dramatic effects, both useful and harmful to various parties involved. Thus, it may be used for psychological tactics involving any respective party to purport or support or discredit any sort of activity or belief. In conversational settings, anonymity may allow people to reveal personal history and feelings without fear of later embarrassment. Electronic conversational media can provide physical isolation, in addition to anonymity. This prevents physical retaliation for remarks, and prevents negative ortaboobehavior or discussion from tarnishing the reputation of the speaker. This can be beneficial when discussing very private matters, or taboo subjects or expressing views or revealing facts that may put someone in physical, financial, or legal danger (such asillegalactivity, or unpopular, or outlawed political views). In work settings, the three most common forms of anonymous communication are traditional suggestion boxes, written feedback, andCaller IDblocking. Additionally, the appropriateness of anonymous organizational communication varies depending on the use, with organizational surveys or assessments typically perceived as highly appropriate and firing perceived as highly inappropriate. Anonymity use and appropriateness have also been found to be significantly related to the quality of relationships with key others at work.[5] With few perceived negative consequences, anonymous or semi-anonymous forums often provide a soapbox for disruptive conversational behavior. The term "troll" is sometimes used to refer to those who engage in such disruptive behavior. Relative anonymity is often enjoyed in large crowds. Different people have different psychological and philosophical reactions to this development, especially as a modern phenomenon. This anonymity is an important factor incrowd psychology, and behavior in situations such as ariot. This perceived anonymity can be compromised by technologies such asphotography.Groupthinkbehavior andconformityare also considered to be an established effect of internet anonymity.[6] Anonymity also permits highly trained professionals such asjudgesto freely express themselves regarding the strategies they employ to perform their jobs objectively.[7] Anonymous commercial transactions can protect the privacy of consumers. Some consumers prefer to use cash when buying everyday goods (like groceries or tools), to prevent sellers from aggregating information or soliciting them in the future. Credit cards are linked to a person's name, and can be used to discover other information, such as postal address, phone number, etc. Theecashsystem was developed to allow secure anonymous transactions. Another example would be Enymity, which actually makes a purchase on a customer's behalf. When purchasing taboo goods and services, anonymity makes many potential consumers more comfortable with or more willing to engage in the transaction. Manyloyalty programsuse cards that personally identify the consumer engaging in each transaction (possibly for later solicitation, or for redemption or security purposes), or that act as a numericalpseudonym, for use indata mining. Anonymity can also be used as a protection against legal prosecution. 
For example, when committing unlawful actions, many criminals attempt to avoid identification by the means of obscuring/covering their faces withscarvesormasks, and wearglovesor other hand coverings in order to not leave anyfingerprints. Inorganized crime, groups of criminals may collaborate on a certain project without revealing to each other their names or other personally identifiable information. The movieThe Thomas Crown Affairdepicted a fictional collaboration by people who had never previously met and did not know who had recruited them. The anonymous purchase of a gun or knife to be used in a crime helps prevent linking an abandoned weapon to the identity of the perpetrator. There are two aspects, one, giving to a large charitable organization obscures the beneficiary of a donation from the benefactor, the other is giving anonymously to obscure the benefactor both from the beneficiary and from everyone else. Anonymous charity has long been a widespread and durable moral precept of many ethical and religious systems, as well as being in practice a widespread human activity. A benefactor may not wish to establish any relationship with the beneficiary, particularly if the beneficiary is perceived as being unsavory.[8][citation needed]Benefactors may not wish to identify themselves as capable of giving. A benefactor may wish to improve the world, as long as no one knows who did it, out of modesty, wishing to avoid publicity.[9]Another reason for anonymous charity is a benefactor who does not want a charitable organization to pursue them for more donations, sometimes aggressively. Attempts at anonymity are not always met with support from society. Anonymity sometimes clashes with the policies and procedures of governments or private organizations. In the United States, disclosure of identity is required to be able tovote, though thesecret ballotprevents disclosure of individual voting patterns. Inairportsin most countries, passengers are not allowed to board flights unless they have identified themselves to airline or transportation security personnel, typically in the form of the presentation of anidentification card. On the other hand, some policies and procedures require anonymity. Stylometricidentification of anonymous authors by writing style is a potential risk, which is expected to grow as analytic techniques improve and computing power andtext corporagrow. Authors may resist such identification by practicingadversarial stylometry.[10] When it is necessary to refer to someone who is anonymous, it is typically necessary to create a type of pseudo-identification for that person. In literature, the most common way to state that the identity of an author is unknown is to refer to them as simply "Anonymous". This is usually the case with older texts in which the author is long dead and unable to claim authorship of a work. When the work claims to be that of some famous author thepseudonymousauthor is identified as "Pseudo-", as inPseudo-Dionysius the Areopagite, an author claiming—and long believed—to beDionysius the Areopagite, an early Christian convert. Anonymus, in itsLatinspelling, generally with a specific city designation, is traditionally used by scholars in the humanities to refer to an ancient writer whose name is not known, or to a manuscript of their work. Many such writers have left valuable historical or literary records: an incomplete list of suchAnonymiis atAnonymus. 
In the history of art, many painting workshops can be identified by their characteristic style, discussed, and the workshop's output set in chronological order. Sometimes archival research later identifies the name, as when the "Master of Flémalle" (defined by three paintings in the Städelsches Kunstinstitut in Frankfurt) was identified as Robert Campin. The 20th-century art historian Bernard Berenson methodically identified numerous early Renaissance Florentine and Sienese workshops under such sobriquets as "Amico di Sandro" for an anonymous painter in the immediate circle of Sandro Botticelli. In legal cases, a popularly accepted name to use when it is determined that an individual needs to maintain anonymity is "John Doe". This name is often modified to "Jane Doe" when the anonymity-seeker is female. The same names are also commonly used when the identity of a dead person is not known. The semi-acronym Unsub is used as law enforcement slang for "Unknown Subject of an Investigation". The military often feels a need to honor the remains of soldiers for whom identification is impossible. In many countries, such a memorial is named the Tomb of the Unknown Soldier. Most modern newspapers and magazines attribute their articles to individual editors or to news agencies. An exception is the British weekly The Economist. All British newspapers run their leaders, or editorials, anonymously. The Economist fully adopts this policy, saying "Many hands write The Economist, but it speaks with a collective voice".[11] The Guardian considers that "people will often speak more honestly if they are allowed to speak anonymously".[12][13] According to Ross Eaman, in his book The A to Z of Journalism, until the mid-19th century most writers in Great Britain, especially the less well known, did not sign their names to their work in newspapers, magazines and reviews.[14] Most commentary on the Internet is essentially done anonymously, using unidentifiable pseudonyms. However, this has been widely discredited in a study by the University of Birmingham, which found that the number of people who use the internet anonymously is statistically the same as the number of people who use the internet to interact with friends or known contacts. While these usernames can take on an identity of their own, they are sometimes separate from, and anonymous with respect to, the actual author. According to the University of Stockholm, this is creating more freedom of expression and less accountability.[15] Wikipedia is collaboratively written mostly by authors using either unidentifiable pseudonyms or IP address identifiers, although many Wikipedia editors use their real names instead of pseudonyms. However, the Internet was not designed for anonymity: IP addresses serve as virtual mailing addresses, which means that any time any resource on the Internet is accessed, it is accessed from a particular IP address, and the data traffic patterns to and from IP addresses can be intercepted, monitored, and analysed, even if the content of that traffic is encrypted. This address can be mapped to a particular Internet Service Provider (ISP), and this ISP can then provide information about what customer that IP address was leased to.
This does not necessarily implicate a specific individual (because other people could be using that customer's connection, especially if the customer is a public resource, such as a library), but it provides regional information and serves as powerful circumstantial evidence.[citation needed] Anonymizing services such as I2P and Tor address the issue of IP tracking. In short, they work by wrapping packets in multiple layers of encryption. The packet follows a predetermined route through the anonymizing network. Each router sees the immediately previous router as the origin and the immediately next router as the destination. Thus, no router ever knows both the true origin and destination of the packet. This makes these services more secure than centralized anonymizing services (where a central point of knowledge exists).[16] Sites such as Chatroulette, Omegle, and Tinder (which pair up random users for a conversation) capitalized on a fascination with anonymity. Apps like Yik Yak, Secret and Whisper let people share things anonymously or quasi-anonymously, whereas Random lets the user explore the web anonymously. Some email providers, like Tuta, also offer the ability to create anonymous email accounts which do not require any personal information from the account holder.[17] Other sites, however, including Facebook and Google+, ask users to sign in with their legal names. In the case of Google+, this requirement led to a controversy known as the nymwars.[18] The prevalence of cyberbullying is often attributed to relative Internet anonymity, due to the fact that potential offenders are able to mask their identities and prevent themselves from being caught. A high school principal stated that comments made on these anonymous sites are "especially vicious and hurtful since there is no way to trace their source and it can be disseminated widely."[19] Cyberbullying, as opposed to general bullying, is still a widely debated area of Internet freedom in several states.[20] Though Internet anonymity can provide a harmful environment through which people can hurt others, anonymity can allow for a much safer and more relaxed internet experience. In a study conducted at Carnegie Mellon University, 15 out of 44 participants stated that they chose to be anonymous online because of a prior negative experience during which they did not maintain an anonymous presence.[21] Such experiences include stalking, the release of private information by an opposing school political group, or being tricked into traveling to another country for a job that did not exist. Participants in this study stated that they were able to avoid their previous problems by using false identification online.[citation needed] David Chaum has been called the godfather of anonymity, and he has a claim to be one of the great visionaries of contemporary science. In the early 1980s, while a computer scientist at Berkeley, Chaum predicted a world in which computer networks would make mass surveillance a possibility. As Dr. Joss Wright explains: "David Chaum was very ahead of his time. He predicted in the early 1980s concerns that would arise on the internet 15 or 20 years later."[22] Some people, though, consider anonymity on the Internet a danger to society as a whole.
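Before turning to those concerns, the layered-encryption scheme described above for Tor and I2P can be illustrated with a toy example: the sender wraps the payload once per relay, and each relay can strip exactly one layer, so no relay sees both endpoints. The sketch below uses symmetric Fernet keys purely for illustration; real onion routing negotiates per-hop keys and adds routing headers.

    # Toy sketch of onion-style layered encryption as used (in far more
    # elaborate form) by Tor and I2P: the sender wraps the payload once per
    # relay, and each relay can remove exactly one layer. Illustrative only.
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]   # one key per relay

    def build_onion(payload: bytes, keys) -> bytes:
        for key in reversed(keys):            # innermost layer first
            payload = Fernet(key).encrypt(payload)
        return payload

    onion = build_onion(b"GET /page HTTP/1.1", relay_keys)
    for key in relay_keys:                    # each relay peels one layer
        onion = Fernet(key).decrypt(onion)
    print(onion)                              # b'GET /page HTTP/1.1'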
David Davenport, an assistant professor in the Computer Engineering Department of Bilkent University in Ankara, Turkey, considers that allowing anonymous Net communication puts the fabric of our society at risk.[23] "Accountability requires those responsible for any misconduct be identified and brought to justice. However, if people remain anonymous, by definition, they cannot be identified, making it impossible to hold them accountable," he says.[24] As A. Michael Froomkin says: "The regulation of anonymous and pseudonymous communications promises to be one of the most important and contentious Internet-related issues of the next decade".[25][26] Anonymity and pseudonymity can be used for good and bad purposes, and anonymity can in many cases be desirable for one person and undesirable for another. A company may, for example, not like an employee to divulge information about improper practices within the company, but society as a whole may find it important that such improper practices are publicly exposed. Anonymity and pseudonymity can thus serve good purposes;[citation needed] there has always, however, also been a negative side to anonymity. The border between illegal and legal but offensive use is not very sharp, and varies depending on the law in each country.[32] Anonymous (used as a mass noun) is a loosely associated international network of activist and hacktivist entities. A website nominally associated with the group describes it as "an internet gathering" with "a very loose and decentralized command structure that operates on ideas rather than directives".[33] The group became known for a series of well-publicized publicity stunts and distributed denial-of-service (DDoS) attacks on government, religious, and corporate websites. An image commonly associated with Anonymous is the "man without a head", which represents leaderless organization and anonymity.[34] Anonymity is perceived as a right by many, especially anonymity in internet communications. A partial right to anonymity is legally protected to varying degrees in different jurisdictions. The tradition of anonymous speech is older than the United States. Founders Alexander Hamilton, James Madison, and John Jay wrote The Federalist Papers under the pseudonym "Publius", and "the Federal Farmer" spoke up in rebuttal. The US Supreme Court has repeatedly[35][36][37] recognized rights to speak anonymously derived from the First Amendment. The pressure on anonymous communication has grown substantially since the 2001 terrorist attack on the World Trade Center and the subsequent new political climate. Although it is still difficult to assess their exact implications, measures such as the US Patriot Act, the European Cybercrime Convention and the European Union rules on data retention are only a few of the signs that the exercise of the right to the anonymous exchange of information is under substantial pressure.[41] The 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission reads:[42] "(...) protections for anonymous speech are vital to democratic discourse. Allowing dissenters to shield their identities frees them to express critical minority views . . . Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society." However, anonymous online speech is not without limits.
This was demonstrated in a 2008 case in which the defendant had stated on a law-school discussion board that two women should be raped; such anonymous comments may extend beyond free speech protections.[43] In that case, a Connecticut federal court had to apply a standard to decide whether the poster's identity should be revealed. There are several tests, however, that a court could apply when considering this issue.[44][45] The right to internet anonymity is also covered by European legislation that recognizes the fundamental rights to data protection and freedom of expression. The European Union Charter of Fundamental Rights recognizes, in Article 8 (Title II: "Freedoms"),[46] the right of everyone to protection of personal data concerning him.[47] The right to privacy is now essentially the individual's right to have and to maintain control over information about himself. One of the most controversial international legal acts regarding this subject is the Anti-Counterfeiting Trade Agreement (ACTA). As of February 2015, the treaty had been signed, but not in all cases ratified, by 31 states as well as the European Union. On 4 October 2012, Japan became the first to ratify the treaty. It creates an international regime for imposing civil and criminal penalties on Internet counterfeiting and copyright infringement. Although ACTA is intentionally vague, leaving signatories to draw precise rules themselves, critics say it could mean innocent travellers having their laptops searched for unlicensed music, or being jailed for carrying a generic drug. Infringers could be liable for the total loss of potential sales (implying that everyone who buys a counterfeit product would have bought the real thing). It applies to unintentional use of copyright material. It puts the onus on website owners to ensure they comply with laws across several territories. It was negotiated secretively and outside established international trade bodies, despite EU criticisms.[48] The history of anonymous expression in political dissent is both long and consequential, as in the Letters of Junius or Voltaire's Candide, or scurrilous, as in pasquinades. In the tradition of anonymous British political criticism, The Federalist Papers were anonymously authored by three of America's Founding Fathers. Without the public discourse on the controversial contents of the U.S. Constitution, ratification would likely have taken much longer as individuals worked through the issues. The United States Declaration of Independence, however, was not anonymous. If it had been unsigned, it might well have been less effective. John Perry Barlow, Joichi Ito, and other U.S. bloggers express very strong support for anonymous editing as one of the basic requirements of open politics as conducted on the Internet.[49] Anonymity is directly related to the concept of obscurantism or pseudonymity, where an artist or a group attempts to remain anonymous for various reasons: adding an element of mystique to themselves or their work, attempting to avoid what is known as the "cult of personality" or hero worship (in which the charisma, good looks, wealth or other unrelated or mildly related aspects of the person are the main reason for interest in their work, rather than the work itself), or breaking into a field or area of interest normally dominated by males (as with the famous science fiction author James Tiptree, Jr., who was actually a woman named Alice Bradley Sheldon, and likely JT LeRoy).
Some seem to want to avoid the "limelight" of popularity and to live private lives, such as Thomas Pynchon, J. D. Salinger, De Onbekende Beeldhouwer (an anonymous sculptor whose exhibited work in Amsterdam attracted strong attention in the 1980s and 1990s[50]), and DJ duo Daft Punk (1993-2021). For street artist Banksy, "anonymity is vital to him because graffiti is illegal".[51] Anonymity has been used in music by avant-garde ensemble The Residents, Jandek (until 2004), costumed comedy rock band The Radioactive Chicken Heads, and DJs Deadmau5 (1998–present) and Marshmello (2015–present). This is frequently applied in fiction, from The Lone Ranger, Superman, and Batman, where a hidden identity is assumed. Suppose that only Alice, Bob, and Carol have keys to a bank safe and that, one day, the contents of the safe go missing without the lock being forced. Without additional information, we cannot know for sure whether it was Alice, Bob or Carol who emptied the safe. Notably, the perpetrator is certainly one element of the set {Alice, Bob, Carol}, yet any of the three could be that element. As long as none of them can be identified with certainty, the perpetrator remains anonymous within this three-member anonymity set: the probability of 1 cannot be attributed to any single member. If Carol has a definite alibi at the time of perpetration, then we may deduce that it must have been either Alice or Bob who emptied the safe. In this particular case, the perpetrator is no longer completely anonymous: the anonymity set has shrunk to two, and both Alice and Bob now know "who did it" with a probability of 1, since each of them knows whether or not they themselves did it.
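The shrinking anonymity set in the safe example can be written out in a few lines; assigning a uniform prior over the remaining suspects is a simplifying assumption made only for illustration.

    # The safe example as a shrinking anonymity set. Assigning a uniform
    # prior over the set is a simplifying assumption for illustration.
    def uniform_suspicion(anonymity_set):
        p = 1 / len(anonymity_set)
        return {person: p for person in anonymity_set}

    suspects = {"Alice", "Bob", "Carol"}
    print(uniform_suspicion(suspects))             # each 1/3: perpetrator anonymous

    suspects -= {"Carol"}                          # Carol's alibi removes her
    print(uniform_suspicion(suspects))             # each 1/2: weaker anonymity
    # With a single remaining suspect the set has size 1 and anonymity is gone.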
https://en.wikipedia.org/wiki/Anonymity
An anonymous post is an entry on a textboard, anonymous bulletin board system, or other discussion forum such as an Internet forum, made without a screen name or, more commonly, using a non-identifiable pseudonym. Some online forums such as Slashdot do not allow such posts, requiring users to be registered either under their real name or under a pseudonym. Others, like JuicyCampus, AutoAdmit, 2channel, and other Futaba-based imageboards (such as 4chan), thrive on anonymity. Users of 4chan, in particular, interact in an anonymous and ephemeral environment that facilitates the rapid generation of new trends. Online anonymity can be traced to Usenet newsgroups in the late 1990s, where the notion of using invalid emails for posting to newsgroups was introduced. This was primarily used for discussion on newsgroups pertaining to certain sensitive topics. There was also the introduction of anonymous remailers, which were capable of stripping away the sender's address from mail packets before sending them to the receiver. Online services which facilitated anonymous posting sprang up around mid-1992, originating with the cypherpunk group.[1] The precursors to Internet forums like 2channel and 4chan were textboards like Ayashii World and Amezou World, which provided the ability to post anonymously in Japan. These "large-scale anonymous textboards" were inspired by the Usenet culture and were primarily focused on technology, unlike their descendants.[2] Today, image boards receive tremendous Internet traffic from all parts of the world. In 2011, on 4chan's most popular board, /b/, there were roughly 35,000 threads and 400,000 posts created per day. At that time, that level of content was on par with YouTube. Such high traffic suggests a broad demand from Internet users for anonymous content sharing sites.[3] Anonymity on the Internet can mean either the use of pseudonyms or requiring no authentication at all (also called "perfect anonymity") for posting on a website.[4] Online anonymity is also limited by IP addresses. For example, WikiScanner associates anonymous Wikipedia edits with the IP address that made the change and tries to identify the entity that owns the IP address. On other websites, IP addresses may not be publicly available, but they can be obtained from the website administrators only through legal intervention. They might not always be traceable to the poster.[5] Utilizing pseudonyms allows people to post without revealing their real identity. Pseudonyms, however, are still prone to being tracked to the user's IP address.[6] To avoid being tracked to an IP address, it is possible to post via a public computer, where the IP address would usually be under the purview of the public workspace, such as a coffee shop, and hence cannot be traced to the individual user.[6] Adversarial stylometry can be employed to resist identification by writing style. Another way people post anonymously online is through the use of memes. One popular meme is the Confession Bear meme. People use Confession Bear to post everything from funny and embarrassing stories to very troubled thoughts.[7] There are services described as anonymizers which aim to provide users the ability to post anonymously by hiding their identifying information. Anonymizers are essentially proxy servers which act as an intermediary between the user who wants to post anonymously and the website which logs user information such as IP addresses.
The proxy server is the only computer in this network which is aware of the user's information and provides its own information to anonymize the poster.[8]Examples of such anonymizers includeTorandI2P, which employ techniques such asonionandgarlic routing(respectively) to provide enhancedencryptionto messages that travel through multiple proxy servers.[6] Applications likePGPutilizing techniques likeprivate-keyandpublic-keyencryptions are also utilized by users to post content in Usenet groups and other online forums.[9] The revised draft of theChinesegovernment's "Internet Information Services"[10]proposes that "Internet information service providers, includingmicroblogs, forums, and blogs, that allow users to post information on the Internet should ensure users are registered with their real identities".[11]Starting October 1, 2017, it will require Internet users to identify themselves with their real names to use comments sections on news and social media websites.[12] ThePhilippinegovernment passed theCybercrime Prevention Acton 12 September 2012, which among other things grants theDepartment of Justicethe ability to "block access to 'computer data' that is in violation of the Act; in other words, a website hosting criminallylibelousspeech could be shut down without a court order".[13] Under theDefamation Act 2013, in an action against a website operator, on a statement posted on the website, it is a defense to show that it was not the operator who posted the statement on the website. The defense is defeated if it was not possible for the claimant to identify the person who posted the statement. In the United States, the right to speak anonymously online is protected by theFirst Amendmentand variousother laws. These laws restrict the ability of the government and civil litigants to obtain the identity of anonymous speakers. The First Amendment says that "Congress shall make no law ... abridging the freedom of speech, or of thepress".[14]This protection has been interpreted by theU.S. Supreme Courtto protect the right to speak anonymously offline. For example, inMcIntyre v. Ohio Elections Commission, the Supreme Court overturned an Ohio law banning the distribution of anonymous election pamphlets, claiming that an "author's decision to remain anonymous ... is an aspect of the freedom of speech protected by the First Amendment" and that "anonymouspamphleteeringis not a pernicious, fraudulent practice, but an honorable tradition ofadvocacyand ofdissent", as well as a "shield" against the so-calledtyranny of the majority.[15]Various courts have interpreted these offline protections to extend to the online world.[16] Identifying the author of an anonymous post may require aDoe subpoena. This involves gaining access to the IP address of the poster via the hosting website. The courts can then order anISPto identify the subscriber to whom it had assigned said IP address. Requests for such data are almost always fruitful, though providers will often effect a finite term ofdata retention(in accordance with theprivacy policyof each—local law may specify a minimum and/or maximum term). The usage of IP addresses has, in recent times, been challenged as a legitimate way to identify anonymous users.[17][18] On March 21, 2012, theNew York State Senateintroduced the bill numbered S.6779 (and A.8668) labeled as the "Internet Protection Act". 
It proposes allowing the website administrator of a New York–based website to take down anonymous comments unless the original author of the comment agrees to identify themselves on the post.[19] Online communities vary in their stances on anonymous postings. Wikipedia allows anonymous editing in most cases, but does not label users, instead identifying them by their IP addresses. Other editors commonly refer to these users with neutral terms such as "anons" or "IPs".[20] Many online bulletin boards require users to be signed in to write—and, in some cases, even to read—posts. 2channel and other Futaba-based image boards take the opposite stance, encouraging anonymity; in the case of English-language Futaba-based websites, those who use usernames and tripcodes are called "namefags" and "tripfags", respectively.[21] As required by law, even communities such as 4chan do log the IP addresses of such anonymous posters.[citation needed] Such data, however, can only be accessed by the particular site administrator. Slashdot discourages anonymous posting by displaying "Anonymous Coward" as the author of each anonymous post. The mildly derogatory term is meant to chide anonymous contributors into logging in.[22][23] The effects of posting online anonymously have been linked to the online disinhibition effect in users, with the disinhibition categorized as either benign or toxic.[24] Disinhibition can result in misbehavior but can also improve user relationships. It may also result in greater disclosure among Internet users, allowing more emotional closeness and openness in a safe social context.[25] Anonymous computer communication has also been linked to accentuated self-stereotyping.[26] It has been linked to notable effects on gender differences, but only when the topic bears similarity to and fits with the gender stereotype.[26] A 2015 study suggested that anonymous news comment sections are more susceptible to uncivil comments, especially those directed at other users. Anonymous news comment section users are also more likely to be impolite, by being sarcastic or casting aspersions.[27] With regard to a recent hostile subpoena in California, commentators have asked whether there will be a "Layfield & Barrett effect" chilling free speech in job-review postings.[28][29] On May 2, 2016, through its lawyers, Layfield and Barrett and partner Phil Layfield served a subpoena on Glassdoor seeking the online identities of former employees who had posted extremely critical and negative reviews. Glassdoor executives have stated that they will fight the subpoena, as they have fought off other efforts to disclose anonymous identities in the recent past.[30] Other litigants in California have won the right to anonymously post negative job reviews, but the law remains hotly contested.[31][32] The conditions for deindividuation, such as "anonymity, reduced self-awareness, and reduced self-regulation", foster the creation of online communities much as they might be employed offline.[33] This is evident in the proliferation of communities such as Reddit or 4chan, which rely on total anonymity or pseudonymity, or of tools such as Informers (which add anonymity to non-anonymous social media like Facebook or Twitter), to give users the ability to post varied content.
The effect of disinhibition has been seen to be beneficial in "advice and discussion threads by providing a cover for more intimate and open conversations".[3] The "ephemerality", or short-lived nature, of posts on some anonymous image boards such as 4chan creates a fast-paced environment. As of 2009, threads on 4chan had a median lifespan of 3.9 minutes.[3] There is also research suggesting that content posted in such communities tends to be more deviant in nature than it would be otherwise.[34] The ability to post anonymously has also been linked to the proliferation of pornography in newsgroups and other online forums, where users employ sophisticated mechanisms such as those mentioned above.[9]
https://en.wikipedia.org/wiki/Anonymous_post
Ananonymous remaileris aserverthat receives messages with embedded instructions on where to send them next, and that forwards them without revealing where they originally came from. There arecypherpunk anonymous remailers,mixmaster anonymous remailers, andnym servers, among others, which differ in how they work, in the policies they adopt, and in the type of attack on the anonymity of e-mail they can (or are intended to) resist.Remailingas discussed in this article applies to e-mails intended for particular recipients, not the general public. Anonymity in the latter case is more easily addressed by using any of several methods of anonymous publication. There are several strategies that affect the anonymity of the handled e-mail. In general, different classes of anonymous remailers differ with regard to the choices their designers/operators have made. These choices can be influenced by the legal ramifications of operating specific types of remailers.[1] It must be understood that everydata packettraveling on theInternetcontains the node addresses (as rawIPbit strings) of both the sending and intended recipient nodes, and so no data packet caneveractually be anonymous at this level[citation needed]. In addition, all standards-based e-mail messages contain defined fields in their headers in which the source and transmitting entities (and Internet nodes as well) are required to be included. Some remailers change both types of address in messages they forward, and the list of forwarding nodes in e-mail messages as well, as the message passes through; in effect, they substitute 'fake source addresses' for the originals. The 'IP source address' for that packet may become that of the remailer server itself, and within an e-mail message (which is usually several packets), a nominal 'user' on that server. Some remailers forward their anonymized e-mail to still other remailers, and only after several such hops is the e-mail actually delivered to the intended address. There are, more or less, four types of remailers: Apseudonymous remailersimply takes away the e-mail address of the sender, gives a pseudonym to the sender, and sends the message to the intended recipient (that can be answered via that remailer).[2] ACypherpunk remailersends the message to the recipient, stripping away the sender address on it. One can not answer a message sent via a Cypherpunk remailer. The message sent to the remailer can usually be encrypted, and the remailer will decrypt it and send it to the recipient address hidden inside the encrypted message. In addition, it is possible to chain two or three remailers, so that each remailer can't know who is sending a message to whom. Cypherpunk remailers do not keep logs of transactions. InMixmaster, the user composes an email to a remailer, which is relayed through each node in the network usingSMTP, until it finally arrives at the final recipient. Mixmaster can only send emails one way. An email is sent anonymously to an individual, but for them to be able to respond, a reply address must be included in the body of the email. Also, Mixmaster remailers require the use of a computer program to write messages. Such programs are not supplied as a standard part of most operating systems or mail management systems. AMixminionremailer attempts to address the following challenges in Mixmaster remailers: replies, forward anonymity, replay prevention and key rotation, exit policies, integrated directory servers and dummy traffic. 
They are currently available for the Linux and Windows platforms. Some implementations are open source. Some remailers establish an internal list of actual senders and invented names such that a recipient can send mail toinvented nameATsome-remailer.example. When receiving traffic addressed to this user, the server software consults that list, and forwards the mail to the original sender, thus permitting anonymous—though traceable with access to the list—two-way communication. The famous "penet.fi" remailer in Finland did just that for several years.[3]Because of the existence of such lists in this type of remailing server, it is possible to break the anonymity by gaining access to the list(s), by breaking into the computer, asking a court (or merely the police in some places) to order that the anonymity be broken, and/or bribing an attendant. This happened to penet.fi as a result of some traffic passed through it aboutScientology.[citation needed]The Church claimed copyright infringement and sued penet.fi's operator. A court ordered the list be made available. Penet's operator shut it down after destroying its records (including the list) to retainidentityconfidentialityfor its users; though not before being forced to supply the court with the real e-mail addresses of two of its users.[citation needed] More recent remailer designs usecryptographyin an attempt to provide more or less the same service, but without so much risk of loss of user confidentiality. These are generally termednym serversorpseudonymous remailers. The degree to which they remain vulnerable to forced disclosure (by courts or police) is and will remain unclear since new statutes/regulations and newcryptanalyticdevelopments proceed apace. Multiple anonymous forwarding among cooperating remailers in different jurisdictions may retain, but cannot guarantee, anonymity against a determined attempt by one or more governments, or civil litigators. If users accept the loss of two-way interaction, identity anonymity can be made more secure. By not keeping any list of users and corresponding anonymizing labels for them, a remailer can ensure that any message that has been forwarded leaves no internal information behind that can later be used to break identity confidentiality. However, while being handled, messages remain vulnerable within the server (e.g., toTrojansoftware in a compromised server, to a compromised server operator, or to mis-administration of the server), andtraffic analysiscomparison of traffic into and out of such a server can suggest quite a lot—far more than almost any would credit. TheMixmasterstrategy is designed to defeat such attacks, or at least to increase their cost (i.e., to 'attackers') beyond feasibility. If every message is passed through several servers (ideally in different legal and political jurisdictions), then attacks based on legal systems become considerably more difficult, if only because of 'Clausewitzian' friction among lawyers, courts, different statutes, organizational rivalries, legal systems, etc. And, since many different servers and server operators are involved, subversion of any (i.e., of either system or operator) becomes less effective also since no one (most likely) will be able to subvert the entire chain of remailers. 
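The layering idea behind such chained remailers can be sketched in a few lines of code. The following is only an illustrative toy, not the actual Cypherpunk or Mixmaster message format: symmetric Fernet keys from the Python cryptography package stand in for each remailer's key, and a small JSON header stands in for the routing information. The point it demonstrates is that the sender wraps the message once per hop and each remailer strips exactly one layer, so no single hop learns more than the previous hop and the next stop.

```python
# Illustrative toy only -- NOT the real Cypherpunk/Mixmaster wire format.
# Fernet symmetric keys stand in for each hop's key; JSON stands in for headers.
import json
from cryptography.fernet import Fernet   # third-party: pip install cryptography

hops = ["remailer-a", "remailer-b", "remailer-c"]         # hypothetical chain
keys = {h: Fernet(Fernet.generate_key()) for h in hops}   # one key per hop

def build_onion(body, recipient):
    """Sender wraps the message once per hop, innermost layer first."""
    packet = keys[hops[-1]].encrypt(
        json.dumps({"next": recipient, "payload": body}).encode())
    for nxt, hop in zip(reversed(hops[1:]), reversed(hops[:-1])):
        packet = keys[hop].encrypt(
            json.dumps({"next": nxt, "payload": packet.decode()}).encode())
    return packet

def peel(hop, token):
    """One remailer's job: strip a single layer and learn only the next stop."""
    if isinstance(token, str):
        token = token.encode()
    header = json.loads(keys[hop].decrypt(token))
    return header["next"], header["payload"]

# Each hop sees only the previous hop and the next stop, never the whole path.
stop, packet = hops[0], build_onion("hello", "final@example.org")
while stop in keys:
    stop, packet = peel(stop, packet)
print(stop, packet)   # final@example.org hello
```

Real remailer networks add per-hop public-key cryptography, plus the padding and random delays discussed below, but the strip-one-layer structure is the same.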
Random padding of messages, random delays before forwarding, and encryption of forwarding information between forwarding remailers increase the degree of difficulty for attackers still further, as message size and timing can be largely eliminated as traffic-analysis clues, and the lack of easily readable forwarding information renders simple automated traffic-analysis algorithms ineffective. There are also web services that allow users to send anonymous email messages. These services do not provide the anonymity of real remailers, but they are easier to use. When using a web-based anonymous email or anonymous remailer service, its reputation should first be analyzed, since the service stands between senders and recipients. Some of the aforementioned web services log the users' IP addresses to ensure they do not break the law; others offer superior anonymity with attachment functionality by choosing to trust that users will not breach the website's terms of service (ToS).[4] In most cases, remailers are owned and operated by individuals and are not as stable as they might ideally be. In fact, remailers can, and have, gone down without warning. It is important to use up-to-date statistics when choosing remailers. Although most remailer systems are used responsibly, the anonymity they provide can be exploited by entities or individuals whose reasons for anonymity are not necessarily benign.[5] Such reasons could include support for violent extremist actions,[citation needed] sexual exploitation of children,[citation needed] or, more commonly, frustrating accountability for "trolling" and harassment of targeted individuals or companies (the Dizum.com remailer chain being abused as recently as May 2013[citation needed] for this purpose). The response of some remailers to this abuse potential is often to disclaim responsibility (as dizum.com does[6]), since, owing to the technical design (and ethical principles) of many systems, it is impossible for the operators to physically unmask those using their systems. Some remailer systems go further and claim that it would be illegal for them to monitor for certain types of abuse at all.[6] Until technical changes were made in the remailers concerned in the mid-2000s, some remailers (notably nym.alias.net-based systems) were seemingly willing to use any genuine (and thus valid) but otherwise forged address. This loophole allowed trolls to mis-attribute controversial claims or statements with the aim of causing offence, upset or harassment to the genuine holder(s) of the forged address(es). While remailers may disclaim responsibility, the comments posted via them have led to them being blocked in some countries. In 2014, dizum.com (a Netherlands-based remailer) was seemingly blocked by authorities in Pakistan[citation needed] because of comments an (anonymous) user of that service had made concerning key figures in Islam.
https://en.wikipedia.org/wiki/Anonymous_remailer
Bugō (武号, Japanese: [bɯgoː]) are nicknames used in the Japanese martial arts. The word is composed of the symbols 武 (bu, meaning "martial") and 号 (gō, meaning "name"). In English, the term is sometimes translated as "martial name" or "warrior name",[1][2] with similar equivalents in other languages.[3] As James George Frazer demonstrated in The Golden Bough, using someone's real name is a taboo common to many countries throughout history, and to circumvent this taboo, pseudonyms are often used.[4] For example, in Japan, the word for true name (諱, imina) is derived from 忌み + 名 (also imina), meaning "name to be avoided due to death or other taboos": after death, people are given posthumous names (諡, okurina) to avoid "calling" them via their true name.[5] In China's Southern Song period, Neo-Confucianism combined concepts of reclusion, self-denial and self-effacing humility from Confucianism, Taoism and Buddhism, and these thoughts found fertile ground in Japan.[6] The practice of 実名敬避俗 (jitsumei keihizoku), the avoidance of real names, became fashionable and even de rigueur amongst the educated classes – literati (ja:文人) poets, artists and monks, as well as courtiers.[7] In modern Japan, it is common practice to call people by their titles instead of their names (even within the family),[8][9] and online, Japanese people tend to use handles rather than personal names (see also Japanese names).[10] During the Edo period, Japanese people, including commoners, used multiple names.[11] Samurai names changed throughout one's lifetime, depending on stage of life (e.g. coming of age), through titles associated with official positions and allegiance, and finally with Buddhist necronyms after death (q.v. Kaimyō).[12] However, these are not normally referred to as Bugō unless used within a martial arts training setting (dōjō or ryūha). For example, Miyamoto Musashi's various names included 藤原 Fujiwara (lineage), 宮本 Miyamoto (village origin), 新免 Shinmen (name of his father's lord), 辨助 Bennosuke (childhood name), 武蔵 Musashi (title; also possibly read "Takezō" as a personal name), 玄信 ("imina", read as Harunobu, Motonobu and/or Genshin), 二天 Niten (mainly in his suiboku paintings), 二天道楽 Niten Dōraku, etc. People still debate which of these names were really used, in what ways, and how they were read.[13] As with patronymic personal names and Yagō, it is common for students to include a character from the teacher's Bugō as a mark of respect and to ensure continuity of the lineage.[14] In many cases the name would not be chosen by the practitioner or student, but chosen for them by the teacher – see many examples below. Similar customs can be found outside Asia: for example Richard "the Lionheart", Don Quixote, Carlos the Jackal, or the ring names used by modern sports martial artists. In addition, warrior names are found amongst the indigenous Kwakwakaʼwakw[15] and forest dwellers of French Guiana.[16] The Bugei Ryūha Daijiten directory of historical martial arts schools lists Bugō for many within the various lineages.[citation needed] The grandmasters of Shin-no-shin Ishikawa-ryū always included the character 源 in their Bugō to indicate their founder's descent from the Minamoto clan. Ittō-ryū's founder Itō Kagehisa used the name "Ittō-sai" (一刀斎). Tenshin Shōden Katori Shintō-ryū founder Iizasa Ienao used the name "Chōi-sai" (長威斎). Yagyū Munetoshi of the Shinkage-ryū used the name "Sekishū-sai" (石舟斎).
The character斎(-sai), meaning "study room", seen at the end of the three examples above is common to many martial artists of theEdo period, principally because of theJapanese four-character idiom"bunbu-ryōdō"("the pen and the sword in accord"), i.e. the link between martial arts and visual arts. Such 斎号 ("-sai names") are even now commonly used as posthumous BuddhistDharma namesfor artists or doctors.[17]Whether a given individual intended them to be used aspen namesor Bugō is not always clear. Daitō-ryū Aiki-jūjutsu's founderTakeda Sōkakuused the Bugō "Minamoto Masayoshi" (源正義).[18] His student Yamamoto Tomekichi, founder of Mugen Shintō-ryū, was granted one character from Sōkaku's birth name 惣角, and one from his Bugō 源正義, combining them to make Kakuyoshi (角義). He also had a "-sai name", Ittō-sai (一刀斎) - coincidentally the same as that of Itō Kagehisa as seen above. Furuoka Masaru, founder of Musō-ryū Iaigiri-dō, used the Bugō "Nitō-sai" (二刀斎) - another "-sai name", this time preceded with "two swords" instead of the Ittō-sai "one sword" meaning. BujinkangrandmasterMasaaki Hatsumihas used different Bugō at different stages in his life (e.g. Byakuryū, Toratsugu, Tetsuzan, Hisamune),[19]as did his teacher,Toshitsugu Takamatsu(e.g. Kikaku, Chōsui, Mōko no Tora).[20][21]Those training in this art are frequently awarded Bugō when they reach 5thdan(instructor) level. Many of the names include either the character 龍 (ryū, dragon) or 虎 (ko, tiger), both derived from past names of Hatsumi and Takamatsu (e.g. Unryū 雲龍 = Cloud Dragon,[22]Kiryū 輝龍 = Shining Dragon,[23]Hiryū 飛龍 = Flying Dragon,[24]Nanko = Southern Tiger).[25]The combination of the two, 龍虎 (Ryūko) was awarded to Major Joe Vaughan.[26]Most variants include animals (e.g. Shirokuma = Polar Bear,[27]Taka Seigi = Hawk Justice,[28]Isamu Koma 勇駒 = brave horse,[29]Byakko 白狐 = White Fox,[30]Ōzaru = Great Ape).[31] Former students of Hatsumi similarly use martial names, e.g. Fumio "Unsui" Manaka,[32]Tsunehisa 'Shōtō' Tanemura.[33]Satō Kinbei, a rather controversial figure who claimed also to have studied under Takamatsu, used the Bugō (and "-sai name") "Jūshinsai" (柔心斎) and passed this to his daughter Chizuko, who became the "2nd generation Jūshinsai".[34]Kimura Masaji, another claiming to have studied under Takamatsu, used the Bugō "Masakatsu" (正勝).[35][36]Students ofStephen K. Hayes'sTo-Shin Doare awarded warrior names on promotion to 3rd Dan, e.g. Kevin "Keitoshi" Casey.[37] TheTenshin ryūwebsite lists five instructors with Bugō, each granted to them by previous masters. Shiina Kazue, grandmaster ofHokushin Ittō-ryū, uses the Bugō "Naritane" (成胤). The character胤(-tane) is common to several generations of grandmaster in this school. Hidemine Jibiki, president of the All Japan Soft-Style Martial Arts Federation uses the Bugō "Buhō" (武峰).[38] Nakajima Shōhitsu, grandmaster ofShinkage-ryū, used the Bugō "Shōun" (勝雲). Seven of the past eight in the lineage have used the character勝(meaning "to win") in their names.[39] In the KidōkanIaidōDōjō in Osaka, new Dan grades are awarded Bugō such as 不聆庵[40]
https://en.wikipedia.org/wiki/Bug%C5%8D
Acourtesy name(Chinese:字;pinyin:zì;lit.'character'), also known as astyle name, is an additional name bestowed upon individuals at adulthood, complementing their given name.[1]This tradition is prevalent in theEast Asian cultural sphere, particularly inChina,Japan, andVietnam.[2]Courtesy names are a marker of adulthood and were historically given to men at the age of 20, and sometimes to women upon marriage. Unlikeart names, which are more akin topseudonymsorpen names, courtesy names served a formal and respectful purpose.[1]In traditional Chinese society, using someone's given name in adulthood was considered disrespectful among peers, making courtesy names essential for formal communication and writing. Courtesy names often reflect the meaning of the given name or use homophonic characters, and were typically disyllabic after theQin dynasty. The practice also extended to other East Asian cultures, and was sometimes adopted byMongolsandManchusduring theQing dynasty. The choice of a courtesy name was significant, intended to express moral integrity and respect within the cultural context. A courtesy name is a name traditionally given to Chinese men at the age of 20sui, marking theircoming of age. It was sometimes given to women, usually upon marriage.[1]The practice is no longer common in modern Chinese society. According to theBook of Rites, after a man reached adulthood, it was disrespectful for others of the same generation to address him by hisgiven name.[3]Thus, the given name was reserved for oneself and one's elders, whereas the courtesy name would be used by adults of the same generation to refer to one another on formal occasions or in writing. Another translation ofziis "style name", but this translation has been criticised as misleading, because it could imply an official or legal title.[1] Generally speaking, courtesy names before theQin dynastywere one syllable, and from the Qin to the 20th century they were mostlydisyllabic, consisting of twoChinese characters.[1]Courtesy names were often relative to the meaning of the person's given name, the relationship could be synonyms, relative affairs, or rarely but sometimes antonym. For example,Chiang Kai-shek's given name (中正,romanizedas Chung-cheng) and courtesy name (介石, romanized as Kai-shek) are both from theyù(豫) hexagram 16 ofI Ching.[4] Another way to form a courtesy name is to use the homophonic characterzi(子) – a respectful title for a man – as the first character of the disyllabic courtesy name. Thus, for example,Gongsun Qiao's courtesy name was Zichan (子產), andDu Fu's was Zimei (子美). It was also common to construct a courtesy name by using as the first character one which expresses the bearer's birth order among male siblings in his family. ThusConfucius, whose name was Kong Qiu (孔丘), was given the courtesy name Zhongni (仲尼), where the first characterzhongindicates that he was the second son born into his family. The characters commonly used arebo(伯) for the first,zhong(仲) for the second,shu(叔) for the third, andji(季) typically for the youngest, if the family consists of more than three sons. 
General Sun Jian's four sons, for instance, were Sun Ce (伯符, Bófú), Sun Quan (仲謀, Zhòngmóu), Sun Yi (叔弼, Shūbì) and Sun Kuang (季佐, Jìzuǒ).[5] Reflecting a general cultural tendency to regard names as significant, the choice of what name to bestow upon one's children was considered very important in traditional China.[6] Yan Zhitui of the Northern Qi dynasty asserted that whereas the purpose of a given name was to distinguish one person from another, a courtesy name should express the bearer's moral integrity.[citation needed] Prior to the twentieth century, sinicized Koreans, Vietnamese, and Japanese were also referred to by their courtesy name. The practice was also adopted by some Mongols and Manchus after the Qing conquest of China.[citation needed]
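The birth-order convention described above is regular enough to be expressed as a simple lookup. The snippet below is only an added, simplified illustration, not part of the original article; actual historical usage varied considerably.

```python
# Illustrative sketch: birth-order characters used in courtesy names (simplified).
BIRTH_ORDER = {1: "伯", 2: "仲", 3: "叔"}   # bo, zhong, shu
YOUNGEST = "季"                              # ji, typically the youngest son

def birth_order_character(position, total_sons):
    """Character conventionally marking a son's birth order among his brothers."""
    if total_sons > 3 and position == total_sons:
        return YOUNGEST
    return BIRTH_ORDER.get(position, YOUNGEST)

# Sun Jian's four sons, as listed above:
for i, name in enumerate(["Sun Ce", "Sun Quan", "Sun Yi", "Sun Kuang"], start=1):
    print(name, birth_order_character(i, 4))   # 伯, 仲, 叔, 季
```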
https://en.wikipedia.org/wiki/Courtesy_name
A code name, codename, call sign, or cryptonym is a code word or name used, sometimes clandestinely, to refer to another name, word, project, or person. Code names are often used for military purposes, or in espionage. They may also be used in industrial counter-espionage to protect secret projects and the like from business rivals, or to give names to projects whose marketing name has not yet been determined. Another reason for the use of names and phrases in the military is that they transmit with a lower level of cumulative errors over a walkie-talkie or radio link than actual names. The Achaemenid Empire under Darius I employed a network of spies called the King's Eye or the King's Ear.[1][2] These agents operated under anonymity, and "King's Eye" was not a specific person but rather a code name for the intelligence network that reported directly to the king.[2] The Carthaginian general Hannibal Barca reportedly used coded references for his agents and informants in Rome and among allied territories.[3] Some sources suggest that key figures in his intelligence operations were identified using nicknames instead of real names to avoid detection by Roman counterintelligence.[3] Julius Caesar used ciphers to encode messages and likely employed code names for key operatives.[4] His famous Caesar cipher (simple letter-shifting encryption) was used to disguise military commands.[4] He also referred to Marc Antony and other generals with shortened or altered names in correspondence to prevent interception from revealing strategic plans.[4] During the Jewish revolts against Rome, leaders and messengers used symbolic or misleading names in communications.[5][6] The Dead Sea Scrolls reference figures such as the "Teacher of Righteousness" and the "Wicked Priest", which may have functioned as code names to obscure real identities.[5][6] The Byzantine Empire's intelligence agents, particularly under Emperor Justinian I, operated under code names or titles rather than real identities.[7] Procopius suggests that spies within the Persian and Gothic courts were assigned allegorical names to protect them from discovery.[7] During World War II, names common to the Allies referring to nations, cities, geographical features, military units, military operations, diplomatic meetings, places, and individual persons were agreed upon, adapting pre-war naming procedures in use by the governments concerned. In the British case, names were administered and controlled by the Inter Services Security Board (ISSB), staffed by the War Office.[8] This procedure was coordinated with the United States when it entered the war. Random lists of names were issued to users in alphabetical blocks of ten words and were selected as required. Words became available for re-use after six months, and unused allocations could be reassigned at discretion and according to need. Judicious selection from the available allocation could produce clever meanings and result in an aptronym or backronym, although policy was to select words that had no obviously deducible connection with what they were supposed to be concealing. Those for the major conference meetings had a partial naming sequence referring to devices or instruments which had a number as part of their meaning; for example, the third meeting was "TRIDENT". Joseph Stalin, whose last name means "man of steel", was given the name "GLYPTIC", meaning "an image carved out of stone".
Ewen Montagu, a British Naval intelligence officer, discloses in Beyond Top Secret Ultra that during World War II, Nazi Germany habitually used ad hoc code names as nicknames which often openly revealed or strongly hinted at their content or function; he cites a number of such German code names as examples. Conversely, Operation Wacht am Rhein (Watch on the Rhine) was deliberately named to suggest the opposite of its purpose – a defensive "watch" as opposed to a massive blitzkrieg operation – just as was Operation Weserübung (Weser-exercise), which signified the plans to invade Norway and Denmark in April 1940. Britain and the United States developed the security policy of assigning code names intended to give no such clues to the uninitiated. For example, the British countermeasures against the V-2 were called Operation Crossbow. The atomic bomb project centered in New Mexico was called the Manhattan Project, derived from the Manhattan Engineer District which managed the program. The code name for the American A-12/SR-71 spy plane project, producing the fastest, highest-flying aircraft in the world, was Oxcart. The American group that planned that country's first ICBM was called the Teapot Committee. Although the word could stand for a menace to shipping (in this case, that of Japan), the American code name for the attack on the subtropical island of Okinawa in World War II was Operation Iceberg. The Soviet Union's project to base missiles in Cuba was named Operation Anadyr after its closest bomber base to the US (just across the Bering Strait from Nome, Alaska). The names of colors are generally avoided in American practice to avoid confusion with meteorological reporting practices. Britain, in contrast, made deliberately non-meaningful use of them, through the system of rainbow codes. Although German and Italian aircraft were not given code names by their Allied opponents, in 1942, Captain Frank T. McCoy, an intelligence officer of the USAAF, invented a system for the identification of Japanese military aircraft. Initially using short, "hillbilly" boys' names such as "Pete", "Jake", and "Rufe", the system was later extended to include girls' names and names of trees and birds, and became widely used by the Allies throughout the Pacific theater of war. This type of naming scheme differs from the other use of code names in that it does not have to be kept secret, but is a means of identification where the official nomenclature is unknown or uncertain. The policy of recognition reporting names was continued into the Cold War for Soviet, other Warsaw Pact, and Communist Chinese aircraft. Although this was started by the Air Standards Co-ordinating Committee (ASCC) formed by the United States, United Kingdom, Canada, Australia, and New Zealand, it was extended throughout NATO as the NATO reporting name for aircraft, rockets and missiles. These names were considered by the Soviets as being like a nickname given to one's unit by the opponents in a battle. The Soviets did not like the Sukhoi Su-25 getting the code name "Frogfoot".[citation needed] However, some names were appropriate, such as "Condor" for the Antonov An-124, or, most famously, "Fulcrum" for the Mikoyan MiG-29, which had a "pivotal" role in Soviet air strategy. Code names were adopted by the following process. Aerial or space reconnaissance would note a new aircraft at a Warsaw Pact airbase. The intelligence units would then assign it a code name consisting of the official abbreviation of the base, then a letter, for example, "Ram-A", signifying an aircraft sighted at Ramenskoye Airport.
Missiles were given designations like "TT-5", for the fifth rocket seen atTyura-Tam. When more information resulted in knowing a bit about what a missile was used for, it would be given a designation like "SS-6", for the sixth surface-to-surface missile design reported. Finally, when either an aircraft or a missile was able to be photographed with a hand-held camera, instead of a reconnaissance aircraft, it was given a name like "Flanker" or "Scud" – always an English word, as international pilots worldwide are required to learn English. The Soviet manufacturer or designation – which may be mistakenly inferred by NATO – has nothing to do with it. Jet-powered aircraft received two-syllable names likeFoxbat, while propeller aircraft were designated with short names likeBull. Fighter names began with an "F", bombers with a "B", cargo aircraft with a "C". Training aircraft and reconnaissance aircraft were grouped under the word "miscellaneous", and received "M". The same convention applies to missiles, with air-launched ground attack missiles beginning with the letter "K" and surface-to-surface missiles (ranging fromintercontinental ballistic missilestoantitankrockets) with the letter "S", air-to-air missiles "A", and surface-to-air missiles "G". Throughout the Second World War, the British allocation practice favored one-word code names (Jubilee,Frankton). That of the Americans favored longer compound words, although the nameOverlordwas personally chosen byWinston Churchillhimself. Many examples of both types can be cited, as can exceptions. Winston Churchill was particular about the quality of code names. He insisted that code words, especially for dangerous operations, would be not overly grand nor petty nor common. One emotional goal he mentions is to never have to report to anyone that their son "was killed in an operation called 'Bunnyhug' or 'Ballyhoo'."[12] Presently, British forces tend to use one-word names, presumably in keeping with their post-World War II policy of reserving single words for operations and two-word names for exercises. British operation code names are usually randomly generated by a computer and rarely reveal its components or any political implications unlike the American names (e.g., the2003 invasion of Iraqwas called "Operation Telic" compared to Americans' "Operation Iraqi Freedom", obviously chosen for propaganda rather than secrecy). Americans prefer two-word names, whereas the Canadians and Australians use either. The French military currently prefer names drawn from nature (such as colors or the names of animals), for instanceOpération Daguet("brocket deer") orOpération Baliste("Triggerfish"). The CIA uses alphabetical prefixes to designate the part of the agency supporting an operation. In many cases with the United States, the first word of the name has to do with the intent of the program. Programs with "have" as the first word, such asHave Bluefor the stealth fighter development, are developmental programs, not meant to produce a production aircraft. Programs that start with Senior, such as Senior Trend for the F-117, are for aircraft in testing meant to enter production.[citation needed] In the United States code names are commonly set entirely in upper case.[13]This is not done in other countries, though for the UK in British documents the code name is in upper case while operation is shortened to OP e.g., "Op. TELIC". 
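The ASCC/NATO reporting-name conventions described above amount to a small lookup table. The fragment below is an added illustration only, not an official tool; it simply encodes the first-letter convention for aircraft and missile categories.

```python
# Illustrative sketch of the reporting-name first-letter conventions described above.
AIRCRAFT_PREFIX = {
    "fighter": "F",         # e.g. "Fulcrum", "Flanker"
    "bomber": "B",          # e.g. "Bull"
    "cargo": "C",           # e.g. "Condor"
    "miscellaneous": "M",   # trainers and reconnaissance types
}
MISSILE_PREFIX = {
    "air-to-surface": "K",
    "surface-to-surface": "S",   # e.g. "Scud"
    "air-to-air": "A",
    "surface-to-air": "G",
}

def fits_convention(name, category, table=AIRCRAFT_PREFIX):
    """Check whether a candidate reporting name starts with the expected letter."""
    return name.upper().startswith(table[category])

print(fits_convention("Fulcrum", "fighter"))                           # True
print(fits_convention("Scud", "surface-to-surface", MISSILE_PREFIX))   # True
```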
The public nature of modern operation names presents an opportunity for a bit of public relations (Operation Just Cause), or for controversy over the naming choice (Operation Infinite Justice, renamed Operation Enduring Freedom). Computers are now used to aid in the selection. Further, there is a distinction between the secret names used during former wars and the published names of recent ones. A project code name is a code name (usually a single word, short phrase or acronym) which is given to a project being developed by industry, academia, government, and other concerns. Project code names are typically used for several reasons. Different organizations have different policies regarding the use and publication of project code names. Some companies take great pains to never discuss or disclose project code names outside of the company (other than with outside entities who have a need to know, and who typically are bound by a non-disclosure agreement). Other companies never use them in official or formal communications, but widely disseminate project code names through informal channels (often in an attempt to create a marketing buzz for the project). Still others (such as Microsoft) discuss code names publicly, and routinely use project code names on beta releases and such, but remove them from the final product(s). In the case of Windows 95, the code name "CHICAGO" was left embedded in the INF file structure and remained required through Windows Me. At the other end of the spectrum, Apple includes the project code names for Mac OS X as part of the official name of the final product, a practice that was started in 2002 with Mac OS X v10.2 "Jaguar". Google and the AOSP also used this for their Android operating system until 2013, when the code name was different from the release name.
https://en.wikipedia.org/wiki/Code_name
A data haven, like a corporate haven or tax haven, is a refuge for uninterrupted or unregulated data.[1][2][3] Data havens are locations with legal environments that are friendly to the concept of a computer network freely holding data and even protecting its content and associated information. They tend to fit into three categories: a physical locality with weak information-system enforcement and extradition laws, a physical locality with intentionally strong protections of data, and virtual domains designed to secure data via technical means (such as encryption) regardless of any legal environment. Tor's onion space, I2P (both hidden services), HavenCo (centralized), and Freenet (decentralized) are four models of modern-day virtual data havens. Reasons for establishing data havens include access to free political speech for users in countries where censorship of the Internet is practiced; a number of other reasons have also been cited. The 1978 report of the British government's Data Protection Committee expressed concern that different privacy standards in different countries would lead to the transfer of personal data to countries with weaker protections; it feared that Britain might become a "data haven".[4] Also in 1978, Adrian Norman published a mock consulting study on the feasibility of setting up a company providing a wide range of data haven services, called "Project Goldfish".[5] Science fiction novelist William Gibson used the term in his novels Count Zero and Mona Lisa Overdrive, as did Bruce Sterling in Islands in the Net. The 1990s segments of Neal Stephenson's 1999 novel Cryptonomicon concern a small group of entrepreneurs attempting to create a data haven.
https://en.wikipedia.org/wiki/Data_haven
Theliteraryconcept of theheteronymrefers to one or more imaginary character(s) created by a writer to write in different styles. Heteronyms differ frompen names(orpseudonyms, from the Greek words for "false" and "name") in that the latter are just false names, while the former are characters that have their own supposed physiques, biographies, and writing styles.[1] Heteronyms were named and developed by thePortuguesewriter and poetFernando Pessoain the early 20th century, but they were thoroughly explored by the Danish philosopherKierkegaardin the 19th century and have also been used by other writers. In Pessoa's case, there are at least 70 heteronyms (according to the latest count by Pessoa's editor Teresa Rita Lopes). Some of them are relatives or know each other; they criticise and translate each other's works. Pessoa's three chief heteronyms areAlberto Caeiro,Ricardo ReisandÁlvaro de Campos; the latter two consider the former their master. There are also two whom Pessoa calledsemi-heteronyms,Bernardo Soaresand the Baron of Teive, who are semi-autobiographical characters who write in prose, "a mere mutilation" of the Pessoa personality. There is, lastly, anorthonym,Fernando Pessoa, the namesake of the author, who also considers Caeiro his master. The heteronyms dialogue with each other and even with Pessoa in what he calls "the theatre of being" or "drama in people". They sometimes intervened in Pessoa's social life: during Pessoa's only attested romance, a jealous Campos wrote letters to the girl, who enjoyed the game and wrote back. Pessoa, also an amateur astrologer, created in 1915 the heteronym Raphael Baldaya, a long bearded astrologer. He elaborated horoscopes of his main heteronyms in order to determine their personalities. Fernando Pessoa on the heteronyms How do I write in the name of these three? Caeiro, through sheer and unexpected inspiration, without knowing or even suspecting that I'm going to write in his name. Ricardo Reis, after an abstract meditation, which suddenly takes concrete shape in an ode. Campos, when I feel a sudden impulse to write and don't know what. (My semi-heteronym Bernardo Soares, who in many ways resembles Álvaro de Campos, always appears when I'm sleepy or drowsy, so that my qualities of inhibition and rational thought are suspended; his prose is an endless reverie. He's a semi-heteronym because his personality, although not my own, doesn't differ from my own but is a mere mutilation of it. He's me without my rationalism and emotions. His prose is the same as mine, except for certain formal restraint that reason imposes on my own writing, and his Portuguese is exactly the same – whereas Caeiro writes bad Portuguese, Campos writes it reasonably well but with mistakes such as "me myself" instead of "I myself", etc.., and Reis writes better than I, but with a purism I find excessive...) George Steiner on the heteronyms Pseudonymous writing is not rare in literature or philosophy (Kierkegaard provides a celebrated instance). 'Heteronyms', as Pessoa called and defined them, are something different and exceedingly strange. For each of his 'voices', Pessoa conceived a highly distinctive poetic idiom and technique, a complex biography, a context of literary influence and polemics and, most arrestingly of all, subtle interrelations and reciprocities of awareness. Octavio Paz defines Caeiro as 'everything that Pessoa is not and more'. He is a man magnificently at home in nature, a virtuoso of pre-Christian innocence, almost a Portuguese teacher of Zen. 
Reis is a stoic Horatian, a pagan believer in fate, a player with classical myths less original than Caeiro, but more representative of modern symbolism. De Campos emerges as a Whitmanesque futurist, a dreamer in drunkenness, the Dionysian singer of what is oceanic and windswept in Lisbon. None of this triad resembles the metaphysical solitude, the sense of being an occultist medium, which characterise Pessoa's 'own' intimate verse. Richard Zenith on the heteronyms: Álvaro de Campos was the poet-persona who grew old with Pessoa and held a privileged place in his inventor's heart. Soares, the assistant bookkeeper, and Campos, the naval engineer, never met in the pen-and-paper drama of Pessoa's heteronyms, who were frequently pitted against one another, but the two writer-characters were spiritual brothers, even if their worldly occupations were at odds. Campos wrote prose as well as poetry, and much of it reads as if it came, so to speak, from the hand of Soares. Pessoa was often unsure who was writing when he wrote, and it is curious that the very first item among the more than 25,000 pieces that make up his archives in the National Library of Lisbon bears the heading A. de C. (?) or B. de D. (or something else). This heteronym was created by Fernando Pessoa as an alter ego who inherited his role from Alexander Search, who in turn had inherited it from Charles Robert Anon. The latter was created when Pessoa lived in Durban, while Search was created in 1906, when Pessoa was a student at Lisbon's University, in search of his Portuguese cultural identity after his return from Durban. Anon was supposedly English, while Search, although English, was born in Lisbon. After the Portuguese republican revolution in 1910, and the consequent patriotic atmosphere, Pessoa dropped his English heteronyms, and Álvaro de Campos was created as a Portuguese alter ego. Álvaro de Campos, born in 1890, was supposedly a Portuguese naval engineer who graduated in Glasgow. Campos sailed to the Orient, having experiences that he describes in his poem "Opiarium". He worked in London (1915), Barrow-in-Furness and Newcastle (1922), but became unemployed and returned to Lisbon in 1926, the year of the military putsch that installed the dictatorship. He also wrote "Lisbon Revisited (1923)" and "Lisbon Revisited (1926)". Campos was a decadent poet, but he embraced Futurism; his poetry was strongly influenced by Walt Whitman and Marinetti. He wrote the "Ode Triumphal" and the "Ode Maritime", published in the literary journal Orpheu in 1915, and others left unfinished. While unemployed in Lisbon, he became depressed, returning to Decadentism and Pessimism. He then wrote his masterwork, "Tobacco Shop", published in 1933 in the literary journal Presença. Pessoa created this heteronym as "Master" of the other heteronyms and even of Pessoa himself. This fictional character was born in 1889 and died in 1915, at 26, almost the same age as Pessoa's best friend Mário de Sá-Carneiro, who killed himself in Paris in 1916, less than a month shy of his 26th birthday. Thus, Sá-Carneiro seems to have inspired, at least partially, Alberto Caeiro. Caeiro was a humble man of poor education but a great naïf poet; he was born in Lisbon but lived almost all his life in the countryside, in Ribatejo, near Lisbon, where he died. However, his poetry is full of philosophy. He wrote "Poemas Inconjuntos" (Disconnected Poems) and "O Guardador de Rebanhos" (The Keeper of Sheep), published by Fernando Pessoa in his art journal Athena in 1924–25.
In a famous letter to the literary critic Adolfo Casais Monteiro, dated January 13, 1935, Pessoa describes his "triumphal day", March 8, 1914, when Caeiro "appeared", making him write down all the poetry of "The Keeper of Sheep" at once. Caeiro influenced the Neopaganism of Pessoa and of the heteronyms António Mora and Ricardo Reis. Poetically, he influenced mainly the Neoclassicism of Reis, which is connected to Paganism. This heteronym was created by Pessoa as a Portuguese doctor born in Porto on September 19, 1887. Reis supposedly studied at a boarding school run by Jesuits, where he received a classical education. He was an amateur Latinist and poet; politically a monarchist, he went into exile in Brazil after the defeat of a monarchist rebellion against the Portuguese Republic in 1919. Ricardo Reis reveals his Epicureanism and Stoicism in the "Odes by Ricardo Reis", published by Pessoa in 1924 in his literary journal Athena. Since Pessoa did not determine the death of Reis, one can assume that he survived his author, who died in 1935. In The Year of the Death of Ricardo Reis (1984), Portuguese Nobel prize winner José Saramago rebuilds, in his own personal outlook, the literary world of this heteronym after 1935, creating a dialogue between Ricardo Reis and the ghost of his author.
https://en.wikipedia.org/wiki/Heteronym_(literature)
A horse name is a secondary noble title or a popular name for members of Ethiopian royalty; in some cases the "horse names" are the only name known for a ruler. They take the form of "father of X", where "X" is the name of the person's warhorse. A number of horse names of Ethiopian nobility are recorded. This equine-related article is a stub. You can help Wikipedia by expanding it. This Ethiopian royalty–related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Horse_name
A hypocorism (/haɪˈpɒkərɪzəm/ hy-POK-ər-iz-əm or /ˌhaɪpəˈkɒrɪzəm/ HY-pə-KORR-iz-əm; from Ancient Greek ὑποκόρισμα hypokórisma; sometimes also hypocoristic), or pet name, is a name used to show affection for a person.[1][2] It may be a diminutive form of a person's name, such as Izzy for Isabel or Bob for Robert, or it may be unrelated. Etymologically, the term hypocorism is from Ancient Greek ὑποκόρισμα (hypokórisma), from ὑποκορίζεσθαι (hypokorízesthai), meaning 'to call by endearing names'. The prefix hypo- refers in this case to creating a diminutive, something that is smaller in a tender or affectionate sense; the root korízesthai originates in the Greek for 'to caress' or 'to treat with tokens of affection', and is related to the words κόρος (kóros) 'boy, youth' and κόρη (kórē) 'girl, young woman'. In linguistics, the term can be used more specifically to refer to the morphological process by which the standard form of a word is transformed into a form denoting affection, or to words resulting from this process. In English, a word is often clipped down to a closed monosyllable and then suffixed with ‑y or ‑ie (phonologically /-i/).[3] Sometimes the suffix -o is used as well, along with other forms[4][5][6] or templates.[7] This name-related article is a stub. You can help Wikipedia by expanding it.
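The English clip-and-suffix template mentioned above can be caricatured in a few lines of code. The function below is only an added, deliberately naive illustration; real hypocorisms are far less regular, as the Izzy and Bob examples show.

```python
# Deliberately naive sketch of the English "clip, then add -y/-ie" template.
VOWELS = "aeiou"

def naive_hypocorism(name, suffix="y"):
    """Clip up to the first consonant after the first vowel, then add a suffix."""
    lower = name.lower()
    for i, ch in enumerate(lower):
        if ch in VOWELS:
            j = i + 1
            while j < len(lower) and lower[j] in VOWELS:
                j += 1
            if j < len(lower):
                return (lower[:j + 1] + suffix).capitalize()
            break
    return name  # no usable clipping point found

print(naive_hypocorism("Robert"))         # "Roby"  -- compare the real "Robbie"/"Bob"
print(naive_hypocorism("Isabel", "ie"))   # "Isie"  -- real forms like "Izzy" are irregular
```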
https://en.wikipedia.org/wiki/Hypocorism
John Doe (male) and Jane Doe (female) are multiple-use placeholder names used in the British, Canadian, and American legal systems when the true name of a person is unknown or is being intentionally concealed.[1][2][3] In the context of law enforcement in the United States, such names are often used to refer to a corpse whose identity is unknown or cannot be confirmed. These names are also often used to refer to a hypothetical "everyman" in other contexts, like John Q. Public or "Joe Public". There are many variants of the above names, including John (or Richard)/Jane Roe, John/Jane Smith, John/Jane Bloggs, and Johnie/Janie Doe or just Baby Doe for children. A. N. Other, which is gender neutral, is also a placeholder name, mainly used in the United Kingdom, alongside Joe/Jo Bloggs and the now occasional use of the "John" and "Jane Doe" names. In other English-speaking countries, unique placeholder names, numbers or codenames have become more commonly used in the context of police investigations. This has included the United Kingdom, where usage of "John Doe" originated during the Middle Ages. However, the legal term John Doe injunction or John Doe order[4] has survived in English law and other legal systems influenced by it. Other names, such as "Joe Bloggs" or "John Smith", have sometimes been used informally as placeholders for an everyman in the UK, Australia, and New Zealand; however, such names are seldom used in legal or police circles in the same sense as John Doe. A number of well-known legal cases have been named after placeholders. Under the legal terminology of Ancient Rome, the names "Numerius Negidius" and "Aulus Agerius" were used in relation to hypothetical defendants and plaintiffs.[7] The names "John Doe" (or "John Doo") and "Richard Roe" (along with "John Roe") were regularly invoked in English legal instruments to satisfy technical requirements governing standing and jurisdiction, beginning perhaps as early as the reign of England's King Edward III (1327–1377).[8] Though the rationale behind the choices of Doe and Roe is unknown, there are many suggested folk etymologies.[9] Other fictitious names for a person involved in litigation in medieval English law were "John Noakes" (or "Nokes") and "John-a-Stiles" (or "John Stiles").[10] The Oxford English Dictionary states that John Doe is "the name given to the fictitious lessee of the plaintiff, in the (now obsolete in the UK) mixed action of ejectment, the fictitious defendant being called Richard Roe".[9] This usage is mocked in the 1834 English song "John Doe and Richard Roe": Two giants live in Britain's land, John Doe and Richard Roe, Who always travel hand in hand, John Doe and Richard Roe. Their fee-faw-fum's an ancient plan To smell the purse of an Englishman, And, 'ecod, they'll suck it all they can, John Doe and Richard Roe ...[11] This particular use became obsolete in the UK in 1852: "As is well known, the device of involving real people as notional lessees and ejectors was used to enable freeholders to sue the real ejectors. These were then replaced by the fictional characters John Doe and Richard Roe. Eventually the medieval remedies were (mostly) abolished by the Real Property Limitation Act of 1833; the fictional characters of John Doe and Richard Roe by the Common Law Procedure Act 1852; and the forms of action themselves by the Judicature Acts 1873–75." Secretary of State for Environment, Food, and Rural Affairs v Meier and others (2009).[12] In the UK, usage of "John Doe" survives mainly in the form of the John Doe injunction or John Doe order (see above).
8.02 If an unknown person has possession of the confidential personal information and is threatening to disclose it, a 'John Doe' injunction may be sought against that person. The first time this form of injunction had been used since 1852 in the United Kingdom was in 2005, when lawyers acting for JK Rowling and her publishers obtained an interim order against an unidentified person who had offered to sell chapters of a stolen copy of an unpublished Harry Potter novel to the media.[13] Unlike in the United States, the name "John Doe" does not actually appear in the formal name of the case, for example: X & Y v Persons Unknown [2007] HRLR 4.[14] Well-known cases of unidentified decedents include "Caledonia Jane Doe" (1979), "Princess Doe" (1982) and "Walker County Jane Doe" (1980), all of whom have been identified. In 1997, New York City police discovered a decapitated body and were not able to find the killer. The body was named Peaches and also Jane Doe 3. The baby victim in a 2001 murder case in Kansas City, Missouri, was referred to as Precious Doe.[15] In 2009, the New York Times reported the difficulties and unwanted attention experienced by a man actually named John Doe, who had often been suspected of using a pseudonym. He had been questioned repeatedly by airport security staff. Another man named John Doe was often suspected of being an incognito celebrity.[16] In cases where a large number of unidentified individuals are mentioned, numbers may be appended, such as "Doe #2" or "Doe II". Operation Delego (2009), which targeted an international child sexual abuse ring, cited 21 numbered "John Does", as well as other people known by the surnames "Doe", "Roe", "Hoe" and "Poe". "John Stiles" and "Richard Miles" have been used for the third and fourth participants in an action, and "Mary Major" has been used in some federal cases in the US.[17] "James Doe" and "Judy Doe" are among other common variants. Less often, other surnames ending in -oe have been used when more than two unknown or unidentified persons are named in U.S. court proceedings. In Massachusetts, "Mary Moe" is used to refer to pregnant women under the age of 18 petitioning the Superior Court for a judicial bypass exception to the parental consent requirement for abortion.[21] "Mary Moe" is also used to refer to such cases generally, i.e. "Mary Moe cases". Sometimes "Mary Doe" may be used for the individuals. Parallels exist in other countries as well. Currently there are no court rules about pseudonym use. The rules of civil procedure ... are silent on the matter ... Rule of Civil Procedure 10(a) reads, '... In the complaint, the title of the action shall include the names of all the parties ...' The rule contains no guidance as to what parties should do to keep their names confidential.[30] Prior to ... 1969, only one Supreme Court case, three court of appeals' decisions, and one district court decision in the previous quarter-century featured an anonymous individual as the sole or lead plaintiff. Between 1969 and 22 January 1973, the date when the Supreme Court decided Roe and Doe, there were twenty-one district court and two court of appeals decisions featuring anonymous plaintiffs.[31]
https://en.wikipedia.org/wiki/John_Doe
The Latinisation of names in the vernacular was a procedure deemed necessary for the sake of conformity by scribes and authors when incorporating references to such persons in Latin texts. The procedure was used in the era of the Roman Republic and Empire. It was used continuously by the Papacy from the earliest times, in religious tracts and in diplomatic and legal documents. It was used by the early European monasteries. Following the Norman Conquest of England, it was used by Anglo-Norman clerics and scribes when drawing up charters. Its use was revived in the Renaissance, when the new learning was written down in Latin and drew much on the work of Greek, Arabic and other non-Latin ancient authors. The names of contemporary Italian and European scholars likewise had to be Latinised in order to be quoted in such treatises.
The different eras produced their own styles and peculiarities. Sophistication was the trademark of the Renaissance Latinisers. The Anglo-Norman scribes, on the other hand, were not so learned, and often simply translated the vernacular name into Latin words based on similar sounds, without much effort to make sense or to avoid absurdity, which sometimes produced strange results.
In Central European academic and ecclesiastical circles, a specific practice of Latinisation arose during the 15th century with the rediscovery of ancient literature, whereby writers sought a connection to the ancient authors by taking up Latinised surnames or international pen names. Such names follow the naming conventions of the ancient languages, especially Latin and Greek, so occasional Greek names serving the same function are also included here. Especially in the German-speaking regions, the use of a "Humanistenname" or "Gelehrtenname" was common for many an academic, cleric, and secular administrator who wished to ascend in societal rank. The other region where the practice became equally common was Scandinavia and the Swedish Baltic colonies in the 1600s, where such names were called 'lärda namn' or 'humanistnamn'.[10] Further reasons for assuming such internationally recognisable names, especially in Scandinavia, included leaving agrarian conditions behind and embracing an urban and cosmopolitan way of life.[11] Some academics never had a surname or a patronymic surname, as was customary in their region of origin; yet academics came to Central European universities from all corners of Europe, bearing surnames from unfamiliar languages, so clarity in distinguishing students was necessary. Some Latinisations and Grecisations are exact translations of vernacular profession surnames or dwelling names, but others seem to bear no known connection or resemblance. Humanist names reached varying degrees of stability and heritability, and some exist to this day.[12][13] Recent articles and a dissertation by Daniel Kroiß have systematically categorised the origins of humanist names and their declension patterns in the German- and Dutch-speaking regions.
Some humanist names derived from common professions, replacing the vernacular term, and were found throughout Central European university cities. They included:
Some humanist surnames that were not clearly based on profession or location included:
The Complete Peerage (1913) states, concerning the Latinisation of English names:[15] "When a clerk had to render a name in a charter he usually sought for the nearest Latin equivalent, sometimes took a correct one, as "de Bello Campo" for "Beauchamp"; sometimes a grotesque one".
The latter refers to the mediaeval Anglo-Norman family of Orescuilz, which held, amongst others, the Somersetshire manor of Sandford Orcas (named after it),[16] and whose surname was Latinised as de Aureis Testiculis,[17][18] from the French "Couilles d'Or".[19]
A list of "Latin forms of English surnames" is included as an appendix in Andrew Wright's Court Hand Restored, or the Student's Assistant in reading Old Deeds, Charters, Records, etc.,[20] published in nine editions up to 1879. In 1910 Charles Trice Martin expanded on Wright's list (the 9th edition of which he had edited) in his The Record Interpreter: a collection of abbreviations, Latin words and names used in English historical manuscripts and records, which included a chapter "Latin forms of English Surnames".[21] In compiling his list he acknowledged[22] the assistance of an anonymous work, The Norman People and their Existing Descendants (London, 1874).[23] In the preface, p. xi, Martin stated of that chapter: "Many of the [place names and] surnames have been found in classes of records which contain documents in both languages referring to the same case, like the Chancery Proceedings, in which bills and answers are in English and writs in Latin."
Martin stated that some of the Latin names were "due to the ingenuity" of officials and clerks who inserted what they thought would be a translation of an English name, being ignorant of its real meaning and history. This led to spurious translations such as Ventus Morbidus (literally "sick wind") for the place name 'Windsor', and de Umbrosa Quercu (literally "from the shady oak") for the surname 'Dimock'. He went on to say that the list includes many names collected from Latin inscriptions on brasses, tombstones, and other monuments, many of them dating to the sixteenth century and later, and said that he had supplied the English equivalents of these from other sources of information.
One of the most abundant sources of Latinised names is biological taxonomic nomenclature, particularly binomial nomenclature. Many thousands of species are named after individuals, chiefly but not exclusively scientists. This most often involves, in principle, creating a Latinised equivalent of the name in question. In some cases this involves a traditional Latinisation; for example, the grey penduline tit, Anthoscopus caroli, derives its specific name from the genitive of the traditional Latin form Carolus for the first name of the Swedish explorer Karl Johan Andersson. In most cases, however, the names are "one-off" Latinised forms produced by adding the genitive endings -ii or -i for a man, -ae for a woman, or -orum for a plural to a family name. For example, a name such as Macrochelys temminckii notionally represents a Latinisation of the family name of Coenraad Jacob Temminck to "Temminckius". Another example, Acisoma attenboroughi, Latinises the name of Sir David Attenborough as if "Attenboroughus".
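The ending rule just described can be sketched as a short program. The following Python snippet is a minimal, purely illustrative sketch of the convention as stated above: the function name eponym_epithet and its parameters are invented for this example, it belongs to no nomenclatural code or real software, and the choice between -ii and -i for a man is left to the caller because actual usage varies from author to author.

# Illustrative sketch only (hypothetical helper, not part of any real taxonomy tool):
# applies the genitive endings described above to a person's family name.
def eponym_epithet(surname: str, gender: str = "m",
                   plural: bool = False, male_ending: str = "ii") -> str:
    """Return a toy Latinised specific epithet for a family name."""
    stem = surname.lower()
    if plural:
        return stem + "orum"   # honouring several people, e.g. a whole family
    if gender == "f":
        return stem + "ae"     # honouring a woman
    # For a man, either -ii or -i is seen in practice; the ending is a
    # parameter here rather than a fixed rule.
    return stem + male_ending

print(eponym_epithet("Temminck"))                       # temminckii, as in Macrochelys temminckii
print(eponym_epithet("Attenborough", male_ending="i"))  # attenboroughi, as in Acisoma attenboroughi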
https://en.wikipedia.org/wiki/List_of_Latinised_names
This is a list of pseudonyms, in various categories. A pseudonym is a name adopted by a person for a particular purpose, which differs from their true name. A pseudonym may be used by social activists or politicians for political purposes or by others for religious purposes. It may be a soldier's nom de guerre or an author's nom de plume. It may be a performer's stage name or an alias used by visual artists, athletes, fashion designers, or criminals. Pseudonyms are occasionally used in fiction, such as by superheroes or other fictional characters.
https://en.wikipedia.org/wiki/List_of_pseudonyms