Epistemology is the branch of philosophy that examines the nature, origin, and limits of knowledge. Also called "the theory of knowledge", it explores different types of knowledge, such as propositional knowledge about facts, practical knowledge in the form of skills, and knowledge by acquaintance as a familiarity through experience. Epistemologists study the concepts of belief, truth, and justification to understand the nature of knowledge. To discover how knowledge arises, they investigate sources of justification, such as perception, introspection, memory, reason, and testimony.
The school of skepticism questions the human ability to attain knowledge, while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism debate whether justification is determined solely by mental states or also by external circumstances.
Separate branches of epistemology focus on knowledge in specific fields, like scientific, mathematical, moral, and religious knowledge. Naturalized epistemology relies on empirical methods and discoveries, whereas formal epistemology uses formal tools from logic. Social epistemology investigates the communal aspect of knowledge, and historical epistemology examines its historical conditions. Epistemology is closely related to psychology, which describes the beliefs people hold, while epistemology studies the norms governing the evaluation of beliefs. It also intersects with fields such as decision theory, education, and anthropology.
Early reflections on the nature, sources, and scope of knowledge are found in ancient Greek, Indian, and Chinese philosophy. The relation between reason and faith was a central topic in the medieval period. The modern era was characterized by the contrasting perspectives of empiricism and rationalism. Epistemologists in the 20th century examined the components, structure, and value of knowledge while integrating insights from the natural sciences and linguistics.
Epistemology is the philosophical study of knowledge and related concepts, such as justification. Also called theory of knowledge,[a] it examines the nature and types of knowledge. It further investigates the sources of knowledge, like perception, inference, and testimony, to understand how knowledge is created. Another set of questions concerns the extent and limits of knowledge, addressing what people can and cannot know.[2] Central concepts in epistemology include belief, truth, evidence, and reason.[3] As one of the main branches of philosophy, epistemology stands alongside fields like ethics, logic, and metaphysics.[4] The term can also refer to specific positions of philosophers within this branch, as in Plato's epistemology and Immanuel Kant's epistemology.[5]
Epistemology explores how people should acquire beliefs. It determines which beliefs or forms of belief acquisition meet the standards or epistemic goals of knowledge and which ones fail, thereby providing an evaluation of beliefs. The fields of psychology and cognitive sociology are also interested in beliefs and related cognitive processes, but examine them from a different perspective. Unlike epistemology, they study the beliefs people actually have and how people acquire them instead of examining the evaluative norms of these processes.[6] In this regard, epistemology is a normative discipline,[b] whereas psychology and cognitive sociology are descriptive disciplines.[8][c] Epistemology is relevant to many descriptive and normative disciplines, such as the other branches of philosophy and the sciences, by exploring the principles of how they may arrive at knowledge.[11]
The word epistemology comes from the ancient Greek terms ἐπιστήμη (episteme, meaning knowledge or understanding) and λόγος (logos, meaning study of or reason), literally, the study of knowledge. Despite its ancient roots, the word itself was only coined in the 19th century to designate this field as a distinct branch of philosophy.[12][d]
Epistemologists examine several foundational concepts to understand their essences and rely on them to formulate theories. Various epistemological disagreements have their roots in disputes about the nature and function of these concepts, like the controversies surrounding the definition of knowledge and the role of justification in it.[17]
Knowledge is an awareness, familiarity, understanding, or skill. Its various forms all involve a cognitive success through which a person establishes epistemic contact with reality.[18] Epistemologists typically understand knowledge as an aspect of individuals, generally as a cognitive mental state that helps them understand, interpret, and interact with the world. While this core sense is of particular interest to epistemologists, the term also has other meanings. For example, the epistemology of groups examines knowledge as a characteristic of a group of people who share ideas.[19] The term can also refer to information stored in documents and computers.[20]
Knowledge contrasts with ignorance, often simply defined as the absence of knowledge. Knowledge is usually accompanied by ignorance because people rarely have complete knowledge of a field, forcing them to rely on incomplete or uncertain information when making decisions.[21] Even though many forms of ignorance can be mitigated through education and research, certain limits to human understanding result in inevitable ignorance.[22] Some limitations are inherent in the human cognitive faculties themselves, such as the inability to know facts too complex for the human mind to conceive.[23] Others depend on external circumstances when no access to the relevant information exists.[24]
Epistemologists disagree on how much people know, for example, whether fallible beliefs can amount to knowledge or whether absolute certainty is required. The most stringent position is taken by radical skeptics, who argue that there is no knowledge at all.[25]
Epistemologists distinguish between different types of knowledge.[27] Their primary interest is in knowledge of facts, called propositional knowledge.[28] It is theoretical knowledge that can be expressed in declarative sentences using a that-clause, like "Ravi knows that kangaroos hop". For this reason, it is also called knowledge-that.[29][e] Epistemologists often understand it as a relation between a knower and a known proposition, in the case above between the person Ravi and the proposition "kangaroos hop".[30] It is use-independent since it is not tied to one specific purpose, unlike practical knowledge. It is a mental representation that embodies concepts and ideas to reflect reality.[31] Because of its theoretical nature, it is typically held that only creatures with highly developed minds, such as humans, possess propositional knowledge.[32]
Propositional knowledge contrasts with non-propositional knowledge in the form of knowledge-how and knowledge by acquaintance.[33] Knowledge-how is a practical ability or skill, like knowing how to read or how to prepare lasagna.[34] It is usually tied to a specific goal and not mastered in the abstract without concrete practice.[35] To know something by acquaintance means to have an immediate familiarity with or awareness of it, usually as a result of direct experiential contact. Examples are "familiarity with the city of Perth", "knowing the taste of tsampa", and "knowing Marta Vieira da Silva personally".[36]
Another influential distinction in epistemology is between a posteriori and a priori knowledge.[38][f] A posteriori knowledge is knowledge of empirical facts based on sensory experience, like "seeing that the sun is shining" and "smelling that a piece of meat has gone bad".[40] This type of knowledge is associated with the empirical sciences and everyday affairs. A priori knowledge, by contrast, pertains to non-empirical facts and does not depend on evidence from sensory experience, like knowing that 2 + 2 = 4. It belongs to fields such as mathematics and logic.[41] The distinction between a posteriori and a priori knowledge is central to the debate between empiricists and rationalists regarding whether all knowledge depends on sensory experience.[42]
A closely related contrast is between analytic and synthetic truths. A sentence is analytically true if its truth depends only on the meanings of the words it uses. For instance, the sentence "all bachelors are unmarried" is analytically true because the word "bachelor" already includes the meaning "unmarried". A sentence is synthetically true if its truth depends on additional facts. For example, the sentence "snow is white" is synthetically true because its truth depends on the color of snow in addition to the meanings of the words snow and white. A priori knowledge is primarily associated with analytic sentences, whereas a posteriori knowledge is primarily associated with synthetic sentences. However, it is controversial whether this is true for all cases. Some philosophers, such as Willard Van Orman Quine, reject the distinction, saying that there are no analytic truths.[43]
The analysis of knowledge is the attempt to identify the essential components or conditions of all and only propositional knowledge states. According to the so-called traditional analysis,[g] knowledge has three components: it is a belief that is justified and true.[45] In the second half of the 20th century, this view was challenged by a series of thought experiments aiming to show that some justified true beliefs do not amount to knowledge.[46] In one of them, a person is unaware of all the fake barns in their area. By coincidence, they stop in front of the only real barn and form a justified true belief that it is a real barn.[47] Many epistemologists agree that this is not knowledge because the justification is not directly relevant to the truth.[48] More specifically, this and similar counterexamples involve some form of epistemic luck, that is, a cognitive success that results from fortuitous circumstances rather than competence.[49]
Following these thought experiments, philosophers proposed various alternative definitions of knowledge by modifying or expanding the traditional analysis.[50] According to one view, the known fact has to cause the belief in the right way.[51] Another theory states that the belief is the product of a reliable belief formation process.[52] Further approaches require that the person would not have the belief if it was false,[53] that the belief is not inferred from a falsehood,[54] that the justification cannot be undermined,[55] or that the belief is infallible.[56] There is no consensus on which of the proposed modifications and reconceptualizations is correct.[57] Some philosophers, such as Timothy Williamson, reject the basic assumption underlying the analysis of knowledge by arguing that propositional knowledge is a unique state that cannot be dissected into simpler components.[58]
The value of knowledge is the worth it holds by expanding understanding and guiding action. Knowledge can have instrumental value by helping a person achieve their goals.[59] For example, knowledge of a disease helps a doctor cure their patient.[60] The usefulness of a known fact depends on the circumstances. Knowledge of some facts may have little to no use, like memorizing random phone numbers from an outdated phone book.[61] Being able to assess the value of knowledge matters in choosing what information to acquire and share. It affects decisions like which subjects to teach at school and how to allocate funds to research projects.[62]
Epistemologists are particularly interested in whether knowledge is more valuable than a mere true opinion.[63] Knowledge and true opinion often have a similar usefulness since both accurately represent reality. For example, if a person wants to go to Larissa, a true opinion about the directions can guide them as effectively as knowledge.[64] Considering this problem, Plato proposed that knowledge is better because it is more stable.[65] Another suggestion focuses on practical reasoning, arguing that people put more trust in knowledge than in mere true opinions when drawing conclusions and deciding what to do.[66] A different response says that knowledge has intrinsic value in addition to instrumental value. This view asserts that knowledge is always valuable, whereas true opinion is only valuable in circumstances where it is useful.[67]
Beliefs are mental states about what is the case, like believing that snow is white or that God exists.[68] In epistemology, they are often understood as subjective attitudes that affirm or deny a proposition, which can be expressed in a declarative sentence. For instance, to believe that snow is white is to affirm the proposition "snow is white". According to this view, beliefs are representations of what the universe is like. They are stored in memory and retrieved when actively thinking about reality or deciding how to act.[69] A different view understands beliefs as behavioral patterns or dispositions to act rather than as representational items stored in the mind. According to this perspective, to believe that there is mineral water in the fridge is nothing more than a group of dispositions related to mineral water and the fridge. Examples are the dispositions to answer questions about the presence of mineral water affirmatively and to go to the fridge when thirsty.[70] Some theorists deny the existence of beliefs, saying that this concept borrowed from folk psychology oversimplifies much more complex psychological or neurological processes.[71] Beliefs are central to various epistemological debates, which cover their status as a component of propositional knowledge, the question of whether people have control over and responsibility for their beliefs, and the issue of whether beliefs have degrees, called credences.[72]
As propositional attitudes, beliefs are true or false depending on whether they affirm a true or a false proposition.[73] According to the correspondence theory of truth, to be true means to stand in the right relation to the world by accurately describing what it is like. This means that truth is objective: a belief is true if it corresponds to a fact.[74] The coherence theory of truth says that a belief is true if it belongs to a coherent system of beliefs. A result of this view is that truth is relative since it depends on other beliefs.[75] Further theories of truth include pragmatist, semantic, pluralist, and deflationary theories.[76] Truth plays a central role in epistemology as a goal of cognitive processes and an attribute of propositional knowledge.[77]
In epistemology, justification is a property of beliefs that meet certain norms about what a person should believe.[78] According to a common view, this means that the person has sufficient reasons for holding this belief because they have information that supports it.[78] Another view states that a belief is justified if it is formed by a reliable belief formation process, such as perception.[79] The terms reasonable, warranted, and supported are sometimes used as synonyms of the word justified.[80] Justification distinguishes well-founded beliefs from superstition and lucky guesses.[81] However, it does not guarantee truth. For example, a person with strong but misleading evidence may form a justified belief that is false.[82]
Epistemologists often identify justification as a key component of knowledge.[83] Usually, they are not only interested in whether a person has a sufficient reason to hold a belief, known as propositional justification, but also in whether the person holds the belief because of or based on[h] this reason, known as doxastic justification. For example, if a person has sufficient reason to believe that a neighborhood is dangerous but forms this belief based on superstition, then they have propositional justification but lack doxastic justification.[85]
Sources of justification are ways or cognitive capacities through which people acquire justification. Often-discussed sources include perception, introspection, memory, reason, and testimony, but there is no universal agreement about the extent to which they all provide valid justification.[86] Perception relies on sensory organs to gain empirical information. Distinct forms of perception correspond to different physical stimuli, such as visual, auditory, haptic, olfactory, and gustatory perception.[87] Perception is not merely the reception of sense impressions but an active process that selects, organizes, and interprets sensory signals.[88] Introspection is a closely related process focused on internal mental states rather than external physical objects. For example, seeing a bus at a bus station belongs to perception while feeling tired belongs to introspection.[89]
Rationalists understand reason as a source of justification for non-empirical facts, explaining how people can know about mathematical, logical, and conceptual truths. Reason is also responsible for inferential knowledge, in which one or more beliefs serve as premises to support another belief.[90] Memory depends on information provided by other sources, which it retains and recalls, like remembering a phone number perceived earlier.[91] Justification by testimony relies on information one person communicates to another person. This can happen by talking to each other but can also occur in other forms, like a letter, a newspaper, and a blog.[92]
Rationality is closely related to justification, and the terms rational belief and justified belief are sometimes used interchangeably. However, rationality has a wider scope that encompasses both a theoretical side, covering beliefs, and a practical side, covering decisions, intentions, and actions.[93] There are different conceptions about what it means for something to be rational. According to one view, a mental state is rational if it is based on or responsive to good reasons. Another view emphasizes the role of coherence, stating that rationality requires that the different mental states of a person are consistent and support each other.[94] A slightly different approach holds that rationality is about achieving certain goals. Two goals of theoretical rationality are accuracy and comprehensiveness, meaning that a person has as few false beliefs and as many true beliefs as possible.[95]
Epistemologists rely on the concept of epistemic norms as criteria to assess the cognitive quality of beliefs, like their justification and rationality. They distinguish between deontic norms, which prescribe what people should believe, and axiological norms, which identify the goals and values of beliefs.[96] Epistemic norms are closely linked to intellectual or epistemic virtues, which are character traits like open-mindedness and conscientiousness. Epistemic virtues help individuals form true beliefs and acquire knowledge. They contrast with epistemic vices and act as foundational concepts of virtue epistemology.[97][i]
Epistemologists understand evidence for a belief as information that favors or supports it. They conceptualize evidence primarily in terms of mental states, such as sensory impressions or other known propositions. But in a wider sense, it can also include physical objects, like bloodstains examined by forensic analysts or financial records studied by investigative journalists.[99] Evidence is often understood in terms of probability: evidence for a belief makes it more likely that the belief is true.[100] A defeater is evidence against a belief or evidence that undermines another piece of evidence. For instance, witness testimony linking a suspect to a crime is evidence of their guilt, while an alibi is a defeater.[101] Evidentialists analyze justification in terms of evidence by asserting that for a belief to be justified, it needs to rest on adequate evidence.[102]
The presence of evidence usually affects doubt and certainty, which are subjective attitudes toward propositions that differ regarding their level of confidence. Doubt involves questioning the validity or truth of a proposition. Certainty, by contrast, is a strong affirmative conviction, indicating an absence of doubt about the proposition's truth. Doubt and certainty are central to ancient Greek skepticism and its goal of establishing that no belief is immune to doubt. They are also crucial in attempts to find a secure foundation of all knowledge, such as René Descartes' foundationalist epistemology.[103]
While propositional knowledge is the main topic in epistemology, some theorists focus on understanding instead. Understanding is a more holistic notion that involves a wider grasp of a subject. To understand something, a person requires awareness of how different things are connected and why they are the way they are. For example, knowledge of isolated facts memorized from a textbook does not amount to understanding. According to one view, understanding is a unique epistemic good that, unlike propositional knowledge, is always intrinsically valuable.[104] Wisdom is similar in this regard and is sometimes considered the highest epistemic good. It encompasses a reflective understanding with practical applications, helping people grasp and evaluate complex situations and lead a good life.[105]
In epistemology, knowledge ascription is the act of attributing knowledge to someone, expressed in sentences like "Sarah knows that it will rain today".[106] According to invariantism, knowledge ascriptions have fixed standards across different contexts. Contextualists, by contrast, argue that knowledge ascriptions are context-dependent. From this perspective, Sarah may know about the weather in the context of an everyday conversation even though she is not sufficiently informed to know it in the context of a rigorous meteorological debate.[107] Contrastivism, another view, argues that knowledge ascriptions are comparative, meaning that to know something involves distinguishing it from relevant alternatives. For example, if a person spots a bird in the garden, they may know that it is a sparrow rather than an eagle, but they may not know that it is a sparrow rather than an indistinguishable sparrow hologram.[108]
Philosophical skepticism questions the human ability to attain knowledge by challenging the foundations upon which knowledge claims rest. Some skeptics limit their criticism to specific domains of knowledge. For example, religious skeptics say that it is impossible to know about the existence of deities or the truth of other religious doctrines. Similarly, moral skeptics challenge the existence of moral knowledge and metaphysical skeptics say that humans cannot know ultimate reality.[109] External world skepticism questions knowledge of external facts,[110] whereas skepticism about other minds doubts knowledge of the mental states of others.[111]
Global skepticism is the broadest form of skepticism, asserting that there is no knowledge in any domain.[112] In ancient philosophy, this view was embraced by academic skeptics, whereas Pyrrhonian skeptics recommended the suspension of belief to attain tranquility.[113] Few epistemologists have explicitly defended global skepticism. The influence of this position stems from attempts by other philosophers to show that their theory overcomes the challenge of skepticism. For example, René Descartes used methodological doubt to find facts that cannot be doubted.[114]
One consideration in favor of global skepticism is the dream argument. It starts from the observation that, while people are dreaming, they are usually unaware of this. This inability to distinguish between dream and regular experience is used to argue that there is no certain knowledge since a person can never be sure that they are not dreaming.[115][j] Some critics assert that global skepticism is self-refuting because denying the existence of knowledge is itself a knowledge claim. Another objection says that the abstract reasoning leading to skepticism is not convincing enough to overrule common sense.[117]
Fallibilism is another response to skepticism.[118] Fallibilists agree with skeptics that absolute certainty is impossible. They reject the assumption that knowledge requires absolute certainty, leading them to the conclusion that fallible knowledge exists.[119] They emphasize the need to keep an open and inquisitive mind, acknowledging that doubt can never be fully excluded, even for well-established knowledge claims like thoroughly tested scientific theories.[120]
Epistemic relativism is related to skepticism but differs in that it does not question the existence of knowledge in general. Instead, epistemic relativists only reject the notion of universal epistemic standards or absolute principles that apply equally to everyone. This means that what a person knows depends on subjective criteria or social conventions used to assess epistemic status.[121]
The debate between empiricism and rationalism centers on the origins of human knowledge. Empiricism emphasizes that sense experience is the primary source of all knowledge. Some empiricists illustrate this view by describing the mind as a blank slate that only develops ideas about the external world through the sense data received from the sensory organs. According to them, the mind can attain various additional insights by comparing impressions, combining them, generalizing to form more abstract ideas, and deducing new conclusions from them. Empiricists say that all these mental operations depend on sensory material and do not function on their own.[123]
Even though rationalists usually accept sense experience as one source of knowledge,[k] they argue that certain forms of knowledge are directly accessed through reason without sense experience,[125] like knowledge of mathematical and logical truths.[126] Some forms of rationalism state that the mind possesses inborn ideas, accessible without sensory assistance. Others assert that there is an additional cognitive faculty, sometimes called rational intuition, through which people acquire nonempirical knowledge.[127] Some rationalists limit their discussion to the origin of concepts, saying that the mind relies on inborn categories to understand the world and organize experience.[125]
Foundationalists and coherentists disagree about the structure of knowledge.[129][l] Foundationalism distinguishes between basic and non-basic beliefs. A belief is basic if it is justified directly, meaning that its validity does not depend on the support of other beliefs.[m] A belief is non-basic if it is justified by another belief.[133] For example, the belief that it rained last night is a non-basic belief if it is inferred from the observation that the street is wet.[134] According to foundationalism, basic beliefs are the foundation on which all other knowledge is built while non-basic beliefs act as the superstructure resting on this foundation.[133]
Coherentists reject the distinction between basic and non-basic beliefs, saying that the justification of any belief depends on other beliefs. They assert that a belief must align with other beliefs to amount to knowledge. This occurs when beliefs are consistent and support each other. According to coherentism, justification is a holistic aspect determined by the whole system of beliefs, which resembles an interconnected web.[135]
Foundherentism is an intermediary position combining elements of both foundationalism and coherentism. It accepts the distinction between basic and non-basic beliefs while asserting that the justification of non-basic beliefs depends on coherence with other beliefs.[136]
Infinitism presents a less common alternative perspective on the structure of knowledge. It agrees with coherentism that there are no basic beliefs while rejecting the view that beliefs can support each other in a circular manner. Instead, it argues that beliefs form infinite justification chains, in which each link of the chain supports the belief following it and is supported by the belief preceding it.[137]
The disagreement between internalism and externalism is about the sources of justification.[139][n] Internalists say that justification depends only on factors within the individual, such as perceptual experience, memories, and other beliefs. This view emphasizes the importance of the cognitive perspective of the individual in the form of their mental states. It is commonly associated with the idea that the relevant factors are accessible, meaning that the individual can become aware of their reasons for holding a justified belief through introspection and reflection.[141]
Evidentialism is an influential internalist view, asserting that justification depends on the possession of evidence.[142] In this context, evidence for a belief is any information in the individual's mind that supports the belief. For example, the perceptual experience of rain is evidence for the belief that it is raining. Evidentialists suggest various other forms of evidence, including memories, intuitions, and other beliefs.[143] According to evidentialism, a belief is justified if the individual's evidence supports it and they hold the belief on the basis of this evidence.[144]
Externalism, by contrast, asserts that at least some relevant factors of knowledge are external to the individual.[141] For instance, when considering the belief that a cup of coffee stands on the table, externalists are not primarily interested in the subjective perceptual experience that led to this belief. Instead, they focus on objective factors, like the quality of the person's eyesight, their ability to differentiate coffee from other beverages, and the circumstances under which they observed the cup.[145] A key motivation of many forms of externalism is that justification makes it more likely that a belief is true. Based on this view, justification is external to the extent that some factors contributing to this likelihood are not part of the believer's cognitive perspective.[141]
Reliabilism is an externalist theory asserting that a reliable connection between belief and truth is required for justification.[146] Some reliabilists explain this in terms of reliable processes. According to this view, a belief is justified if it is produced by a reliable process, like perception. A belief-formation process is deemed reliable if most of the beliefs it generates are true. An alternative view focuses on beliefs rather than belief-formation processes, saying that a belief is justified if it is a reliable indicator of the fact it presents. This means that the belief tracks the fact: the person believes it because it is true but would not believe it otherwise.[147]
Virtue epistemology, another type of externalism, asserts that a belief is justified if it manifests intellectual virtues. Intellectual virtues are capacities or traits that perform cognitive functions and help people form true beliefs. Suggested examples include faculties, like vision, memory, and introspection, and character traits, like open-mindedness.[148]
Some branches of epistemology are characterized by their research methods. Formal epistemology employs formal tools from logic and mathematics to investigate the nature of knowledge.[149][o] For example, Bayesian epistemology represents beliefs as degrees of certainty and uses probability theory to formally define norms of rationality governing how certain people should be.[151] Experimental epistemologists base their research on empirical evidence about common knowledge practices.[152] Applied epistemology focuses on the practical application of epistemological principles to diverse real-world problems, like the reliability of knowledge claims on the internet, how to assess sexual assault allegations, and how racism may lead to epistemic injustice.[153][p] Metaepistemologists study the nature, goals, and research methods of epistemology. As a metatheory, metaepistemology does not directly advocate for specific epistemological theories but examines their fundamental concepts and background assumptions.[155][q]
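As a rough illustration of the kind of norm Bayesian epistemology studies, the following sketch applies Bayes' theorem to update a degree of certainty (credence) in a hypothesis after new evidence; the hypothesis, likelihoods, and numbers are purely illustrative.

```python
# Prior credence that hypothesis H is true, and likelihoods of observing
# evidence E if H is true versus if H is false (all values are made up).
prior_h = 0.30
p_e_given_h = 0.80
p_e_given_not_h = 0.20

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
print(round(posterior_h, 3))   # 0.632: rationality requires raising the credence in H
```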
Particularism and generalism disagree about the right method of conducting epistemological research. Particularists start their inquiry by looking at specific cases. For example, to find a definition of knowledge, they rely on their intuitions about concrete instances of knowledge and particular thought experiments. They use these observations as methodological constraints that any theory of general principles needs to follow. Generalists proceed in the opposite direction. They prioritize general epistemic principles, saying that it is not possible to accurately identify and describe specific cases without a grasp of these principles.[157] Other methods in contemporary epistemology aim to extract philosophical insights from ordinary language or look at the role of knowledge in making assertions and guiding actions.[158]
Phenomenological epistemology emphasizes the importance of first-person experience. It distinguishes between the natural and the phenomenological attitudes. The natural attitude focuses on objects belonging to common sense and natural science. The phenomenological attitude focuses on the experience of objects and aims to provide a presuppositionless description of how objects appear to the observer.[159]
Naturalized epistemology is closely associated with the natural sciences, relying on their methods and theories to examine knowledge. Arguing that epistemological theories should rest on empirical observation, it is critical of a priori reasoning.[160] Evolutionary epistemology is a naturalistic approach that understands cognition as a product of evolution, examining knowledge and the cognitive faculties responsible for it through the lens of natural selection.[161] Social epistemology focuses on the social dimension of knowledge. While traditional epistemology is mainly interested in the knowledge possessed by individuals, social epistemology covers knowledge acquisition, transmission, and evaluation within groups, with specific emphasis on how people rely on each other when seeking knowledge.[162]
Pragmatist epistemology is a form of fallibilism that emphasizes the close relation between knowing and acting. It sees the pursuit of knowledge as an ongoing process guided by common sense and experience while always open to revision. This approach reinterprets some core epistemological notions, for example, by conceptualizing beliefs as habits that shape actions rather than representations that mirror the world.[163] Motivated by pragmatic considerations, epistemic conservatism is a view about belief revision. It prioritizes pre-existing beliefs, asserting that a person should only change their beliefs if they have a good reason to. One argument for epistemic conservatism rests on the recognition that the cognitive resources of humans are limited, making it impractical to constantly reexamine every belief.[164]
Postmodern epistemology critiques the conditions of knowledge in advanced societies. This concerns in particular the metanarrative of a constant progress of scientific knowledge leading to a universal and foundational understanding of reality.[166] Similarly, feminist epistemology adopts a critical perspective, focusing on the effect of gender on knowledge. Among other topics, it explores how preconceptions about gender influence who has access to knowledge, how knowledge is produced, and which types of knowledge are valued in society.[167] Some postmodern and feminist thinkers adopt a constructivist approach, arguing that the way people view the world is not a simple reflection of external reality but a social construction. This view emphasizes the creative role of interpretation while undermining objectivity since social constructions can vary across societies.[168] Another critical approach, found in decolonial scholarship, opposes the global influence of Western knowledge systems. It seeks to undermine Western hegemony and decolonize knowledge.[169]
The decolonial outlook is also present in African epistemology. Grounded in African ontology, it emphasizes the interconnectedness of reality as a continuum between knowing subject and known object. It understands knowledge as a holistic phenomenon that includes sensory, emotional, intuitive, and rational aspects, extending beyond the limits of the physical domain.[170]
Another epistemological tradition is found in ancient Indian philosophy. Its diverse schools of thought examine different sources of knowledge, called pramāṇa. Perception, inference, and testimony are sources discussed by most schools. Other sources only considered by some schools are non-perception, which leads to knowledge of absences, and presumption.[171][r] Buddhist epistemology focuses on immediate experience, understood as the presentation of unique particulars without secondary cognitive processes, like thought and desire.[173] Nyāya epistemology is a causal theory of knowledge, understanding sources of knowledge as reliable processes that cause episodes of truthful awareness. It sees perception as the primary source of knowledge and emphasizes its importance for successful action.[174] Mīmāṃsā epistemology considers the holy scriptures known as the Vedas as a key source of knowledge, addressing the problem of their right interpretation.[175] Jain epistemology states that reality is many-sided, meaning that no single viewpoint can capture the entirety of truth.[176]
Historical epistemology examines how the understanding of knowledge and related concepts has changed over time. It asks whether the main issues in epistemology are perennial and to what extent past epistemological theories are relevant to contemporary debates. It is particularly concerned with scientific knowledge and practices associated with it.[177] It contrasts with the history of epistemology, which presents, reconstructs, and evaluates epistemological theories of philosophers in the past.[178][s]
Some branches of epistemology focus on knowledge within specific academic disciplines. The epistemology of science examines how scientific knowledge is generated and what problems arise in the process of validating, justifying, and interpreting scientific claims. A key issue concerns the problem of how individual observations can support universal scientific laws. Other topics include the nature of scientific evidence and the aims of science.[180] The epistemology of mathematics studies the origin of mathematical knowledge. In exploring how mathematical theories are justified, it investigates the role of proofs and whether there are empirical sources of mathematical knowledge.[181]
Distinct areas of epistemology are dedicated to specific sources of knowledge. Examples are the epistemology of perception,[182] the epistemology of memory,[183] and the epistemology of testimony.[184] In the epistemology of perception, direct and indirect realists debate the connection between the perceiver and the perceived object. Direct realists say that this connection is direct, meaning that there is no difference between the object present in perceptual experience and the physical object causing this experience. According to indirect realism, the connection is indirect, involving mental entities, like ideas or sense data, that mediate between the perceiver and the external world. The contrast between direct and indirect realism is important for explaining the nature of illusions.[185]
Epistemological issues are found in most areas of philosophy. The epistemology of logic examines how people know that an argument is valid. For example, it explores how logicians justify that modus ponens is a correct rule of inference or that all contradictions are false.[186] Epistemologists of metaphysics investigate whether knowledge of the basic structure of reality is possible and what sources this knowledge could have.[187] Knowledge of moral statements, like the claim that lying is wrong, belongs to the epistemology of ethics. It studies the role of ethical intuitions, coherence among moral beliefs, and the problem of moral disagreement.[188] The ethics of belief is a closely related field exploring the intersection of epistemology and ethics. It examines the norms governing belief formation and asks whether violating them is morally wrong.[189] Religious epistemology studies the role of knowledge and justification for religious doctrines and practices. It evaluates the reliability of evidence from religious experience and holy scriptures while also asking whether the norms of reason should be applied to religious faith.[190]
Epistemologists of language explore the nature of linguistic knowledge. One of their topics is the role of tacit knowledge, for example, when native speakers have mastered the rules of grammar but are unable to explicitly articulate them.[191] Epistemologists of modality examine knowledge about what is possible and necessary.[192] Epistemic problems that arise when two people have diverging opinions on a topic are covered by the epistemology of disagreement.[193] Epistemologists of ignorance are interested in epistemic faults and gaps in knowledge.[194]
Epistemology and psychology were not defined as distinct fields until the 19th century; earlier investigations about knowledge often do not fit neatly into today's academic categories.[195] Both contemporary disciplines study beliefs and the mental processes responsible for their formation and change. One key contrast is that psychology describes what beliefs people have and how they acquire them, thereby explaining why someone has a specific belief. The focus of epistemology is on evaluating beliefs, leading to a judgment about whether a belief is justified and rational in a particular case.[196] Epistemology also shares a close connection with cognitive science, which understands mental events as processes that transform information.[197] Artificial intelligence relies on the insights of epistemology and cognitive science to implement concrete solutions to problems associated with knowledge representation and automatic reasoning.[198]
Logic is the study of correct reasoning. For epistemology, it is relevant to inferential knowledge, which arises when a person reasons from one known fact to another.[199] This is the case, for example, when inferring that it rained based on the observation that the streets are wet.[200] Whether an inferential belief amounts to knowledge depends on the form of reasoning used, in particular, that the process does not violate the laws of logic.[201] Another overlap between the two fields is found in the epistemic approach to fallacies.[202] Fallacies are faulty arguments based on incorrect reasoning.[203] The epistemic approach to fallacies explains why they are faulty, stating that arguments aim to expand knowledge. According to this view, an argument is a fallacy if it fails to do so.[202] A further intersection is found in epistemic logic, which uses formal logical devices to study epistemological concepts like knowledge and belief.[204]
Both decision theory and epistemology are interested in the foundations of rational thought and the role of beliefs. Unlike many approaches in epistemology, the main focus of decision theory lies less in the theoretical and more in the practical side, exploring how beliefs are translated into action.[205] Decision theorists examine the reasoning involved in decision-making and the standards of good decisions,[206] identifying beliefs as a central aspect of decision-making. One of their innovations is to distinguish between weaker and stronger beliefs, which helps them consider the effects of uncertainty on decisions.[207]
Epistemology and education have a shared interest in knowledge, with one difference being that education focuses on the transmission of knowledge, exploring the roles of both learner and teacher.[208] Learning theory examines how people acquire knowledge.[209] Behavioral learning theories explain the process in terms of behavior changes, for example, by associating a certain response with a particular stimulus.[210] Cognitive learning theories study how the cognitive processes that affect knowledge acquisition transform information.[211] Pedagogy looks at the transmission of knowledge from the teacher's perspective, exploring the teaching methods they may employ.[212] In teacher-centered methods, the teacher serves as the main authority delivering knowledge and guiding the learning process. In student-centered methods, the teacher primarily supports and facilitates the learning process, allowing students to take a more active role.[213] The beliefs students have about knowledge, called personal epistemology, influence their intellectual development and learning success.[214]
The anthropology of knowledge examines how knowledge is acquired, stored, retrieved, and communicated. It studies the social and cultural circumstances that affect how knowledge is reproduced and changes, covering the role of institutions like university departments and scientific journals as well as face-to-face discussions and online communications. This field has a broad concept of knowledge, encompassing various forms of understanding and culture, including practical skills. Unlike epistemology, it is not interested in whether a belief is true or justified but in how understanding is reproduced in society.[215] A closely related field, the sociology of knowledge, has a similar conception of knowledge. It explores how physical, demographic, economic, and sociocultural factors impact knowledge. This field examines in what sociohistorical contexts knowledge emerges and the effects it has on people, for example, how socioeconomic conditions are related to the dominant ideology in a society.[216]
Early reflections on the nature and sources of knowledge are found in ancient history. In ancient Greek philosophy, Plato (427–347 BCE) studied what knowledge is, examining how it differs from true opinion by being based on good reasons.[217] He proposed that learning is a form of recollection in which the soul remembers what it already knew but had forgotten.[218][t] Plato's student Aristotle (384–322 BCE) was particularly interested in scientific knowledge, exploring the role of sensory experience and the process of making inferences from general principles.[219] Aristotle's ideas influenced the Hellenistic schools of philosophy, which began to arise in the 4th century BCE and included Epicureanism, Stoicism, and skepticism. The Epicureans had an empiricist outlook, stating that sensations are always accurate and act as the supreme standard of judgments.[220] The Stoics defended a similar position but confined their trust to lucid and specific sensations, which they regarded as true.[221] The skeptics questioned that knowledge is possible, recommending instead suspension of judgment to attain a state of tranquility.[222] Emerging in the 3rd century CE and inspired by Plato's philosophy,[223] Neoplatonism distinguished knowledge from true belief, arguing that knowledge is infallible and limited to the realm of immaterial forms.[224]
The Upanishads, philosophical scriptures composed in ancient India between 700 and 300 BCE, examined how people acquire knowledge, including the role of introspection, comparison, and deduction.[226] In the 6th century BCE, the school of Ajñana developed a radical skepticism questioning the possibility and usefulness of knowledge.[227] By contrast, the school of Nyaya, which emerged in the 2nd century BCE, asserted that knowledge is possible. It provided a systematic treatment of how people acquire knowledge, distinguishing between valid and invalid sources.[228] When Buddhist philosophers became interested in epistemology, they relied on concepts developed in Nyaya and other traditions.[229] Buddhist philosopher Dharmakirti (6th or 7th century CE)[230] analyzed the process of knowing as a series of causally related events.[225]
Ancient Chinese philosophers understood knowledge as an interconnected phenomenon fundamentally linked to ethical behavior and social involvement. Many saw wisdom as the goal of attaining knowledge.[231] Mozi (470–391 BCE) proposed a pragmatic approach to knowledge, using historical records, sensory evidence, and practical outcomes to validate beliefs.[232] Mencius (c. 372–289 BCE) explored analogical reasoning as a source of knowledge and employed this method to criticize Mozi.[233] Xunzi (c. 310–220 BCE) aimed to combine empirical observation and rational inquiry. He emphasized the importance of clarity and standards of reasoning without excluding the role of feeling and emotion.[234]
The relation between reason and faith was a central topic in the medieval period.[235] In Arabic–Persian philosophy, al-Farabi (c. 870–950) and Averroes (1126–1198) discussed how philosophy and theology interact, debating which one is a better vehicle to truth.[236] Al-Ghazali (c. 1056–1111) criticized many core teachings of previous Islamic philosophers, saying that they relied on unproven assumptions that did not amount to knowledge.[237] Similarly, in Western philosophy, Anselm of Canterbury (1033–1109) proposed that theological teaching and philosophical inquiry are in harmony and complement each other.[238] Formulating a more critical approach, Peter Abelard (1079–1142) argued against unquestioned theological authorities and said that all things are open to rational doubt.[239] Influenced by Aristotle, Thomas Aquinas (1225–1274) developed an empiricist theory, stating that "nothing is in the intellect unless it first appeared in the senses".[240] According to an early form of direct realism proposed by William of Ockham (c. 1285–1349), perception of mind-independent objects happens directly without intermediaries.[241] Meanwhile, in 14th-century India, Gaṅgeśa developed a reliabilist theory of knowledge and considered the problems of testimony and fallacies.[242] In China, Wang Yangming (1472–1529) explored the unity of knowledge and action, holding that moral knowledge is inborn and can be attained by overcoming self-interest.[243]
The course of modern philosophy was shaped by René Descartes (1596–1650), who stated that philosophy must begin from a position of indubitable knowledge of first principles. Inspired by skepticism, he aimed to find absolutely certain knowledge by encountering truths that cannot be doubted. He thought that this is the case for the assertion "I think, therefore I am", from which he constructed the rest of his philosophical system.[245] Descartes, together with Baruch Spinoza (1632–1677) and Gottfried Wilhelm Leibniz (1646–1716), belonged to the school of rationalism, which asserts that the mind possesses innate ideas independent of experience.[246] John Locke (1632–1704) rejected this view in favor of an empiricism according to which the mind is a blank slate. This means that all ideas depend on experience, either as "ideas of sense", which are directly presented through the senses, or as "ideas of reflection", which the mind creates by reflecting on its own activities.[247] David Hume (1711–1776) used this idea to explore the limits of what people can know. He said that knowledge of facts is never certain, adding that knowledge of relations between ideas, like mathematical truths, can be certain but contains no information about the world.[248] Immanuel Kant (1724–1804) sought a middle ground between rationalism and empiricism by identifying a type of knowledge overlooked by Hume. For Kant, this knowledge pertains to principles that underlie and structure all experience, such as spatial and temporal relations and fundamental categories of understanding.[249]
In the 19th century and influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) rejected empiricism by arguing that sensory impressions alone cannot amount to knowledge since all knowledge is actively structured by the knowing subject.[250] John Stuart Mill (1806–1873), by contrast, defended a wide-sweeping form of empiricism and explained knowledge of general truths through inductive reasoning.[251] Charles Peirce (1839–1914) thought that all knowledge is fallible, emphasizing that knowledge seekers should remain open to revising their beliefs in light of new evidence. He used this idea to argue against Cartesian foundationalism, which seeks absolutely certain truths.[252]
In the 20th century, fallibilism was further explored by J. L. Austin (1911–1960) and Karl Popper (1902–1994).[253] In continental philosophy, Edmund Husserl (1859–1938) applied the skeptical idea of suspending judgment to the study of experience. By not judging whether an experience is accurate, he tried to describe its internal structure instead.[254] Influenced by earlier empiricists, logical positivists, like A. J. Ayer (1910–1989), said that all knowledge is either empirical or analytic, rejecting any form of metaphysical knowledge.[255] Bertrand Russell (1872–1970) developed an empiricist sense-datum theory, distinguishing between direct knowledge by acquaintance of sense data and indirect knowledge by description, which is inferred from knowledge by acquaintance.[256] Common sense had a central place in G. E. Moore's (1873–1958) epistemology. He used trivial observations, like the fact that he has two hands, to argue against abstract philosophical theories that deviate from common sense.[257] Ordinary language philosophy, as practiced by the late Ludwig Wittgenstein (1889–1951), is a similar approach that tries to extract epistemological insights from how ordinary language is used.[258]
Edmund Gettier (1927–2021) conceived counterexamples against the idea that knowledge is justified true belief. These counterexamples prompted many philosophers to suggest alternative definitions of knowledge.[259] Developed by philosophers such as Alvin Goldman (1938–2024), reliabilism emerged as one of the alternatives, asserting that knowledge requires reliable sources and shifting the focus away from justification.[260] Virtue epistemologists, such as Ernest Sosa (1940–present) and Linda Zagzebski (1946–present), analyse belief formation in terms of the intellectual virtues or cognitive competencies involved in the process.[261] Naturalized epistemology, as conceived by Willard Van Orman Quine (1908–2000), employs concepts and ideas from the natural sciences to formulate its theories.[262] Other developments in late 20th-century epistemology were the emergence of social, feminist, and historical epistemology.[263]
Source: https://en.wikipedia.org/wiki/Epistemology
Dynamic program analysis is the act of analyzing software that involves executing a program, as opposed to static program analysis, which does not execute it.
Analysis can focus on different aspects of the software, including but not limited to behavior, test coverage, performance, and security.
To be effective, the target program must be executed with sufficient test inputs[1] to address the ranges of possible inputs and outputs. Software testing measures, such as code coverage, and tools such as mutation testing, are used to identify where testing is inadequate.
Functional testing includes relatively common programming techniques such as unit testing, integration testing, and system testing.[2]
Computing the code coverage of a test identifies code that is not tested, that is, not covered by any test.
Although this analysis identifies code that is not tested, it does not determine whether tested code is adequately tested: code can be executed even if the tests do not actually verify correct behavior.
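As a rough illustration of the idea (not how production coverage tools such as coverage.py are implemented), the following Python sketch records which source lines of a function execute under a single test input; the function `absolute` and all names are made up for the example.

```python
import sys

def trace_coverage(func, *args):
    """Run func under a line tracer and return the set of executed line numbers."""
    covered = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            covered.add(frame.f_lineno)
        return tracer          # keep tracing line events in this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return covered

def absolute(x):               # hypothetical function under test
    if x < 0:
        return -x              # this line is only covered when x < 0
    return x

print(trace_coverage(absolute, 5))   # the 'return -x' branch is never exercised
```

A test suite with only positive inputs would leave the negative branch uncovered, yet even covering it says nothing about whether the test asserts the right result.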
Dynamic testing involves executing a program on a set of test cases.
Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part). Gray-box fuzzers use code coverage to guide input generation.
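A minimal black-box fuzzing sketch is shown below; `parse_config` is a hypothetical target, and real fuzzers (such as AFL or libFuzzer) add coverage feedback, corpus management, and smarter mutation strategies.

```python
import random
import string

def fuzz_once(target, max_len=100):
    """Call target with one randomly generated string; report it if the call crashes."""
    data = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))
    try:
        target(data)
    except Exception as exc:        # an unexpected exception counts as a finding here
        return data, exc
    return None

def parse_config(text):             # hypothetical program under test
    key, value = text.split("=", 1) # raises ValueError on malformed input
    return {key: value}

for _ in range(1000):
    result = fuzz_once(parse_config)
    if result:
        print("crashing input:", repr(result[0]), "->", result[1])
        break
```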
Dynamic symbolic execution (also known as DSE or concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using a constraint solver (generally, an SMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing the code coverage of the test suite.[3] DSE can be considered a type of fuzzing ("white-box" fuzzing).
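The sketch below shows the core loop of the idea using the z3 Python bindings (the z3-solver package) as an example SMT solver. The path constraint is written by hand here; a real DSE engine derives it automatically by instrumenting the concrete execution. The target program and seed input are illustrative.

```python
from z3 import Int, Solver, Not, sat

def program(x):
    # Program under test: which branch runs depends on x.
    if x * 3 + 1 > 100:
        return "hard branch"
    return "easy branch"

seed = 0
print(program(seed))                   # concrete run: takes the "easy branch"

# Path constraint the seed satisfied (hand-written for this sketch).
x = Int("x")
path_condition = Not(x * 3 + 1 > 100)

# Negate the branch decision and ask the solver for an input that flips it.
solver = Solver()
solver.add(Not(path_condition))
if solver.check() == sat:
    new_input = solver.model()[x].as_long()
    print(program(new_input))          # new run: takes the "hard branch"
```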
Dynamic data-flow analysis tracks the flow of information from sources to sinks. Forms of dynamic data-flow analysis include dynamic taint analysis and even dynamic symbolic execution.[4][5]
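A toy model of dynamic taint analysis follows: data from an untrusted source is tagged, the tag is propagated through string concatenation, and a sensitive sink checks for it. The `source` and `sink` functions are hypothetical stand-ins for, say, user input and SQL execution; real taint-tracking systems propagate labels through many more operations.

```python
class Tainted(str):
    """String subclass that marks untrusted data and propagates the mark through concatenation."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        return Tainted(str.__add__(str(other), self))

def source():                       # hypothetical untrusted source (e.g. user input)
    return Tainted("'; DROP TABLE users; --")

def sink(query):                    # hypothetical sensitive sink (e.g. a SQL execute call)
    if isinstance(query, Tainted):
        print("ALERT: tainted data reached a sink:", repr(str(query)))
    else:
        print("executing:", query)

user = source()
sink("SELECT * FROM users WHERE name = " + user)   # flagged: taint flowed to the sink
```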
Daikon is an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions.
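A toy sketch of the underlying idea is given below: values observed at a program point over several runs are checked against simple candidate properties, and only properties that held in every observation are reported as likely invariants. The observation format and the two invariant templates are illustrative; Daikon itself instruments real programs and checks a much larger grammar of candidate invariants.

```python
def likely_invariants(observations):
    """Report simple properties that held over every observed execution."""
    names = observations[0].keys()
    invariants = []
    for name in names:
        values = [obs[name] for obs in observations]
        if all(v == values[0] for v in values):
            invariants.append(f"{name} == {values[0]}")
        if all(v >= 0 for v in values):
            invariants.append(f"{name} >= 0")
    return invariants

# Hypothetical values recorded at one program point over several runs.
runs = [{"x": 3, "y": 0}, {"x": 7, "y": 0}, {"x": 1, "y": 0}]
print(likely_invariants(runs))   # ['x >= 0', 'y == 0', 'y >= 0']
```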
Dynamic analysis can be used to detect security problems.
For a given subset of a program’s behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a “slice” and is a faithful representation of the original program within the domain of the specified behavior subset.
Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors.
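As a small illustration (not produced by any particular slicing tool), the slice of the following toy function with respect to the returned variable `x` keeps only the statements that can affect `x`:

```python
# Original program (illustrative):
def original(a, b):
    x = a + 1        # contributes to x
    y = b * 2        # does not affect x
    print(y)
    return x

# Backward slice with respect to the value of x at the return statement:
def slice_on_x(a, b):
    x = a + 1
    return x
```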
Mostperformance analysis toolsuse dynamic program analysis techniques.[citation needed]
Most dynamic analysis involvesinstrumentationor transformation.
Since instrumentation can affect runtime performance, interpretation of test results must account for this to avoid misidentifying a performance problem.
DynInst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. DynInst does not, in general, require source code or recompilation; however, non-stripped executables and executables with debugging symbols are easier to instrument.
Iroh.jsis a runtime code analysis library forJavaScript. It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns and allows the interception and manipulation of the program's execution behavior.
|
https://en.wikipedia.org/wiki/Dynamic_program_analysis
|
Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (typically a genetic algorithm in evolutionary computation) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning).[2]Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions (e.g. behavior modeling,[3]classification,[4][5]data mining,[5][6][7]regression,[8]function approximation,[9]or game strategy). This approach allows complex solution spaces to be broken up into smaller, simpler parts, as in the reinforcement learning methods studied in artificial intelligence research.
The founding concepts behind learning classifier systems came from attempts to modelcomplex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e.artificial intelligence).
The architecture and components of a given learning classifier system can be quite variable. It is useful to think of an LCS as a machine consisting of several interacting components. Components may be added or removed, or existing components modified/exchanged to suit the demands of a given problem domain (like algorithmic building blocks) or to make the algorithm flexible enough to function in many different problem domains. As a result, the LCS paradigm can be flexibly applied to many problem domains that call formachine learning. The major divisions among LCS implementations are as follows: (1) Michigan-style architecture vs. Pittsburgh-style architecture,[10](2)reinforcement learningvs.supervised learning, (3) incremental learning vs. batch learning, (4)online learningvs.offline learning, (5) strength-based fitness vs. accuracy-based fitness, and (6) complete action mapping vs best action mapping. These divisions are not necessarily mutually exclusive. For example, XCS,[11]the best known and best studied LCS algorithm, is Michigan-style, was designed for reinforcement learning but can also perform supervised learning, applies incremental learning that can be either online or offline, applies accuracy-based fitness, and seeks to generate a complete action mapping.
Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. post-XCS) LCS algorithm. For simplicity let us focus on Michigan-style architecture with supervised learning. See the illustrations on the right laying out the sequential steps involved in this type of generic LCS.
The environment is the source of data upon which an LCS learns. It can be an offline, finitetraining dataset(characteristic of adata mining,classification, or regression problem), or an online sequential stream of live training instances. Each training instance is assumed to include some number offeatures(also referred to asattributes, orindependent variables), and a singleendpointof interest (also referred to as theclass,action,phenotype,prediction, ordependent variable). Part of LCS learning can involvefeature selection, therefore not all of the features in the training data need to be informative. The set of feature values of an instance is commonly referred to as thestate. For simplicity let's assume an example problem domain withBoolean/binaryfeatures and aBoolean/binaryclass. For Michigan-style systems, one instance from the environment is trained on each learning cycle (i.e. incremental learning). Pittsburgh-style systems perform batch learning, where rule sets are evaluated in each iteration over much or all of the training data.
A rule is a context dependent relationship between state values and some prediction. Rules typically take the form of an {IF:THEN} expression, (e.g. {IF 'condition' THEN 'action'},or as a more specific example,{IF 'red' AND 'octagon' THEN 'stop-sign'}). A critical concept in LCS and rule-based machine learning alike, is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Think of a rule as a "local-model" of the solution space.
Rules can be represented in many different ways to handle different data types (e.g. binary, discrete-valued, ordinal, continuous-valued). Given binary data LCS traditionally applies a ternary rule representation (i.e. rules can include either a 0, 1, or '#' for each feature in the data). The 'don't care' symbol (i.e. '#') serves as a wild card within a rule's condition allowing rules, and the system as a whole to generalize relationships between features and the target endpoint to be predicted. Consider the following rule (#1###0 ~ 1) (i.e. condition ~ action). This rule can be interpreted as: IF the second feature = 1 AND the sixth feature = 0 THEN the class prediction = 1. We would say that the second and sixth features were specified in this rule, while the others were generalized. This rule, and the corresponding prediction are only applicable to an instance when the condition of the rule is satisfied by the instance. This is more commonly referred to as matching. In Michigan-style LCS, each rule has its own fitness, as well as a number of other rule-parameters associated with it that can describe the number of copies of that rule that exist (i.e. thenumerosity), the age of the rule, its accuracy, or the accuracy of its reward predictions, and other descriptive or experiential statistics. A rule along with its parameters is often referred to as aclassifier. In Michigan-style systems, classifiers are contained within apopulation[P] that has a user defined maximum number of classifiers. Unlike moststochasticsearch algorithms (e.g.evolutionary algorithms), LCS populations start out empty (i.e. there is no need to randomly initialize a rule population). Classifiers will instead be initially introduced to the population with a covering mechanism.
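Under the running assumption of binary features, matching a ternary condition against a state can be sketched in a few lines (the helper name is illustrative, not taken from any specific LCS implementation):

```python
def matches(condition: str, state: str) -> bool:
    """Return True if a ternary rule condition matches a binary state.

    '#' is the 'don't care' symbol; every specified position must agree.
    """
    return all(c == "#" or c == s for c, s in zip(condition, state))

# The rule (#1###0 ~ 1) from the text:
print(matches("#1###0", "010000"))   # True: 2nd feature is 1, 6th is 0
print(matches("#1###0", "000000"))   # False: 2nd feature is 0
```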
In any LCS, the trained model is a set of rules/classifiers, rather than any single rule/classifier. In Michigan-style LCS, the entire trained (and optionally, compacted) classifier population forms the prediction model.
One of the most critical and often time-consuming elements of an LCS is the matching process. The first step in an LCS learning cycle takes a single training instance from the environment and passes it to [P] where matching takes place. In step two, every rule in [P] is now compared to the training instance to see which rules match (i.e. are contextually relevant to the current instance). In step three, any matching rules are moved to amatch set[M]. A rule matches a training instance if all feature values specified in the rule condition are equivalent to the corresponding feature value in the training instance. For example, assuming the training instance is (001001 ~ 0), these rules would match: (###0## ~ 0), (00###1 ~ 0), (#01001 ~ 1), but these rules would not (1##### ~ 0), (000##1 ~ 0), (#0#1#0 ~ 1). Notice that in matching, the endpoint/action specified by the rule is not taken into consideration. As a result, the match set may contain classifiers that propose conflicting actions. In the fourth step, since we are performing supervised learning, [M] is divided into a correct set [C] and an incorrect set [I]. A matching rule goes into the correct set if it proposes the correct action (based on the known action of the training instance), otherwise it goes into [I]. In reinforcement learning LCS, an action set [A] would be formed here instead, since the correct action is not known.
At this point in the learning cycle, if no classifiers made it into either [M] or [C] (as would be the case when the population starts off empty), the covering mechanism is applied (fifth step). Covering is a form ofonline smart population initialization. Covering randomly generates a rule that matches the current training instance (and in the case of supervised learning, that rule is also generated with the correct action. Assuming the training instance is (001001 ~ 0), covering might generate any of the following rules: (#0#0## ~ 0), (001001 ~ 0), (#010## ~ 0). Covering not only ensures that each learning cycle there is at least one correct, matching rule in [C], but that any rule initialized into the population will match at least one training instance. This prevents LCS from exploring the search space of rules that do not match any training instances.
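A minimal sketch of covering under the same assumptions (binary features, supervised learning) follows; the generalization probability and helper name are illustrative choices.

```python
import random

def cover(state: str, correct_action: str, p_dont_care: float = 0.5):
    """Generate a rule that matches 'state' and proposes the known action.

    Each feature is generalized to '#' with probability p_dont_care;
    the remaining positions copy the instance, so the rule always matches.
    """
    condition = "".join("#" if random.random() < p_dont_care else bit
                        for bit in state)
    return condition, correct_action

# For the instance (001001 ~ 0) this might return e.g. ('#0#0##', '0').
print(cover("001001", "0"))
```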
In the sixth step, the rule parameters of any rule in [M] are updated to reflect the new experience gained from the current training instance. Depending on the LCS algorithm, a number of updates can take place at this step. For supervised learning, we can simply update the accuracy/error of a rule. Rule accuracy/error is different than model accuracy/error, since it is not calculated over the entire training data, but only over all instances that it matched. Rule accuracy is calculated by dividing the number of times the rule was in a correct set [C] by the number of times it was in a match set [M]. Rule accuracy can be thought of as a 'local accuracy'. Rule fitness is also updated here, and is commonly calculated as a function of rule accuracy. The concept of fitness is taken directly from classicgenetic algorithms. Be aware that there are many variations on how LCS updates parameters in order to perform credit assignment and learning.
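The accuracy and fitness update described above might be sketched as follows; the attribute names and the power-law fitness function are illustrative choices rather than the exact update rule of any particular system.

```python
class Classifier:
    def __init__(self, condition: str, action: str):
        self.condition = condition
        self.action = action
        self.match_count = 0      # times the rule entered a match set [M]
        self.correct_count = 0    # times the rule entered a correct set [C]
        self.accuracy = 0.0
        self.fitness = 0.0

    def update(self, was_correct: bool, nu: float = 10.0) -> None:
        self.match_count += 1
        if was_correct:
            self.correct_count += 1
        self.accuracy = self.correct_count / self.match_count
        # One common choice: fitness as a power of accuracy, so that
        # highly accurate rules are strongly favored by the GA.
        self.fitness = self.accuracy ** nu

cl = Classifier("#1###0", "1")
for outcome in [True, True, False, True]:
    cl.update(outcome)
print(cl.accuracy, cl.fitness)        # 0.75 and 0.75**10
```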
In the seventh step, asubsumptionmechanism is typically applied. Subsumption is an explicit generalization mechanism that merges classifiers that cover redundant parts of the problem space. The subsuming classifier effectively absorbs the subsumed classifier (and has its numerosity increased). This can only happen when the subsuming classifier is more general, just as accurate, and covers all of the problem space of the classifier it subsumes.
In the eighth step, LCS adopts a highly elitistgenetic algorithm(GA) which will select two parent classifiers based on fitness (survival of the fittest). Parents are selected from [C] typically usingtournament selection. Some systems have appliedroulette wheel selectionor deterministic selection, and have differently selected parent rules from either [P] - panmictic selection, or from [M]).Crossoverandmutationoperators are now applied to generate two new offspring rules. At this point, both the parent and offspring rules are returned to [P]. The LCSgenetic algorithmis highly elitist since each learning iteration, the vast majority of the population is preserved. Rule discovery may alternatively be performed by some other method, such as anestimation of distribution algorithm, but a GA is by far the most common approach. Evolutionary algorithms like the GA employ a stochastic search, which makes LCS a stochastic algorithm. LCS seeks to cleverly explore the search space, but does not perform an exhaustive search of rule combinations, and is not guaranteed to converge on an optimal solution.
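The rule-discovery step can be sketched with uniform crossover and a simple mutation operator on ternary conditions; this is a simplification (parent selection, offspring parameter initialization, and subsumption checks are omitted, and operator choices vary between systems).

```python
import random

def uniform_crossover(cond1: str, cond2: str):
    """Swap each position between the two parent conditions with probability 0.5."""
    child1, child2 = [], []
    for a, b in zip(cond1, cond2):
        if random.random() < 0.5:
            a, b = b, a
        child1.append(a)
        child2.append(b)
    return "".join(child1), "".join(child2)

def mutate(condition: str, state: str, mu: float = 0.04) -> str:
    """Toggle positions between '#' and the value seen in the current instance."""
    out = []
    for c, s in zip(condition, state):
        if random.random() < mu:
            c = "#" if c != "#" else s
        out.append(c)
    return "".join(out)

o1, o2 = uniform_crossover("#1###0", "0##0##")
print(mutate(o1, "010000"), mutate(o2, "010000"))
```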
The last step in a generic LCS learning cycle is to maintain the maximum population size. The deletion mechanism will select classifiers for deletion (commonly using roulette wheel selection). The probability of a classifier being selected for deletion is inversely proportional to its fitness. When a classifier is selected for deletion, its numerosity parameter is reduced by one. When the numerosity of a classifier is reduced to zero, it is removed entirely from the population.
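Deletion by roulette-wheel selection can be sketched as below; weighting deletion inversely to fitness is one common choice, and real systems usually also factor in numerosity and the size of the niches a rule occupies.

```python
import random

class Rule:
    def __init__(self, fitness: float, numerosity: int = 1):
        self.fitness = fitness
        self.numerosity = numerosity

def delete_one(population: list) -> None:
    # Deletion probability inversely proportional to fitness.
    weights = [1.0 / (rule.fitness + 1e-6) for rule in population]
    chosen = random.choices(population, weights=weights, k=1)[0]
    chosen.numerosity -= 1
    if chosen.numerosity == 0:
        population.remove(chosen)

population = [Rule(0.9), Rule(0.2), Rule(0.5)]
while sum(rule.numerosity for rule in population) > 2:   # enforce max size 2
    delete_one(population)
print(len(population))
```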
LCS will cycle through these steps repeatedly for some user defined number of training iterations, or until some user defined termination criteria have been met. For online learning, LCS will obtain a completely new training instance each iteration from the environment. For offline learning, LCS will iterate through a finite training dataset. Once it reaches the last instance in the dataset, it will go back to the first instance and cycle through the dataset again.
Once training is complete, the rule population will inevitably contain some poor, redundant and inexperienced rules. It is common to apply arule compaction, orcondensationheuristic as a post-processing step. This resulting compacted rule population is ready to be applied as a prediction model (e.g. make predictions on testing instances), and/or to be interpreted forknowledge discovery.
Whether or not rule compaction has been applied, the output of an LCS algorithm is a population of classifiers which can be applied to making predictions on previously unseen instances. The prediction mechanism is not part of the supervised LCS learning cycle itself, however it would play an important role in a reinforcement learning LCS learning cycle. For now we consider how the prediction mechanism can be applied for making predictions to test data. When making predictions, the LCS learning components are deactivated so that the population does not continue to learn from incoming testing data. A test instance is passed to [P] where a match set [M] is formed as usual. At this point the match set is differently passed to a prediction array. Rules in the match set can predict different actions, therefore a voting scheme is applied. In a simple voting scheme, the action with the strongest supporting 'votes' from matching rules wins, and becomes the selected prediction. All rules do not get an equal vote. Rather the strength of the vote for a single rule is commonly proportional to its numerosity and fitness. This voting scheme and the nature of how LCS's store knowledge, suggests that LCS algorithms are implicitlyensemble learners.
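A fitness- and numerosity-weighted voting scheme of the kind described above might look like the following sketch (the rule representation and parameter names are illustrative):

```python
from collections import defaultdict

def predict(match_set):
    """Return the action with the largest fitness * numerosity vote."""
    votes = defaultdict(float)
    for rule in match_set:
        votes[rule["action"]] += rule["fitness"] * rule["numerosity"]
    return max(votes, key=votes.get)

match_set = [
    {"action": "0", "fitness": 0.9, "numerosity": 3},
    {"action": "1", "fitness": 0.6, "numerosity": 2},
    {"action": "0", "fitness": 0.4, "numerosity": 1},
]
print(predict(match_set))   # '0' wins with a vote of 0.9*3 + 0.4*1 = 3.1
```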
Individual LCS rules are typically human-readable IF:THEN expressions. Rules that constitute the LCS prediction model can be ranked by different rule parameters and manually inspected. Global strategies to guide knowledge discovery using statistical and graphical methods have also been proposed.[12][13]With respect to other advanced machine learning approaches, such as artificial neural networks, random forests, or genetic programming, learning classifier systems are particularly well suited to problems that require interpretable solutions.
John Henry Hollandwas best known for his work popularizinggenetic algorithms(GA), through his ground-breaking book "Adaptation in Natural and Artificial Systems"[14]in 1975 and his formalization ofHolland's schema theorem. In 1976, Holland conceptualized an extension of the GA concept to what he called a "cognitive system",[15]and provided the first detailed description of what would become known as the first learning classifier system in the paper "Cognitive Systems based on Adaptive Algorithms".[16]This first system, namedCognitive System One (CS-1)was conceived as a modeling tool, designed to model a real system (i.e.environment) with unknown underlying dynamics using a population of human readable rules. The goal was for a set of rules to performonline machine learningto adapt to the environment based on infrequent payoff/reward (i.e. reinforcement learning) and apply these rules to generate a behavior that matched the real system. This early, ambitious implementation was later regarded as overly complex, yielding inconsistent results.[2][17]
Beginning in 1980,Kenneth de Jongand his student Stephen Smith took a different approach to rule-based machine learning with(LS-1), where learning was viewed as an offline optimization process rather than an online adaptation process.[18][19][20]This new approach was more similar to a standard genetic algorithm but evolved independent sets of rules. Since that time LCS methods inspired by the online learning framework introduced by Holland at the University of Michigan have been referred to asMichigan-style LCS, and those inspired by Smith and De Jong at the University of Pittsburgh have been referred to asPittsburgh-style LCS.[2][17]In 1986, Holland developed what would be considered the standard Michigan-style LCS for the next decade.[21]
Other important concepts that emerged in the early days of LCS research included (1) the formalization of abucket brigade algorithm(BBA) for credit assignment/learning,[22](2) selection of parent rules from a common 'environmental niche' (i.e. thematch set[M]) rather than from the wholepopulation[P],[23](3)covering, first introduced as acreateoperator,[24](4) the formalization of anaction set[A],[24](5) a simplified algorithm architecture,[24](6)strength-based fitness,[21](7) consideration of single-step, or supervised learning problems[25]and the introduction of thecorrect set[C],[26](8)accuracy-based fitness[27](9) the combination of fuzzy logic with LCS[28](which later spawned a lineage offuzzy LCS algorithms), (10) encouraginglong action chainsanddefault hierarchiesfor improving performance on multi-step problems,[29][30][31](11) examininglatent learning(which later inspired a new branch ofanticipatory classifier systems(ACS)[32]), and (12) the introduction of the firstQ-learning-like credit assignment technique.[33]While not all of these concepts are applied in modern LCS algorithms, each were landmarks in the development of the LCS paradigm.
Interest in learning classifier systems was reinvigorated in the mid-1990s largely due to two events: the development of the Q-Learning algorithm[34] for reinforcement learning, and the introduction of significantly simplified Michigan-style LCS architectures by Stewart Wilson.[11][35] Wilson's Zeroth-level Classifier System (ZCS)[35] focused on increasing algorithmic understandability based on Holland's standard LCS implementation.[21] This was done, in part, by removing rule-bidding and the internal message list, essential to the original BBA credit assignment, and replacing it with a hybrid BBA/Q-Learning strategy. ZCS demonstrated that a much simpler LCS architecture could perform as well as the original, more complex implementations. However, ZCS still suffered from performance drawbacks, including the proliferation of over-general classifiers.
In 1995, Wilson published his landmark paper, "Classifier fitness based on accuracy" in which he introduced the classifier systemXCS.[11]XCS took the simplified architecture of ZCS and added an accuracy-based fitness, a niche GA (acting in the action set [A]), an explicit generalization mechanism calledsubsumption, and an adaptation of theQ-Learningcredit assignment. XCS was popularized by its ability to reach optimal performance while evolving accurate and maximally general classifiers as well as its impressive problem flexibility (able to perform bothreinforcement learningandsupervised learning). XCS later became the best known and most studied LCS algorithm and defined a new family ofaccuracy-based LCS. ZCS alternatively became synonymous withstrength-based LCS. XCS is also important, because it successfully bridged the gap between LCS and the field ofreinforcement learning. Following the success of XCS, LCS were later described as reinforcement learning systems endowed with a generalization capability.[36]Reinforcement learningtypically seeks to learn a value function that maps out a complete representation of the state/action space. Similarly, the design of XCS drives it to form an all-inclusive and accurate representation of the problem space (i.e. acomplete map) rather than focusing on high payoff niches in the environment (as was the case with strength-based LCS). Conceptually, complete maps don't only capture what you should do, or what is correct, but also what you shouldn't do, or what's incorrect. Differently, most strength-based LCSs, or exclusively supervised learning LCSs seek a rule set of efficient generalizations in the form of abest action map(or apartial map). Comparisons between strength vs. accuracy-based fitness and complete vs. best action maps have since been examined in greater detail.[37][38]
XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-worldepidemiologicalinvestigations of disease[39]followed closely by Holmes who developed theBOOLE++,[40]EpiCS,[41]and laterEpiXCS[42]forepidemiologicalclassification. These early works inspired later interest in applying LCS algorithms to complex and large-scaledata miningtasks epitomized bybioinformaticsapplications. In 1998, Stolzmann introducedanticipatory classifier systems (ACS)which included rules in the form of 'condition-action-effect, rather than the classic 'condition-action' representation.[32]ACS was designed to predict the perceptual consequences of an action in all possible situations in an environment. In other words, the system evolves a model that specifies not only what to do in a given situation, but also provides information of what will happen after a specific action will be executed. This family of LCS algorithms is best suited to multi-step problems, planning, speeding up learning, or disambiguating perceptual aliasing (i.e. where the same observation is obtained in distinct states but requires different actions). Butz later pursued this anticipatory family of LCS developing a number of improvements to the original method.[43]In 2002, Wilson introducedXCSF, adding a computed action in order to perform function approximation.[44]In 2003, Bernado-Mansilla introduced asUpervised Classifier System (UCS), which specialized the XCS algorithm to the task ofsupervised learning, single-step problems, and forming a best action set. UCS removed thereinforcement learningstrategy in favor of a simple, accuracy-based rule fitness as well as the explore/exploit learning phases, characteristic of many reinforcement learners. Bull introduced a simple accuracy-based LCS(YCS)[45]and a simple strength-based LCSMinimal Classifier System (MCS)[46]in order to develop a better theoretical understanding of the LCS framework. Bacardit introducedGAssist[47]andBioHEL,[48]Pittsburgh-style LCSs designed fordata miningandscalabilityto large datasets inbioinformaticsapplications. In 2008, Drugowitsch published the book titled "Design and Analysis of Learning Classifier Systems" including some theoretical examination of LCS algorithms.[49]Butz introduced the first rule online learning visualization within aGUIfor XCSF[1](see the image at the top of this page). Urbanowicz extended the UCS framework and introducedExSTraCS,explicitly designed forsupervised learningin noisy problem domains (e.g. 
epidemiology and bioinformatics).[50]ExSTraCS integrated (1) expert knowledge to drive covering and genetic algorithm towards important features in the data,[51](2) a form of long-term memory referred to as attribute tracking,[52]allowing for more efficient learning and the characterization of heterogeneous data patterns, and (3) a flexible rule representation similar to Bacardit's mixed discrete-continuous attribute list representation.[53]Both Bacardit and Urbanowicz explored statistical and visualization strategies to interpret LCS rules and perform knowledge discovery for data mining.[12][13]Browne and Iqbal explored the concept of reusing building blocks in the form of code fragments and were the first to solve the 135-bit multiplexer benchmark problem by first learning useful building blocks from simpler multiplexer problems.[54]ExSTraCS 2.0was later introduced to improve Michigan-style LCS scalability, successfully solving the 135-bit multiplexer benchmark problem for the first time directly.[5]The n-bitmultiplexerproblem is highlyepistaticandheterogeneous, making it a very challengingmachine learningtask.
Michigan-Style LCSs are characterized by a population of rules where the genetic algorithm operates at the level of individual rules and the solution is represented by the entire rule population. Michigan style systems also learn incrementally which allows them to perform both reinforcement learning and supervised learning, as well as both online and offline learning. Michigan-style systems have the advantage of being applicable to a greater number of problem domains, and the unique benefits of incremental learning.
Pittsburgh-Style LCSs are characterized by a population of variable length rule-sets where each rule-set is a potential solution. The genetic algorithm typically operates at the level of an entire rule-set. Pittsburgh-style systems can also uniquely evolve ordered rule lists, as well as employ a default rule. These systems have the natural advantage of identifying smaller rule sets, making these systems more interpretable with regards to manual rule inspection.
Systems that seek to combine key strengths of both systems have also been proposed.
The name, "Learning Classifier System (LCS)", is a bit misleading since there are manymachine learningalgorithms that 'learn to classify' (e.g.decision trees,artificial neural networks), but are not LCSs. The term 'rule-based machine learning (RBML)' is useful, as it more clearly captures the essential 'rule-based' component of these systems, but it also generalizes to methods that are not considered to be LCSs (e.g.association rule learning, orartificial immune systems). More general terms such as, 'genetics-based machine learning', and even 'genetic algorithm'[39]have also been applied to refer to what would be more characteristically defined as a learning classifier system. Due to their similarity togenetic algorithms, Pittsburgh-style learning classifier systems are sometimes generically referred to as 'genetic algorithms'. Beyond this, some LCS algorithms, or closely related methods, have been referred to as 'cognitive systems',[16]'adaptive agents', 'production systems', or generically as a 'classifier system'.[55][56]This variation in terminology contributes to some confusion in the field.
Up until the 2000s nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term ‘learning classifier system’ was commonly defined as the combination of ‘trial-and-error’ reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning have since broadened the use and definition of this term.
|
https://en.wikipedia.org/wiki/Learning_classifier_system
|
Mobile identity is a development of online authentication and digital signatures, where the SIM card of one's mobile phone works as an identity tool. Mobile identity enables legally binding authentication and transaction signing for online banking, payment confirmation, corporate services, and consuming online content. The user's certificates are maintained on the telecom operator's SIM card and, in order to use them, the user has to enter a personal, secret PIN code. When using mobile identity, no separate card reader is needed, as the phone itself already performs both functions.
In contrast to other approaches, the mobile phone in conjunction with amobile signature-enabled SIM card aims to offer the same security and ease of use as for examplesmart cardsin existingdigital identitymanagement systems. Smart card-based digital identities can only be used in conjunction with a card reader and aPC. In addition, distributing and managing the cards can be logistically difficult, exacerbated by the lack ofinteroperabilitybetween services relying on such a digital identity.[citation needed]
There are a number of private companystakeholdersthat have an inherent interest in setting up a mobile signature service infrastructure to offer mobile identity services. These stakeholders aremobile network operatorsand, to a certain extent, financial institutions or service providers with an existing large customer base, that could leverage the use of mobile signatures across several applications.
TheFinnish governmenthas supervised the deployment of a common derivative of theETSI-based mobile signature service standard, thus allowing theFinnishmobile operators to offer mobile signature services. The Finnish governmentcertificate authority(CA) also issues the certificates that link thedigitalkeys on theSIMcard to the person's real world identity.[1][2][3]
Through a national mobile registration program, the Iranian customs administration and the Ministry of ICT register a database of the IMEI numbers of legally imported phones, and allow Iranian citizens full access to the national roaming networks of Iranian mobile operators only if they have linked their national ID to both their SIM cards and a non-contraband (non-smuggled) IMEI number.[4]
In theNordic region, governments, public sector and financial institutions are increasingly offering online and mobile channels to access their services. InSwedenthe WPK consortium, owned by banks and mobile operators, specifies a mobile signature service infrastructure that is used by banks to authenticate online banking users.
Telenor Sverigehas provided technology for the company's mobile signature services in Sweden since 2009. Telenor enables its customers a secure login to online services using their mobile phone for authentication and digital signing.[5]
TheEstonian governmentissues all citizens with a smart card and digital identity called theEstonian ID card. Additionally,Sertifitseerimiskeskus, thecertificate authorityof Estonia issues special SIM cards to mobile phones which act as national personal identification method. The service is calledm-id.
In 2007, the mobile operator Turkcell bought a mobile signature service infrastructure from Gemalto and launched Mobillmza, the world's first mobile security solution.[6][7]They have partnered with over 200 businesses, including many banks, to enable them to use mobile signatures for online user authentication.[8]
Other services relying on mobile signatures in Turkey include securing the withdrawal of small loans from anATM, and processing custom work flow processes by enabling applicants to use mobile signatures.[9][10][11][12]
TheAustrian governmentallows private sector companies to propose means for storing the government-controlled digital identity. Since 2006, the Austrian government has explicitly mentioned mobile phones as one of the likely devices to be used for storing and managing adigital identity. Eight Austrian saving banks will launch[when?]a pilot allowing online user authentication with mobile signatures.[13]
In Ukraine, the Mobile ID project started in 2015 and was later declared one of the Government of Ukraine's priorities supported by the EU. At the beginning of 2018, Ukrainian cell operators were evaluating proposals and testing platforms from different local and foreign developers. Platform selection will be followed by a comprehensive certification process.
Ukrainian IT and cryptography work on the Mobile ID topic is mostly represented by Innovation Development HUB LLC with its own Mobile ID platform. This particular solution is the only one to have already passed certification, and it will most likely be implemented in Ukraine.
As of September 2019, all of the 'big three' cell operators in Ukraine have launched a Mobile ID service.
Vodafone- commercial launch in August 2018.
Kyivstar- commercial launch in December 2018.
Lifecell- commercial launch in August 2019.
Vodafone and Lifecell operators implemented Mobile ID solution of Ukrainian origin designed by Innovation Development HUB LLC.
|
https://en.wikipedia.org/wiki/Mobile_identity_management
|
In cryptography, a zero-knowledge proof (also known as a ZK proof or ZKP) is a protocol in which one party (the prover) can convince another party (the verifier) that some given statement is true, without conveying to the verifier any information beyond the mere fact of that statement's truth.[1]The intuition underlying zero-knowledge proofs is that it is trivial to prove possession of the relevant information simply by revealing it; the hard part is to prove this possession without revealing this information (or any aspect of it whatsoever).[2]
In light of the fact that one should be able to generate a proof of some statementonlywhen in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to further third parties.
Zero-knowledge proofs can be interactive, meaning that the prover and verifier exchange messages according to some protocol, or noninteractive, meaning that the verifier is convinced by a single prover message and no other communication is needed. In thestandard model, interaction is required, except for trivial proofs ofBPPproblems.[3]In thecommon random stringandrandom oraclemodels,non-interactive zero-knowledge proofsexist. TheFiat–Shamir heuristiccan be used to transform certain interactive zero-knowledge proofs into noninteractive ones.[4][5][6]
There is a well-known story presenting the fundamental ideas of zero-knowledge proofs, first published in 1990 byJean-Jacques Quisquaterand others in their paper "How to Explain Zero-Knowledge Protocols to Your Children".[7]The two parties in the zero-knowledge proof story arePeggyas the prover of the statement, andVictor, the verifier of the statement.
In this story, Peggy has uncovered the secret word used to open a magic door in a cave. The cave is shaped like a ring, with the entrance on one side and the magic door blocking the opposite side. Victor wants to know whether Peggy knows the secret word; but Peggy, being a very private person, does not want to reveal her knowledge (the secret word) to Victor or to reveal the fact of her knowledge to the world in general.
They label the left and right paths from the entrance A and B. First, Victor waits outside the cave as Peggy goes in. Peggy takes either path A or B; Victor is not allowed to see which path she takes. Then, Victor enters the cave and shouts the name of the path he wants her to use to return, either A or B, chosen at random. Providing she really does know the magic word, this is easy: she opens the door, if necessary, and returns along the desired path.
However, suppose she did not know the word. Then, she would only be able to return by the named path if Victor were to give the name of the same path by which she had entered. Since Victor would choose A or B at random, she would have a 50% chance of guessing correctly. If they were to repeat this trick many times, say 20 times in a row, her chance of successfully anticipating all of Victor's requests would be reduced to 1 in 2^20, or about 9.54×10^−7.
Thus, if Peggy repeatedly appears at the exit Victor names, then he can conclude that it is extremely probable that Peggy does, in fact, know the secret word.
One side note with respect to third-party observers: even if Victor is wearing a hidden camera that records the whole transaction, the only thing the camera will record is in one case Victor shouting "A!" and Peggy appearing at A or in the other case Victor shouting "B!" and Peggy appearing at B. A recording of this type would be trivial for any two people to fake (requiring only that Peggy and Victor agree beforehand on the sequence of As and Bs that Victor will shout). Such a recording will certainly never be convincing to anyone but the original participants. In fact, even a person who was present as an observer at the original experiment should be unconvinced, since Victor and Peggy could have orchestrated the whole "experiment" from start to finish.
Further, if Victor chooses his As and Bs by flipping a coin on-camera, this protocol loses its zero-knowledge property; the on-camera coin flip would probably be convincing to any person watching the recording later. Thus, although this does not reveal the secret word to Victor, it does make it possible for Victor to convince the world in general that Peggy has that knowledge—counter to Peggy's stated wishes. However, digital cryptography generally "flips coins" by relying on apseudo-random number generator, which is akin to a coin with a fixed pattern of heads and tails known only to the coin's owner. If Victor's coin behaved this way, then again it would be possible for Victor and Peggy to have faked the experiment, so using a pseudo-random number generator would not reveal Peggy's knowledge to the world in the same way that using a flipped coin would.
Peggy could prove to Victor that she knows the magic word, without revealing it to him, in a single trial. If both Victor and Peggy go together to the mouth of the cave, Victor can watch Peggy go in through A and come out through B. This would prove with certainty that Peggy knows the magic word, without revealing the magic word to Victor. However, such a proof could be observed by a third party, or recorded by Victor and such a proof would be convincing to anybody. In other words, Peggy could not refute such proof by claiming she colluded with Victor, and she is therefore no longer in control of who is aware of her knowledge.
Imagine your friend "Victor" is red-greencolour-blind(while you are not) and you have two balls: one red and one green, but otherwise identical. To Victor, the balls seem completely identical. Victor is skeptical that the balls are actually distinguishable. You want toprove to Victor that the balls are in fact differently coloured, but nothing else. In particular, you do not want to reveal which ball is the red one and which is the green.
Here is the proof system: You give the two balls to Victor and he puts them behind his back. Next, he takes one of the balls and brings it out from behind his back and displays it. He then places it behind his back again and then chooses to reveal just one of the two balls, picking one of the two at random with equal probability. He will ask you, "Did I switch the ball?" This whole procedure is then repeated as often as necessary.
By looking at the balls' colours, you can, of course, say with certainty whether or not he switched them. On the other hand, if the balls were the same colour and hence indistinguishable, your ability to determine whether a switch occurred would be no better than random guessing. Since the probability that you would have randomly succeeded at identifying each switch/non-switch is 50%, the probability of having randomly succeeded atallswitch/non-switches approaches zero.
Over multiple trials, the success rate wouldstatistically convergeto 50%, and you could not achieve a performance significantly better than chance. If you and your friend repeat this "proof" multiple times (e.g. 20 times), your friend should become convinced that the balls are indeed differently coloured.
The above proof iszero-knowledgebecause your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.[8]
One well-known example of a zero-knowledge proof is the "Where's Wally" example. In this example, the prover wants to prove to the verifier that they know where Wally is on a page in aWhere's Wally?book, without revealing his location to the verifier.[9]
The prover starts by taking a large black board with a small hole in it, the size of Wally. The board is twice the size of the book in both directions, so the verifier cannot see where on the page the prover is placing it. The prover then places the board over the page so that Wally is in the hole.[9]
The verifier can now look through the hole and see Wally, but cannot see any other part of the page. Therefore, the prover has proven to the verifier that they know where Wally is, without revealing any other information about his location.[9]
This example is not a perfect zero-knowledge proof, because the prover does reveal some information about Wally's location, such as his body position. However, it is a decent illustration of the basic concept of a zero-knowledge proof.
A zero-knowledge proof of some statement must satisfy three properties:
The first two of these are properties of more generalinteractive proof systems. The third is what makes the proof zero-knowledge.[10]
Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, the soundness error, that a cheating prover will be able to convince the verifier of a false statement. In other words, zero-knowledge proofs are probabilistic "proofs" rather than deterministic proofs. However, there are techniques to decrease the soundness error to negligibly small values (for example, guessing correctly on a hundred or thousand binary decisions has a soundness error of 1/2^100 or 1/2^1000, respectively; as the number of decisions increases, the soundness error decreases toward zero).
A formal definition of zero-knowledge must use some computational model, the most common one being that of a Turing machine. Let P, V, and S be Turing machines. An interactive proof system with (P, V) for a language L is zero-knowledge if for any probabilistic polynomial time (PPT) verifier V̂ there exists a PPT simulator S such that:
∀x ∈ L, ∀z ∈ {0,1}*: View_V̂[P(x) ↔ V̂(x, z)] = S(x, z),
where View_V̂[P(x) ↔ V̂(x, z)] is a record of the interactions between P(x) and V̂(x, z). The prover P is modeled as having unlimited computation power (in practice, P usually is a probabilistic Turing machine). Intuitively, the definition states that an interactive proof system (P, V) is zero-knowledge if for any verifier V̂ there exists an efficient simulator S (depending on V̂) that can reproduce the conversation between P and V̂ on any given input. The auxiliary string z in the definition plays the role of "prior knowledge" (including the random coins of V̂). The definition implies that V̂ cannot use any prior knowledge string z to mine information out of its conversation with P, because if S is also given this prior knowledge then it can reproduce the conversation between V̂ and P just as before.[citation needed]
The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifier V̂ and the simulator are only computationally indistinguishable, given the auxiliary string.[citation needed]
These ideas can be applied to a more realistic cryptography application. Peggy wants to prove to Victor that she knows thediscrete logarithmof a given value in a givengroup.[11]
For example, given a value y, a large prime p, and a generator g, she wants to prove that she knows a value x such that g^x ≡ y (mod p), without revealing x. Indeed, knowledge of x could be used as a proof of identity, in that Peggy could have such knowledge because she chose a random value x that she did not reveal to anyone, computed y = g^x mod p, and distributed the value of y to all potential verifiers, such that at a later time, proving knowledge of x is equivalent to proving identity as Peggy.
The protocol proceeds as follows: in each round, Peggy generates a random number r, computes C = g^r mod p and discloses this to Victor. After receiving C, Victor randomly issues one of the following two requests: he either requests that Peggy disclose the value of r, or the value of (x + r) mod (p − 1).
Victor can verify either answer; if he requested r, he can then compute g^r mod p and verify that it matches C. If he requested (x + r) mod (p − 1), then he can verify that C is consistent with this, by computing g^((x + r) mod (p − 1)) mod p and verifying that it matches (C · y) mod p. If Peggy indeed knows the value of x, then she can respond to either one of Victor's possible challenges.
If Peggy knew or could guess which challenge Victor is going to issue, then she could easily cheat and convince Victor that she knows x when she does not: if she knows that Victor is going to request r, then she proceeds normally: she picks r, computes C = g^r mod p, and discloses C to Victor; she will be able to respond to Victor's challenge. On the other hand, if she knows that Victor will request (x + r) mod (p − 1), then she picks a random value r′, computes C′ ≡ g^r′ · (g^x)^−1 mod p, and discloses C′ to Victor as the value of C that he is expecting. When Victor challenges her to reveal (x + r) mod (p − 1), she reveals r′, for which Victor will verify consistency, since he will in turn compute g^r′ mod p, which matches C′ · y, since Peggy multiplied by the modular multiplicative inverse of y.
However, if in either one of the above scenarios Victor issues a challenge other than the one she was expecting and for which she manufactured the result, then she will be unable to respond to the challenge under the assumption of infeasibility of solving the discrete log for this group. If she picked r and disclosed C = g^r mod p, then she will be unable to produce a valid (x + r) mod (p − 1) that would pass Victor's verification, given that she does not know x. And if she picked a value r′ that poses as (x + r) mod (p − 1), then she would have to respond with the discrete log of the value that she disclosed – but Peggy does not know this discrete log, since the value C she disclosed was obtained through arithmetic with known values, and not by computing a power with a known exponent.
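An honest run of this protocol can be sketched with toy numbers; the parameters below (p = 23, g = 5, and Peggy's secret x = 6) are illustrative only, and a real deployment would use a large prime-order group and a cryptographically secure source of randomness.

```python
import random

p, g = 23, 5                 # public parameters: small prime and generator
x = 6                        # Peggy's secret
y = pow(g, x, p)             # public value y = g^x mod p

def honest_round() -> bool:
    r = random.randrange(p - 1)                 # Peggy's per-round randomness
    C = pow(g, r, p)                            # commitment sent to Victor
    challenge = random.choice(["r", "x+r"])     # Victor's random request
    if challenge == "r":
        return pow(g, r, p) == C                # Victor recomputes g^r mod p
    s = (x + r) % (p - 1)                       # Peggy reveals (x + r) mod (p - 1)
    return pow(g, s, p) == (C * y) % p          # Victor checks g^s = C*y (mod p)

# Honest Peggy passes every round; a cheater would fail about half of them.
print(all(honest_round() for _ in range(20)))
```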
Thus, a cheating prover has a 0.5 probability of successfully cheating in one round. By executing a large-enough number of rounds, the probability of a cheating prover succeeding can be made arbitrarily low.
To show that the above interactive proof gives zero knowledge other than the fact that Peggy knows x, one can use similar arguments as used in the above proof of completeness and soundness. Specifically, a simulator, say Simon, who does not know x, can simulate the exchange between Peggy and Victor by the following procedure. Firstly, Simon randomly flips a fair coin. If the result is "heads", then he picks a random value r, computes C = g^r mod p, and discloses C as if it is a message from Peggy to Victor. Then Simon also outputs a message "request the value of r" as if it is sent from Victor to Peggy, and immediately outputs the value of r as if it is sent from Peggy to Victor. A single round is complete. On the other hand, if the coin flipping result is "tails", then Simon picks a random number r′, computes C′ = g^r′ · y^−1 mod p, and discloses C′ as if it is a message from Peggy to Victor. Then Simon outputs "request the value of (x + r) mod (p − 1)" as if it is a message from Victor to Peggy. Finally, Simon outputs the value of r′ as if it is the response from Peggy back to Victor. A single round is complete. By the previous arguments when proving the completeness and soundness, the interactive communication simulated by Simon is indistinguishable from the true correspondence between Peggy and Victor. The zero-knowledge property is thus guaranteed.
The following scheme is due toManuel Blum.[12]
In this scenario, Peggy knows aHamiltonian cyclefor a largegraphG. Victor knowsGbut not the cycle (e.g., Peggy has generatedGand revealed it to him.) Finding a Hamiltonian cycle given a large graph is believed to be computationally infeasible, since its corresponding decision version is known to beNP-complete. Peggy will prove that she knows the cycle without simply revealing it (perhaps Victor is interested in buying it but wants verification first, or maybe Peggy is the only one who knows this information and is proving her identity to Victor).
To show that Peggy knows this Hamiltonian cycle, she and Victor play several rounds of a game:
It is important that the commitment to the graph be such that Victor can verify, in the second case, that the cycle is really made of edges fromH. This can be done by, for example, committing to every edge (or lack thereof) separately.
If Peggy does know a Hamiltonian cycle inG, then she can easily satisfy Victor's demand for either the graph isomorphism producingHfromG(which she had committed to in the first step) or a Hamiltonian cycle inH(which she can construct by applying the isomorphism to the cycle inG).
Peggy's answers do not reveal the original Hamiltonian cycle inG. In each round, Victor will learn onlyH's isomorphism toGor a Hamiltonian cycle inH. He would need both answers for a singleHto discover the cycle inG, so the information remains unknown as long as Peggy can generate a distinctHevery round. If Peggy does not know of a Hamiltonian cycle inG, but somehow knew in advance what Victor would ask to see each round, then she could cheat. For example, if Peggy knew ahead of time that Victor would ask to see the Hamiltonian cycle inH, then she could generate a Hamiltonian cycle for an unrelated graph. Similarly, if Peggy knew in advance that Victor would ask to see the isomorphism then she could simply generate an isomorphic graphH(in which she also does not know a Hamiltonian cycle). Victor could simulate the protocol by himself (without Peggy) because he knows what he will ask to see. Therefore, Victor gains no information about the Hamiltonian cycle inGfrom the information revealed in each round.
If Peggy does not know the information, then she can guess which question Victor will ask and generate either a graph isomorphic to G or a Hamiltonian cycle for an unrelated graph, but since she does not know a Hamiltonian cycle for G, she cannot do both. With this guesswork, her chance of fooling Victor is 2^−n, where n is the number of rounds. For all realistic purposes, it is infeasibly difficult to defeat a zero-knowledge proof with a reasonable number of rounds in this way.
Different variants of zero-knowledge can be defined by formalizing the intuitive concept of what is meant by the output of the simulator "looking like" the execution of the real proof protocol in the following ways:
There are various types of zero-knowledge proofs:
Zero-knowledge proof schemes can be constructed from various cryptographic primitives, such ashash-based cryptography,pairing-based cryptography,multi-party computation, orlattice-based cryptography.
Research in zero-knowledge proofs has been motivated byauthenticationsystems where one party wants to prove its identity to a second party via some secret information (such as a password) but does not want the second party to learn anything about this secret. This is called a "zero-knowledgeproof of knowledge". However, a password is typically too small or insufficiently random to be used in many schemes for zero-knowledge proofs of knowledge. Azero-knowledge password proofis a special kind of zero-knowledge proof of knowledge that addresses the limited size of passwords.[citation needed]
In April 2015, the one-out-of-many proofs protocol (aSigma protocol) was introduced.[14]In August 2021,Cloudflare, an American web infrastructure and security company, decided to use the one-out-of-many proofs mechanism for private web verification using vendor hardware.[15]
One of the uses of zero-knowledge proofs within cryptographic protocols is to enforce honest behavior while maintaining privacy. Roughly, the idea is to force a user to prove, using a zero-knowledge proof, that its behavior is correct according to the protocol.[16][17]Because of soundness, we know that the user must really act honestly in order to be able to provide a valid proof. Because of zero knowledge, we know that the user does not compromise the privacy of its secrets in the process of providing the proof.[citation needed]
In 2016, thePrinceton Plasma Physics LaboratoryandPrinceton Universitydemonstrated a technique that may have applicability to futurenuclear disarmamenttalks. It would allow inspectors to confirm whether or not an object is indeed a nuclear weapon without recording, sharing, or revealing the internal workings, which might be secret.[18]
Zero-knowledge proofs were applied in theZerocoinand Zerocash protocols, which culminated in the birth ofZcoin[19](later rebranded asFiroin 2020)[20]andZcashcryptocurrencies in 2016. Zerocoin has a built-in mixing model that does not trust any peers or centralised mixing providers to ensure anonymity.[19]Users can transact in a base currency and can cycle the currency into and out of Zerocoins.[21]The Zerocash protocol uses a similar model (a variant known as anon-interactive zero-knowledge proof)[22]except that it can obscure the transaction amount, while Zerocoin cannot. Given significant restrictions of transaction data on the Zerocash network, Zerocash is less prone to privacy timing attacks when compared to Zerocoin. However, this additional layer of privacy can cause potentially undetected hyperinflation of Zerocash supply because fraudulent coins cannot be tracked.[19][23]
In 2018, Bulletproofs were introduced. Bulletproofs are an improvement from non-interactive zero-knowledge proofs where a trusted setup is not needed.[24]It was later implemented into theMimblewimbleprotocol (which the Grin and Beam cryptocurrencies are based upon) andMonero cryptocurrency.[25]In 2019, Firo implemented the Sigma protocol, which is an improvement on the Zerocoin protocol without trusted setup.[26][14]In the same year, Firo introduced the Lelantus protocol, an improvement on the Sigma protocol, where the former hides the origin and amount of a transaction.[27]
Zero-knowledge proofs by their nature can enhance privacy in identity-sharing systems, which are vulnerable to data breaches and identity theft. When integrated to adecentralized identifiersystem, ZKPs add an extra layer of encryption on DID documents.[28]
Zero-knowledge proofs were first conceived in 1985 byShafi Goldwasser,Silvio Micali, andCharles Rackoffin their paper "The Knowledge Complexity of Interactive Proof-Systems".[16]This paper introduced the IP hierarchy of interactive proof systems (seeinteractive proof system) and conceived the concept ofknowledge complexity, a measurement of the amount of knowledge about the proof transferred from the prover to the verifier. They also gave the first zero-knowledge proof for a concrete problem, that of decidingquadratic nonresiduesmodm. Together with a paper byLászló BabaiandShlomo Moran, this landmark paper invented interactive proof systems, for which all five authors won the firstGödel Prizein 1993.
In their own words, Goldwasser, Micali, and Rackoff say:
Of particular interest is the case where this additional knowledge is essentially 0 and we show that [it] is possible to interactively prove that a number is quadratic non residue mod m releasing 0 additional knowledge. This is surprising as no efficient algorithm for deciding quadratic residuosity mod m is known when m's factorization is not given. Moreover, all known NP proofs for this problem exhibit the prime factorization of m. This indicates that adding interaction to the proving process, may decrease the amount of knowledge that must be communicated in order to prove a theorem.
The quadratic nonresidue problem has both anNPand aco-NPalgorithm, and so lies in the intersection of NP and co-NP. This was also true of several other problems for which zero-knowledge proofs were subsequently discovered, such as an unpublished proof system by Oded Goldreich verifying that a two-prime modulus is not aBlum integer.[29]
Oded Goldreich,Silvio Micali, andAvi Wigdersontook this one step further, showing that, assuming the existence of unbreakable encryption, one can create a zero-knowledge proof system for the NP-completegraph coloring problemwith three colors. Since every problem in NP can be efficiently reduced to this problem, this means that, under this assumption, all problems in NP have zero-knowledge proofs.[30]The reason for the assumption is that, as in the above example, their protocols require encryption. A commonly cited sufficient condition for the existence of unbreakable encryption is the existence ofone-way functions, but it is conceivable that some physical means might also achieve it.
On top of this, they also showed that thegraph nonisomorphism problem, thecomplementof thegraph isomorphism problem, has a zero-knowledge proof. This problem is in co-NP, but is not currently known to be in either NP or any practical class. More generally,Russell ImpagliazzoandMoti Yungas well as Ben-Or et al. would go on to show that, also assuming one-way functions or unbreakable encryption, there are zero-knowledge proofs forallproblems in IP = PSPACE, or in other words, anything that can be proved by an interactive proof system can be proved with zero knowledge.[31][32]
Not liking to make unnecessary assumptions, many theorists sought a way to eliminate the necessity ofone way functions. One way this was done was withmulti-prover interactive proof systems(seeinteractive proof system), which have multiple independent provers instead of only one, allowing the verifier to "cross-examine" the provers in isolation to avoid being misled. It can be shown that, without any intractability assumptions, all languages in NP have zero-knowledge proofs in such a system.[33]
It turns out that, in an Internet-like setting, where multiple protocols may be executed concurrently, building zero-knowledge proofs is more challenging. The line of research investigating concurrent zero-knowledge proofs was initiated by the work ofDwork,Naor, andSahai.[34]One particular development along these lines has been the development ofwitness-indistinguishable proofprotocols. The property of witness-indistinguishability is related to that of zero-knowledge, yet witness-indistinguishable protocols do not suffer from the same problems of concurrent execution.[35]
Another variant of zero-knowledge proofs arenon-interactive zero-knowledge proofs. Blum, Feldman, and Micali showed that a common random string shared between the prover and the verifier is enough to achieve computational zero-knowledge without requiring interaction.[5][6]
The most popular interactive ornon-interactive zero-knowledge proof(e.g., zk-SNARK) protocols can be broadly categorized in the following four categories: Succinct Non-Interactive ARguments of Knowledge (SNARK), Scalable Transparent ARgument of Knowledge (STARK), Verifiable Polynomial Delegation (VPD), and Succinct Non-interactive ARGuments (SNARG). A list of zero-knowledge proof protocols and libraries is provided below along with comparisons based ontransparency,universality,plausible post-quantum security, andprogramming paradigm.[36]A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms.
While zero-knowledge proofs offer a secure way to verify information, the arithmetic circuits that implement them must be carefully designed. If these circuits lack sufficient constraints, they may introduce subtle yet critical security vulnerabilities.
One of the most common classes of vulnerabilities in these systems is under-constrained logic, where insufficient constraints allow a malicious prover to produce a proof for an incorrect statement that still passes verification. A 2024 systematization of known attacks found that approximately 96% of documented circuit-layer bugs in SNARK-based systems were due to under-constrained circuits.[56]
These vulnerabilities often arise during the translation of high-level logic into low-level constraint systems, particularly when using domain-specific languages such as Circom or Gnark. Recent research has demonstrated that formally proving determinism – ensuring that a circuit's outputs are uniquely determined by its inputs – can eliminate entire classes of these vulnerabilities.[57]
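As a toy illustration of under-constrained logic (written in plain Python rather than Circom or Gnark, with invented names), consider a constraint system that is supposed to enforce that an output equals the square of an input; omitting the single multiplication constraint lets an obviously wrong witness pass verification.

# Toy constraint system over a small prime field; didactic sketch only.
P = 2**31 - 1

def verify(constraints, witness):
    """Each constraint (a, b, c) checks witness[a] * witness[b] == witness[c] (mod P)."""
    return all(witness[a] * witness[b] % P == witness[c] % P for a, b, c in constraints)

# Intended statement: "out is the square of inp".
well_constrained  = [("inp", "inp", "out")]
under_constrained = []   # the developer forgot to emit the squaring constraint

honest   = {"inp": 7, "out": 49}
cheating = {"inp": 7, "out": 999}   # wrong output

print(verify(well_constrained, honest))     # True
print(verify(well_constrained, cheating))   # False: the constraint catches the bad witness
print(verify(under_constrained, cheating))  # True: missing constraint, the bad "proof" verifies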
|
https://en.wikipedia.org/wiki/Zero-knowledge_proof#Zero-Knowledge_Proof_protocols
|
Inengineering, afail-safeis a design feature or practice that, in the event of afailureof the design feature, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment or to people. Unlikeinherent safetyto a particular hazard, a system being "fail-safe" does not mean that failure is naturally inconsequential, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. If and when a "fail-safe" system fails, it remains at least as safe as it was before the failure.[1][2]Since many types of failure are possible,failure mode and effects analysisis used to examine failure situations and recommend safety design and procedures.[3]
Some systems can never be made fail-safe, as continuous availability is needed.Redundancy,fault tolerance, orcontingency plansare used for these situations (e.g. multiple independently controlled and fuel-fed engines).[4]
Examples include:
As well as physical devices and systems, fail-safe procedures can be created so that, if a procedure is not carried out or is carried out incorrectly, no dangerous action results.
For example:
Fail-safe (foolproof) devices are also known as poka-yoke devices. Poka-yoke, a Japanese term, was coined by Shigeo Shingo, a quality expert.[11][12] "Safe to fail" refers to civil engineering designs such as the Room for the River project in the Netherlands and the Thames Estuary 2100 Plan,[13][14] which incorporate flexible adaptation strategies or climate change adaptation to provide for, and limit, damage should severe events such as 500-year floods occur.[15]
Fail-safeandfail-secureare distinct concepts.Fail-safemeans that a device will not endanger lives or property when it fails.Fail-secure,also calledfail-closed,means that access or data will not fall into the wrong hands in a security failure. Sometimes the approaches suggest opposite solutions. For example, if a building catches fire, fail-safe systems would unlock doors to ensure quick escape and allow firefighters inside, while fail-secure would lock doors to prevent unauthorized access to the building.
The opposite offail-closedis calledfail-open.
Fail-active-operational behaviour can be implemented on systems that have a high degree of redundancy, so that a single failure of any part of the system is tolerated (fail active operational) and a second failure is detected, at which point the system will turn itself off (uncouple, fail passive). One way of accomplishing this is to have three identical systems installed, together with control logic that detects discrepancies. Examples include many aircraft systems, among them inertial navigation systems and pitot tubes.
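A minimal Python sketch of such voting logic (illustrative only, with hypothetical sensor readings): a single disagreeing channel is out-voted and merely flagged, while the loss of a majority makes the system shut itself down.

from collections import Counter

def vote(readings):
    """Triple-redundant control logic: mask a single faulty channel
    (fail active/operational); shut down when no majority remains (fail passive)."""
    counts = Counter(readings)
    value, votes = counts.most_common(1)[0]
    if votes >= 2:
        degraded = votes == 2          # one channel disagrees: tolerated, but flagged
        return value, degraded
    raise RuntimeError("no majority: second failure detected, system fails passive")

print(vote([101.0, 101.0, 101.0]))     # (101.0, False): all channels agree
print(vote([101.0, 250.0, 101.0]))     # (101.0, True):  single failure masked
# vote([101.0, 250.0, 73.0])           # raises: system uncouples and fails passive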
During theCold War, "failsafe point" was the term used for the point of no return for AmericanStrategic Air Commandnuclear bombers, just outside Soviet airspace. In the event of receiving an attack order, the bombers were required to linger at the failsafe point and wait for a second confirming order; until one was received, they would not arm their bombs or proceed further.[16]The design was to prevent any single failure of the American command system causing nuclear war. This sense of the term entered the American popular lexicon with the publishing of the 1962 novelFail-Safe.
(Other nuclear war command control systems have used the opposite scheme,fail-deadly, which requires continuous or regular proof that an enemy first-strike attack hasnotoccurred topreventthe launching of a nuclear strike.)
|
https://en.wikipedia.org/wiki/Fail-safe
|
Quantum volume is a metric that measures the capabilities and error rates of a quantum computer. It expresses the maximum size of square quantum circuits that can be implemented successfully by the computer. The form of the circuits is independent of the quantum computer architecture, but a compiler can transform and optimize them to take advantage of the computer's features. Thus, quantum volumes for different architectures can be compared.
Quantum computers are difficult to compare. Quantum volume is a single number designed to show all-around performance. It is a measurement, not a calculation, and takes into account several features of a quantum computer, starting with its number of qubits; other measures used are gate and measurement errors, crosstalk, and connectivity.[1][2][3]
IBM defined its Quantum Volume metric[4]because a classical computer's transistor count and a quantum computer's quantum bit count aren't the same. Qubits decohere with a resulting loss of performance so a few fault tolerant bits are more valuable as a performance measure than a larger number of noisy, error-prone qubits.[5][6]
Generally, the larger the quantum volume, the more complex the problems a quantum computer can solve.[7]
Alternative benchmarks, such asCross-entropy benchmarking, reliable Quantum Operations per Second (rQOPS) proposed byMicrosoft, Circuit Layer Operations Per Second (CLOPS) proposed by IBM andIonQ's Algorithmic Qubits, have also been proposed.[8][9]
The quantum volume of a quantum computer was originally defined in 2018 by Nikolaj Moll et al.[10] However, since around 2021 that definition has been supplanted by IBM's 2019 redefinition.[11][12] The original definition depends on the number of qubits N as well as the number of steps that can be executed, the circuit depth d: $\tilde{V}_Q = \min\left[N, d(N)\right]^2.$
The circuit depth depends on the effective error rate $\epsilon_{\text{eff}}$ as $d \simeq \frac{1}{N\,\epsilon_{\text{eff}}}.$
The effective error rateεeffis defined as the average error rate of a two-qubit gate. If the physical two-qubit gates do not have all-to-all connectivity, additionalSWAPgates may be needed to implement an arbitrary two-qubit gate andεeff>ε, whereεis the error rate of the physical two-qubit gates. If more complex hardware gates are available, such as the three-qubitToffoli gate, it is possible thatεeff<ε.
The allowable circuit depth decreases when more qubits with the same effective error rate are added. So with these definitions, as soon as d(N) < N, the quantum volume goes down if more qubits are added. To run an algorithm that only requires n < N qubits on an N-qubit machine, it could be beneficial to select a subset of qubits with good connectivity. For this case, Moll et al.[10] give a refined definition of quantum volume, $\tilde{V}_Q = \max_{n \le N}\left\{\min\left[n, \frac{1}{n\,\epsilon_{\text{eff}}(n)}\right]^2\right\},$ where the maximum is taken over an arbitrary choice of n qubits.
In 2019, IBM's researchers modified the quantum volume definition to be an exponential of the circuit size, stating that it corresponds to the complexity of simulating the circuit on a classical computer:[4][13] $\log_2 V_Q = \operatorname{arg\,max}_{n \le N} \min\left[n, d(n)\right].$
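The following Python sketch (an illustration based on the definitions above, not code from the cited papers) evaluates both the original squared form and the exponent of the IBM-style form for a hypothetical device with a uniform two-qubit error rate; the maximizing qubit count shows why adding equally noisy qubits stops raising the volume.

def quantum_volume_moll(N, eps_eff):
    """Moll-style estimate: maximum over subsets of n <= N qubits of
    min(n, d(n))^2, with achievable depth d(n) ~ 1/(n * eps_eff)."""
    best = 0.0
    for n in range(1, N + 1):
        d = 1.0 / (n * eps_eff)
        best = max(best, min(n, d) ** 2)
    return best

def log2_quantum_volume_ibm(N, eps_eff):
    """IBM-style exponent: the largest n <= N for which a square n-by-n
    circuit is still expected to run successfully, approximated here by d(n) >= n."""
    return max((n for n in range(1, N + 1) if 1.0 / (n * eps_eff) >= n), default=0)

# With a two-qubit error rate of 1%, the achievable depth crosses n around n = 10,
# so a 20-qubit device with the same noise gains nothing from the extra qubits.
print(quantum_volume_moll(20, 0.01))       # ~100
print(log2_quantum_volume_ibm(20, 0.01))   # 10, i.e. V_Q = 2**10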
The world record, as of May 2025, for the highest quantum volume is $2^{23}$.[14] Here is an overview of historically achieved quantum volumes:
The quantum volume benchmark defines a family ofsquarecircuits, whose number of qubitsNand depthdare the same. Therefore, the output of this benchmark is a single number. However, a proposed generalization is the volumetric benchmark[34]framework, which defines a family ofrectangularquantum circuits, for whichNanddare uncoupled to allow the study of time/space performance trade-offs, thereby sacrificing the simplicity of a single-figure benchmark.
Volumetric benchmarks can be generalized not only to account for uncoupledNandddimensions, but also to test different types of quantum circuits. While quantum volume benchmarks the quantum computer's ability to implement a specific type ofrandomized circuits, these can, in principle, be substituted by other families of random circuits, periodic circuits,[35]or algorithm-inspired circuits. Each benchmark must have a success criterion that defines whether a processor has "passed" a given test circuit.
While these data can be analyzed in many ways, a simple method of visualization is illustrating thePareto frontof theNversusdtrade-off for the processor being benchmarked. This Pareto front provides information on the largest depthda patch of a given number of qubitsNcan withstand, or, alternatively, the biggest patch ofNqubits that can withstand executing a circuit of given depthd.
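A short Python helper (illustrative, with made-up benchmark results) that extracts this Pareto front from the set of (width, depth) circuit shapes a processor has passed:

def pareto_front(passed):
    """passed: set of (N, d) rectangles the processor handled successfully.
    Returns the shapes not dominated in both width and depth."""
    front = []
    for N, d in sorted(passed):
        if not any(N2 >= N and d2 >= d and (N2, d2) != (N, d) for N2, d2 in passed):
            front.append((N, d))
    return front

results = {(2, 32), (2, 16), (4, 16), (4, 8), (8, 8), (8, 4), (16, 2)}
print(pareto_front(results))   # [(2, 32), (4, 16), (8, 8), (16, 2)]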
|
https://en.wikipedia.org/wiki/Quantum_volume
|
NESSIE(New European Schemes for Signatures, Integrity and Encryption) was aEuropeanresearch project funded from 2000 to 2003 to identify securecryptographicprimitives. The project was comparable to theNISTAES processand the Japanese Government-sponsoredCRYPTRECproject, but with notable differences from both. In particular, there is both overlap and disagreement between the selections and recommendations from NESSIE and CRYPTREC (as of the August 2003 draft report). The NESSIE participants include some of the foremost activecryptographersin the world, as does the CRYPTREC project.
NESSIE was intended to identify and evaluate quality cryptographic designs in several categories, and to that end issued a public call for submissions in March 2000. Forty-two were received, and in February 2003 twelve of the submissions were selected. In addition, five algorithms already publicly known, but not explicitly submitted to the project, were chosen as "selectees". The project has publicly announced that "no weaknesses were found in the selected designs".
The selected algorithms and their submitters or developers are listed below. The five already publicly known, but not formally submitted to the project, are marked with a "*". Most may be used by anyone for any purpose without needing to seek a patent license from anyone; a license agreement is needed for those marked with a "#", but the licensors of those have committed to "reasonable non-discriminatory license terms for all interested", according to a NESSIE project press release.
None of the sixstream cipherssubmitted to NESSIE were selected because every one fell tocryptanalysis. This surprising result led to theeSTREAMproject.
Entrants that did not get past the first stage of the contest includeNoekeon,Q,Nimbus,NUSH,Grand Cru,Anubis,Hierocrypt,SC2000, andLILI-128.
The contractors and their representatives in the project were:
|
https://en.wikipedia.org/wiki/NESSIE
|
Adecision-to-decision path, orDD-path, is a path of execution (usually through a flow graph representing a program, such as aflow chart) between two decisions. More recent versions of the concept also include the decisions themselves in their own DD-paths.
In Huang's 1975 paper,[1] a decision-to-decision path is defined as a path in a program's flowchart such that all the following hold (quoting from the paper):
Jorgensen's more recent textbooks restate it in terms of a program's flow graph (called a "program graph" in that textbook).[2] First define some preliminary notions: a chain and a maximal chain. A chain is defined as a path in which:
A maximal chain is a chain that is not part of a bigger chain.
A DD-path is a set of nodes in a program graph such that one of the following holds (quoting and keeping Jorgensen's numbering, with comments added in parentheses):[2]
According to Jorgensen (2013), in Great Britain and ISTQB literature, the same notion is called linear code sequence and jump (LCSAJ).[2]
From the latter definition (of Jorgensen) we can conclude the following:
According to Jorgensen's 2013 textbook, DD-path testing is the best known code-based testing method, incorporated in numerous commercial tools.[2]
DD-path testing is also called C2 testing orbranch coverage.[3][4]
|
https://en.wikipedia.org/wiki/Decision-to-decision_path
|
Achild prodigyis, technically, a child under the age of 10 who produces meaningful work in some domain at the level of an adult expert.[1][2][3]The term is also applied more broadly to describe young people who are extraordinarily talented in some field.[4]
The termwunderkind(from GermanWunderkind; literally "wonder child") is sometimes used as a synonym for child prodigy, particularly in media accounts.Wunderkindalso is used to recognise those who achieve success and acclaim early in their adult careers.[5]
Generally, prodigies in all domains are suggested to have relatively elevatedIQ, extraordinary memory, and exceptional attention to detail. Significantly, while math and physics prodigies may have higher IQs, this may be an impediment to art prodigies.[6]
K. Anders Ericsson emphasised the contribution of deliberate practice, rather than innate talent, to prodigies' exceptional performance in chess.[7] Deliberate practice is energy-consuming and requires attention to correcting mistakes. Because prodigies start formal chess training early and dedicate themselves intensely to deliberate practice, they may accumulate enough of it to account for their exceptional performance. This framework therefore provides an arguably reasonable explanation for chess prodigies. However, children with similar amounts of practice still differ in their achievements because of other factors, such as the quality of their deliberate practice and their interest in chess.
Chess prodigies may have higher IQs than normal children. This positive link between the chess skills of prodigies and intelligence is particularly strong for "performance intelligence", covering fluid reasoning, spatial processing, attentiveness to detail, and visual-motor integration, and weakest for "verbal intelligence", the ability to understand and reason using concepts framed in words.[8] However, this positive link is absent among adult experts. Remarkably, in the sample of chess prodigies, the more intelligent children played chess worse. This has been attributed to the more intelligent children spending less time on chess practice.
The practice-plasticity-processes (PPP) model was proposed to explain the existence of chess prodigies by integrating the pure-practice and pure-innate-talent theories. Besides deliberate practice, neuroplasticity is identified as another critical component for developing chess heuristics (e.g., simple search techniques and abstract rules like "occupy the centre"), chunks (e.g., groups of pieces located on specific squares), and templates (e.g., familiarised complex patterns of chunks), which are essential for chess skills. The more plastic the brain is, the easier it is for a child to acquire chunks, templates, and heuristics for better performance. On the other hand, inherited individual differences in the brain circumscribe how readily children can learn these skills.[9]
Music prodigies usually express their talents in exceptional performance or composition.
The Multifactorial Gene-Environment Interaction Model incorporates the roles of adequate practice, certain personality traits, elevated IQ, and exceptional working memory in the explanation of music prodigies.[10] One study tested this model by comparing current and former prodigies with ordinary people and with musicians who showed their talents, or were trained, later in life. It found that prodigies have neither exceptional IQ or working memory nor a distinctive personality profile. The study also emphasises the significance of frequent practice early in life, when the brain is more plastic. Besides the quality of practice and parental investment, the experience of flow during practice is important for efficient and adequate practice in music prodigies. Practice demands high levels of concentration, which is hard for children in general, but flow can provide inherent pleasure in the practice that sustains this focused work.[11]
PET scans performed on several mathematics prodigies have suggested that they think in terms of long-term working memory (LTWM).[12] This memory, specific to a field of expertise, is capable of holding relevant information for extended periods, usually hours. For example, experienced waiters have been found to hold the orders of up to twenty customers in their heads while they serve them, but perform only as well as an average person in number-sequence recognition. The PET scans also answer questions about which specific areas of the brain associate themselves with manipulating numbers.[12]
One subject never excelled as a child in mathematics, but he taught himself algorithms and tricks for calculatory speed, becoming capable of extremely complex mental math. His brain, compared to six other controls, was studied using the PET scan, revealing separate areas of his brain that he manipulated to solve complex problems. Some of the areas that he and presumably prodigies use are brain sectors dealing in visual and spatial memory, as well as visual mental imagery. Other areas of the brain showed use by the subject, including a sector of the brain generally related to childlike "finger counting", probably used in his mind to relate numbers to the visual cortex.[12]
This finding is consistent with the introspective report of this calculating prodigy, which states that he used visual images to encode and retrieve numerical information in LTWM. Compared to short-term memory strategies, used by normal people on complex mathematical problems, encoding and retrieval episodic memory strategies would be more efficient. The prodigy may switch between these two strategies, which reduces the storage retrieval times of long-term memory and circumvents the limited capacities of short-term memory. In turn, they can encode and retrieve specific information (e.g., the intermediate answers during the calculation) in long-term working memory more accurately and effectively.[13]
Similar strategies were found among prodigies masteringmental abacus calculation. The positions of beads on the physicalabacusact as visual proxies of each digit for prodigies to solve complex computations. This one-to-one corresponding structure allows them to rapidly encode and retrieve digits in the long-term working memory during the calculation.[14]ThefMRIscans showed stronger activation of brain areas related to visual processing for Chinese children being trained with abacus mental compared to control groups. This may indicate a greater demand for visuospatial information processing and visual-motor imagination in abacus mental calculation. Additionally, the right middle frontal gyrus activation is suggested to be the neuroanatomical link between prodigies' abacus mental calculation and the visuospatial working memory. This activation serves a mediation effect on the correlation between abacus-based mental calculation andvisuospatial working memory. A training-inducedneuroplasticityregarding working memory performance for children is proposed.[15]A study examining German calculating prodigies also proposed a similar reason for exceptional calculation abilities. Excellent working memory capacities and neuroplastic changes brought by extensive practice would be essential to enhance this domain-specific skill.[16]
"My mother said that I should finish high school and go to college first."
Noting that thecerebellumacts to streamline the speed and efficiency of all thought processes, Vandervert[18]explained the abilities of prodigies in terms of the collaboration ofworking memoryand the cognitive functions of the cerebellum. Citing extensive imaging evidence, Vandervert first proposed this approach in two publications which appeared in 2003. In addition to imaging evidence, Vandervert's approach is supported by the substantial award-winning studies of the cerebellum by Masao Ito.[19]
Vandervert[20]provided extensive argument that, in the prodigy, the transition from visual-spatial working memory to other forms of thought (language, art, mathematics) is accelerated by the unique emotional disposition of the prodigy and the cognitive functions of the cerebellum. According to Vandervert, in the emotion-driven prodigy (commonly observed as a "rage to master") the cerebellum accelerates the streamlining of the efficiencies of working memory in its manipulation and decomposition/re-composition of visual-spatial content intolanguage acquisitionand into linguistic, mathematical, and artistic precocity.[21]
Essentially, Vandervert has argued that when a child is confronted with a challenging new situation, visual-spatial working memory and speech-related and other notational system-related working memory are decomposed and re-composed (fractionated) by the cerebellum and thenblendedin the cerebral cortex in an attempt to deal with the new situation.[22]In child prodigies, Vandervert believes this blending process isaccelerateddue to their unique emotional sensitivities which result in high levels of repetitious focus on, in most cases, particularrule-governedknowledge domains. He has also argued that child prodigies first began to appear about 10,000 years ago when rule-governed knowledge had accumulated to a significant point, perhaps at the agricultural-religious settlements ofGöbekli TepeorCyprus.[23]
Some researchers believe that prodigious talent tends to arise as a result of the innate talent of the child, and the energetic and emotional investment that the child ventures. Others believe that the environment plays the dominant role, many times in obvious ways. For example,László Polgárset out to raise his children to be chess players, and all three of his daughters went on to become world-class players (two of whom aregrandmasters), emphasising the potency a child's environment can have in determining the pursuits toward which a child's energy will be directed, and showing that an incredible amount of skill can be developed through suitable training.[24]
Co-incidence theory explains the development of prodigies along a continuum between nature and nurture. This theory states that various factors interact in the development and expression of human potential, including:[25]
Prodigiousness in childhood is not always maintained into adulthood. Some researchers have found that gifted children fall behind due to lack of effort. Jim Taylor, professor at the University of San Francisco, theorizes that this is because gifted children experience success at an early age with little to no effort and may not develop a sense of ownership of success. Therefore, these children might not develop a connection between effort and outcome. Some children might also believe that they can succeed without effort in the future as well. Anders Ericsson, professor at Florida State University, researches expert performance in sports, music, mathematics, and other activities. His findings demonstrate that prodigiousness in childhood is not a strong indicator of later success. Rather, the number of hours devoted to the activity was a better indicator.[26]
Rosemary Callard-Szulgit and other educators have written extensively about the problem of perfectionism in bright children, calling it their "number one social-emotional trait". Gifted children often associate even slight imperfection with failure, so that they become fearful of effort, even in their personal lives, and in extreme cases end up virtually immobilized.[27]
Prodigies' family pedigrees show an over-representation of relatives with autism. Autism traits, measured by the Autism-spectrum quotient (AQ), were reported in first-degree relatives of both child prodigies and autistic individuals at rates higher than the normal prevalence.[28]
Some autistic traits can be found among prodigies. Firstly, the social function of arithmetic prodigies may be weaker because of larger activation in certain brain areas enhancing their arithmetic performance, which is also essential for social and emotional functions (i.e., precuneus, lingual and fusiform gyrus). Theseneuroplasticchanges in neural networks may modulate their social performances in terms of emotional face processing and emotional evaluation of complex social interactions. Nevertheless, this emotional or social modulation must not score at psychopathological levels.[16]Additionally, the attentiveness to details, a typical characteristic of AQ, is enhanced among prodigies compared to normal people, even those withAsperger syndrome.[6]
|
https://en.wikipedia.org/wiki/Child_prodigy
|
In mathematics, a real interval is the set of all real numbers lying between two fixed endpoints with no "gaps". Each endpoint is either a real number or positive or negative infinity, indicating that the interval extends without a bound. A real interval can contain neither endpoint, either endpoint, or both endpoints, excluding any endpoint which is infinite.
For example, the set of real numbers consisting of0,1, and all numbers in between is an interval, denoted[0, 1]and called theunit interval; the set of allpositive real numbersis an interval, denoted(0, ∞); the set of all real numbers is an interval, denoted(−∞, ∞); and any single real numberais an interval, denoted[a,a].
Intervals are ubiquitous inmathematical analysis. For example, they occur implicitly in theepsilon-delta definition of continuity; theintermediate value theoremasserts that the image of an interval by acontinuous functionis an interval;integralsofreal functionsare defined over an interval; etc.
Interval arithmeticconsists of computing with intervals instead of real numbers for providing a guaranteed enclosure of the result of a numerical computation, even in the presence of uncertainties ofinput dataandrounding errors.
Intervals are likewise defined on an arbitrarytotally orderedset, such asintegersorrational numbers. The notation of integer intervals is consideredin the special section below.
Anintervalis asubsetof thereal numbersthat contains all real numbers lying between any two numbers of the subset. In particular, theempty set∅{\displaystyle \varnothing }and the entire set of real numbersR{\displaystyle \mathbb {R} }are both intervals.
Theendpointsof an interval are itssupremum, and itsinfimum, if they exist as real numbers.[1]If the infimum does not exist, one says often that the corresponding endpoint is−∞.{\displaystyle -\infty .}Similarly, if the supremum does not exist, one says that the corresponding endpoint is+∞.{\displaystyle +\infty .}
Intervals are completely determined by their endpoints and whether each endpoint belongs to the interval. This is a consequence of the least-upper-bound property of the real numbers. This characterization is used to specify intervals by means of interval notation, which is described below.
Anopen intervaldoes not include any endpoint, and is indicated with parentheses.[2]For example,(0,1)={x∣0<x<1}{\displaystyle (0,1)=\{x\mid 0<x<1\}}is the interval of all real numbers greater than0and less than1. (This interval can also be denoted by]0, 1[, see below). The open interval(0, +∞)consists of real numbers greater than0, i.e., positive real numbers. The open intervals have thus one of the forms
wherea{\displaystyle a}andb{\displaystyle b}are real numbers such thata<b.{\displaystyle a<b.}In the last case, the resulting interval is theempty setand does not depend ona{\displaystyle a}. The open intervals are those intervals that areopen setsfor the usualtopologyon the real numbers.
Aclosed intervalis an interval that includes all its endpoints and is denoted with square brackets.[2]For example,[0, 1]means greater than or equal to0and less than or equal to1. Closed intervals have one of the following forms in whichaandbare real numbers such thata<b:{\displaystyle a<b\colon }
The closed intervals are those intervals that areclosed setsfor the usualtopologyon the real numbers.
Ahalf-open intervalhas two endpoints and includes only one of them. It is saidleft-openorright-opendepending on whether the excluded endpoint is on the left or on the right. These intervals are denoted by mixing notations for open and closed intervals.[3]For example,(0, 1]means greater than0and less than or equal to1, while[0, 1)means greater than or equal to0and less than1. The half-open intervals have the form
In summary, a set of the real numbers is an interval, if and only if it is an open interval, a closed interval, or a half-open interval. The only intervals that appear twice in the above classification are∅{\displaystyle \emptyset }andR{\displaystyle \mathbb {R} }that are both open and closed.[4][5]
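As an informal illustration of this classification (a sketch, not a standard library API), a small Python class can represent an interval by its two endpoints together with flags recording whether each endpoint is included.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float                  # use -math.inf for a left-unbounded interval
    hi: float                  # use  math.inf for a right-unbounded interval
    lo_closed: bool = True
    hi_closed: bool = True

    def __contains__(self, x):
        above = x > self.lo or (self.lo_closed and x == self.lo)
        below = x < self.hi or (self.hi_closed and x == self.hi)
        return above and below

    def is_empty(self):
        if self.lo > self.hi:
            return True
        return self.lo == self.hi and not (self.lo_closed and self.hi_closed)

unit = Interval(0.0, 1.0)                                               # [0, 1]
half_open = Interval(0.0, 1.0, lo_closed=True, hi_closed=False)         # [0, 1)
positives = Interval(0.0, math.inf, lo_closed=False, hi_closed=False)   # (0, +inf)
print(1.0 in unit, 1.0 in half_open, 0.0 in positives)                  # True False False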
Adegenerate intervalis anyset consisting of a single real number(i.e., an interval of the form[a,a]).[6]Some authors include the empty set in this definition. A real interval that is neither empty nor degenerate is said to beproper, and has infinitely many elements.
An interval is said to beleft-boundedorright-bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. An interval is said to bebounded, if it is both left- and right-bounded; and is said to beunboundedotherwise. Intervals that are bounded at only one end are said to behalf-bounded. The empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. Bounded intervals are also commonly known asfinite intervals.
Bounded intervals arebounded sets, in the sense that theirdiameter(which is equal to theabsolute differencebetween the endpoints) is finite. The diameter may be called thelength,width,measure,range, orsizeof the interval. The size of unbounded intervals is usually defined as+∞, and the size of the empty interval may be defined as0(or left undefined).
Thecentre(midpoint) of a bounded interval with endpointsaandbis(a+b)/2, and itsradiusis the half-length|a−b|/2. These concepts are undefined for empty or unbounded intervals.
An interval is said to beleft-openif and only if it contains nominimum(an element that is smaller than all other elements);right-openif it contains nomaximum; andopenif it contains neither. The interval[0, 1)= {x| 0 ≤x< 1}, for example, is left-closed and right-open. The empty set and the set of all reals are both open and closed intervals, while the set of non-negative reals, is a closed interval that is right-open but not left-open. The open intervals areopen setsof the real line in its standardtopology, and form abaseof the open sets.
An interval is said to beleft-closedif it has a minimum element or is left-unbounded,right-closedif it has a maximum or is right unbounded; it is simplyclosedif it is both left-closed and right closed. So, the closed intervals coincide with theclosed setsin that topology.
Theinteriorof an intervalIis the largest open interval that is contained inI; it is also the set of points inIwhich are not endpoints ofI. TheclosureofIis the smallest closed interval that containsI; which is also the setIaugmented with its finite endpoints.
For any setXof real numbers, theinterval enclosureorinterval spanofXis the unique interval that containsX, and does not properly contain any other interval that also containsX.
An intervalIis asubintervalof intervalJifIis asubsetofJ. An intervalIis aproper subintervalofJifIis aproper subsetofJ.
However, there is conflicting terminology for the termssegmentandinterval, which have been employed in the literature in two essentially opposite ways, resulting in ambiguity when these terms are used. TheEncyclopedia of Mathematics[7]definesinterval(without a qualifier) to exclude both endpoints (i.e., open interval) andsegmentto include both endpoints (i.e., closed interval), while Rudin'sPrinciples of Mathematical Analysis[8]calls sets of the form [a,b]intervalsand sets of the form (a,b)segmentsthroughout. These terms tend to appear in older works; modern texts increasingly favor the terminterval(qualified byopen,closed, orhalf-open), regardless of whether endpoints are included.
The interval of numbers betweenaandb, includingaandb, is often denoted[a,b]. The two numbers are called theendpointsof the interval. In countries where numbers are written with adecimal comma, asemicolonmay be used as a separator to avoid ambiguity.
To indicate that one of the endpoints is to be excluded from the set, the corresponding square bracket can be either replaced with a parenthesis, or reversed. Both notations are described inInternational standardISO 31-11. Thus, inset builder notation,
Each interval(a,a),[a,a), and(a,a]represents theempty set, whereas[a,a]denotes the singleton set{a}. Whena>b, all four notations are usually taken to represent the empty set.
Both notations may overlap with other uses of parentheses and brackets in mathematics. For instance, the notation(a,b)is often used to denote anordered pairin set theory, thecoordinatesof apointorvectorinanalytic geometryandlinear algebra, or (sometimes) acomplex numberinalgebra. That is whyBourbakiintroduced the notation]a,b[to denote the open interval.[9]The notation[a,b]too is occasionally used for ordered pairs, especially incomputer science.
Some authors such as Yves Tillé use]a,b[to denote the complement of the interval(a,b); namely, the set of all real numbers that are either less than or equal toa, or greater than or equal tob.
In some contexts, an interval may be defined as a subset of theextended real numbers, the set of all real numbers augmented with−∞and+∞.
In this interpretation, the notations[−∞,b],(−∞,b],[a, +∞], and[a, +∞)are all meaningful and distinct. In particular,(−∞, +∞)denotes the set of all ordinary real numbers, while[−∞, +∞]denotes the extended reals.
Even in the context of the ordinary reals, one may use aninfiniteendpoint to indicate that there is no bound in that direction. For example,(0, +∞)is the set ofpositive real numbers, also written asR+.{\displaystyle \mathbb {R} _{+}.}The context affects some of the above definitions and terminology. For instance, the interval(−∞, +∞)=R{\displaystyle \mathbb {R} }is closed in the realm of ordinary reals, but not in the realm of the extended reals.
Whenaandbareintegers, the notation ⟦a, b⟧, or[a..b]or{a..b}or justa..b, is sometimes used to indicate the interval of allintegersbetweenaandbincluded. The notation[a..b]is used in someprogramming languages; inPascal, for example, it is used to formally define a subrange type, most frequently used to specify lower and upper bounds of validindicesof anarray.
Another way to interpret integer intervals are assets defined by enumeration, usingellipsisnotation.
An integer interval that has a finite lower or upper endpoint always includes that endpoint. Therefore, the exclusion of endpoints can be explicitly denoted by writing a..b − 1, a + 1..b, or a + 1..b − 1. Alternate-bracket notations like [a..b) or [a..b[ are rarely used for integer intervals.
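For comparison, a brief illustrative snippet: Python expresses the closed integer interval [a..b] with its half-open built-in range, so the upper endpoint must be shifted by one.

a, b = 3, 7
closed = list(range(a, b + 1))     # [a..b]     -> [3, 4, 5, 6, 7]
right_open = list(range(a, b))     # [a..b - 1] -> [3, 4, 5, 6]
print(closed, right_open)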
The intervals are precisely theconnectedsubsets ofR.{\displaystyle \mathbb {R} .}It follows that the image of an interval by anycontinuous functionfromR{\displaystyle \mathbb {R} }toR{\displaystyle \mathbb {R} }is also an interval. This is one formulation of theintermediate value theorem.
The intervals are also theconvex subsetsofR.{\displaystyle \mathbb {R} .}The interval enclosure of a subsetX⊆R{\displaystyle X\subseteq \mathbb {R} }is also theconvex hullofX.{\displaystyle X.}
Theclosureof an interval is the union of the interval and the set of its finite endpoints, and hence is also an interval. (The latter also follows from the fact that the closure of everyconnected subsetof atopological spaceis a connected subset.) In other words, we have[10]
The intersection of any collection of intervals is always an interval. The union of two intervals is an interval if and only if they have a non-empty intersection or an open end-point of one interval is a closed end-point of the other, for example(a,b)∪[b,c]=(a,c].{\displaystyle (a,b)\cup [b,c]=(a,c].}
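The condition for a union to be an interval can be checked mechanically; the following Python helper (a sketch using an endpoint-plus-flags representation, not a library function) returns whether the union of two intervals is again an interval.

def union_is_interval(a, b):
    """a, b: intervals given as (lo, hi, lo_closed, hi_closed).
    The union of two intervals is an interval iff they overlap, or they touch
    at an endpoint that is closed on at least one side, e.g. (a,b) U [b,c] = (a,c]."""
    (alo, ahi, alc, ahc), (blo, bhi, blc, bhc) = sorted([a, b])
    if ahi > blo:
        return True            # proper overlap
    if ahi == blo:
        return ahc or blc      # touching endpoints: the shared point must be covered
    return False

print(union_is_interval((0, 1, False, False), (1, 2, True, True)))   # True
print(union_is_interval((0, 1, False, False), (1, 2, False, True)))  # False: the point 1 is missing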
If $\mathbb{R}$ is viewed as a metric space, its open balls are the open bounded intervals (c − r, c + r), and its closed balls are the closed bounded intervals [c − r, c + r]. In particular, the metric and order topologies in the real line coincide, which is the standard topology of the real line.
Any elementxof an intervalIdefines a partition ofIinto three disjoint intervalsI1,I2,I3: respectively, the elements ofIthat are less thanx, the singleton[x,x]={x},{\displaystyle [x,x]=\{x\},}and the elements that are greater thanx. The partsI1andI3are both non-empty (and have non-empty interiors), if and only ifxis in the interior ofI. This is an interval version of thetrichotomy principle.
Adyadic intervalis a bounded real interval whose endpoints arej2n{\displaystyle {\tfrac {j}{2^{n}}}}andj+12n,{\displaystyle {\tfrac {j+1}{2^{n}}},}wherej{\displaystyle j}andn{\displaystyle n}are integers. Depending on the context, either endpoint may or may not be included in the interval.
Dyadic intervals have the following properties:
The dyadic intervals consequently have a structure that reflects that of an infinitebinary tree.
Dyadic intervals are relevant to several areas of numerical analysis, includingadaptive mesh refinement,multigrid methodsandwavelet analysis. Another way to represent such a structure isp-adic analysis(forp= 2).[11]
An open finite interval(a,b){\displaystyle (a,b)}is a 1-dimensional openballwith acenterat12(a+b){\displaystyle {\tfrac {1}{2}}(a+b)}and aradiusof12(b−a).{\displaystyle {\tfrac {1}{2}}(b-a).}The closed finite interval[a,b]{\displaystyle [a,b]}is the corresponding closed ball, and the interval's two endpoints{a,b}{\displaystyle \{a,b\}}form a 0-dimensionalsphere. Generalized ton{\displaystyle n}-dimensionalEuclidean space, a ball is the set of points whose distance from the center is less than the radius. In the 2-dimensional case, a ball is called adisk.
If ahalf-spaceis taken as a kind ofdegenerateball (without a well-defined center or radius), a half-space can be taken as analogous to a half-bounded interval, with its boundary plane as the (degenerate) sphere corresponding to the finite endpoint.
A finite interval is (the interior of) a 1-dimensionalhyperrectangle. Generalized toreal coordinate spaceRn,{\displaystyle \mathbb {R} ^{n},}anaxis-alignedhyperrectangle (or box) is theCartesian productofn{\displaystyle n}finite intervals. Forn=2{\displaystyle n=2}this is arectangle; forn=3{\displaystyle n=3}this is arectangular cuboid(also called a "box").
Allowing for a mix of open, closed, and infinite endpoints, the Cartesian product of any $n$ intervals, $I = I_1 \times I_2 \times \cdots \times I_n$, is sometimes called an $n$-dimensional interval.
A facet of such an interval $I$ is the result of replacing any non-degenerate interval factor $I_k$ by a degenerate interval consisting of a finite endpoint of $I_k$. The faces of $I$ comprise $I$ itself and all faces of its facets. The corners of $I$ are the faces that consist of a single point of $\mathbb{R}^n$.
Any finite interval can be constructed as theintersectionof half-bounded intervals (with an empty intersection taken to mean the whole real line), and the intersection of any number of half-bounded intervals is a (possibly empty) interval. Generalized ton{\displaystyle n}-dimensionalaffine space, an intersection of half-spaces (of arbitrary orientation) is (the interior of) aconvex polytope, or in the 2-dimensional case aconvex polygon.
An open interval is a connected open set of real numbers. Generalized totopological spacesin general, a non-empty connected open set is called adomain.
Intervals ofcomplex numberscan be defined as regions of thecomplex plane, eitherrectangularorcircular.[12]
The concept of intervals can be defined in arbitrarypartially ordered setsor more generally, in arbitrarypreordered sets. For apreordered set(X,≲){\displaystyle (X,\lesssim )}and two elementsa,b∈X,{\displaystyle a,b\in X,}one similarly defines the intervals[13]: 11, Definition 11
wherex<y{\displaystyle x<y}meansx≲y≴x.{\displaystyle x\lesssim y\not \lesssim x.}Actually, the intervals with single or no endpoints are the same as the intervals with two endpoints in the larger preordered set
defined by adding new smallest and greatest elements (even if there were ones), which are subsets ofX.{\displaystyle X.}In the case ofX=R{\displaystyle X=\mathbb {R} }one may takeR¯{\displaystyle {\bar {\mathbb {R} }}}to be theextended real line.
A subsetA⊆X{\displaystyle A\subseteq X}of thepreordered set(X,≲){\displaystyle (X,\lesssim )}is(order-)convexif for everyx,y∈A{\displaystyle x,y\in A}and everyx≲z≲y{\displaystyle x\lesssim z\lesssim y}we havez∈A.{\displaystyle z\in A.}Unlike in the case of the real line, a convex set of a preordered set need not be an interval. For example, in thetotally ordered set(Q,≤){\displaystyle (\mathbb {Q} ,\leq )}ofrational numbers, the set
is convex, but not an interval ofQ,{\displaystyle \mathbb {Q} ,}since there is no square root of two inQ.{\displaystyle \mathbb {Q} .}
Let(X,≲){\displaystyle (X,\lesssim )}be apreordered setand letY⊆X.{\displaystyle Y\subseteq X.}The convex sets ofX{\displaystyle X}contained inY{\displaystyle Y}form aposetunder inclusion. Amaximal elementof this poset is called aconvex componentofY.{\displaystyle Y.}[14]: Definition 5.1[15]: 727By theZorn lemma, any convex set ofX{\displaystyle X}contained inY{\displaystyle Y}is contained in some convex component ofY,{\displaystyle Y,}but such components need not be unique. In atotally ordered set, such a component is always unique. That is, the convex components of a subset of a totally ordered set form apartition.
A generalization of the characterizations of the real intervals follows. For a non-empty subsetI{\displaystyle I}of alinear continuum(L,≤),{\displaystyle (L,\leq ),}the following conditions are equivalent.[16]: 153, Theorem 24.1
For asubsetS{\displaystyle S}of alatticeL,{\displaystyle L,}the following conditions are equivalent.
EveryTychonoff spaceis embeddable into aproduct spaceof the closed unit intervals[0,1].{\displaystyle [0,1].}Actually, every Tychonoff space that has abaseofcardinalityκ{\displaystyle \kappa }is embeddable into the product[0,1]κ{\displaystyle [0,1]^{\kappa }}ofκ{\displaystyle \kappa }copies of the intervals.[17]: p. 83, Theorem 2.3.23
The concepts of convex sets and convex components are used in a proof that everytotally ordered setendowed with theorder topologyiscompletely normal[15]or moreover,monotonically normal.[14]
Intervals can be associated with points of the plane, and hence regions of intervals can be associated withregionsof the plane. Generally, an interval in mathematics corresponds to an ordered pair(x,y)taken from thedirect productR×R{\displaystyle \mathbb {R} \times \mathbb {R} }of real numbers with itself, where it is often assumed thaty>x. For purposes ofmathematical structure, this restriction is discarded,[18]and "reversed intervals" wherey−x< 0are allowed. Then, the collection of all intervals[x,y]can be identified with thetopological ringformed by thedirect sumofR{\displaystyle \mathbb {R} }with itself, where addition and multiplication are defined component-wise.
The direct sum algebra(R⊕R,+,×){\displaystyle (\mathbb {R} \oplus \mathbb {R} ,+,\times )}has twoideals, { [x,0] :x∈ R } and { [0,y] :y∈ R }. Theidentity elementof this algebra is the condensed interval[1, 1]. If interval[x,y]is not in one of the ideals, then it hasmultiplicative inverse[1/x, 1/y]. Endowed with the usualtopology, the algebra of intervals forms atopological ring. Thegroup of unitsof this ring consists of fourquadrantsdetermined by the axes, or ideals in this case. Theidentity componentof this group is quadrant I.
Every interval can be considered a symmetric interval around itsmidpoint. In a reconfiguration published in 1956 by M Warmus, the axis of "balanced intervals"[x, −x]is used along with the axis of intervals[x,x]that reduce to a point. Instead of the direct sumR⊕R,{\displaystyle R\oplus R,}the ring of intervals has been identified[19]with thehyperbolic numbersby M. Warmus andD. H. Lehmerthrough the identification
wherej2=1.{\displaystyle j^{2}=1.}
This linear mapping of the plane, which amounts to a ring isomorphism, provides the plane with a multiplicative structure having some analogies to ordinary complex arithmetic, such as polar decomposition.
|
https://en.wikipedia.org/wiki/Interval_(mathematics)
|
In mathematics, anelliptic Gauss sumis an analog of aGauss sumdepending on anelliptic curvewith complex multiplication. Thequadratic residuesymbol in a Gauss sum is replaced by a higher residue symbol such as a cubic or quartic residue symbol, and the exponential function in a Gauss sum is replaced by anelliptic function.
They were introduced byEisenstein(1850), at least in the lemniscate case when the elliptic curve has complex multiplication byi, but seem to have been forgotten or ignored until the paper (Pinch 1988).
(Lemmermeyer 2000, 9.3) gives the following example of an elliptic Gauss sum, for the case of an elliptic curve with complex multiplication byi.
where
|
https://en.wikipedia.org/wiki/Elliptic_Gauss_sum
|
Inmachine learning, thepolynomial kernelis akernel functioncommonly used withsupport vector machines(SVMs) and otherkernelizedmodels, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models.
Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also combinations of these. In the context ofregression analysis, such combinations are known as interaction features. The (implicit) feature space of a polynomial kernel is equivalent to that ofpolynomial regression, but without the combinatorial blowup in the number of parameters to be learned. When the input features are binary-valued (booleans), then the features correspond tological conjunctionsof input features.[1]
For degree-d polynomials, the polynomial kernel is defined as[2] $K(\mathbf{x}, \mathbf{y}) = \left(\mathbf{x}^T \mathbf{y} + c\right)^d,$
wherexandyare vectors of sizenin theinput space, i.e. vectors of features computed from training or test samples andc≥ 0is a free parameter trading off the influence of higher-order versus lower-order terms in the polynomial. Whenc= 0, the kernel is called homogeneous.[3](A further generalized polykernel dividesxTyby a user-specified scalar parametera.[4])
As a kernel,Kcorresponds to an inner product in a feature space based on some mappingφ:
The nature ofφcan be seen from an example. Letd= 2, so we get the special case of the quadratic kernel. After using themultinomial theorem(twice—the outermost application is thebinomial theorem) and regrouping,
From this it follows that the feature map is given by:
generalizing for(xTy+c)d{\displaystyle \left(\mathbf {x} ^{T}\mathbf {y} +c\right)^{d}},
wherex∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}},y∈Rn{\displaystyle \mathbf {y} \in \mathbb {R} ^{n}}and applying themultinomial theorem:
$\left(\mathbf{x}^T\mathbf{y}+c\right)^d=\sum_{j_1+j_2+\dots+j_{n+1}=d}\frac{\sqrt{d!}}{\sqrt{j_1!\cdots j_n!\,j_{n+1}!}}\,x_1^{j_1}\cdots x_n^{j_n}\,{\sqrt{c}}^{\,j_{n+1}}\;\frac{\sqrt{d!}}{\sqrt{j_1!\cdots j_n!\,j_{n+1}!}}\,y_1^{j_1}\cdots y_n^{j_n}\,{\sqrt{c}}^{\,j_{n+1}}=\varphi(\mathbf{x})^T\varphi(\mathbf{y})$
The last summation has $l_d = \binom{n+d}{d}$ elements, so that:
wherel=(j1,j2,...,jn,jn+1){\displaystyle l=(j_{1},j_{2},...,j_{n},j_{n+1})}and
Although theRBF kernelis more popular in SVM classification than the polynomial kernel, the latter is quite popular innatural language processing(NLP).[1][5]The most common degree isd= 2(quadratic), since larger degrees tend tooverfiton NLP problems.
Various ways of computing the polynomial kernel (both exact and approximate) have been devised as alternatives to the usual non-linear SVM training algorithms, including:
One problem with the polynomial kernel is that it may suffer fromnumerical instability: whenxTy+c< 1,K(x,y) = (xTy+c)dtends to zero with increasingd, whereas whenxTy+c> 1,K(x,y)tends to infinity.[4]
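A short Python check (illustrative, not tied to any particular SVM library) confirms numerically that the quadratic kernel equals the inner product of an explicit degree-2 feature map, and hints at the growth that causes the instability just described.

import itertools, math

def poly_kernel(x, y, c=1.0, d=2):
    return (sum(xi * yi for xi, yi in zip(x, y)) + c) ** d

def phi_quadratic(x, c=1.0):
    """Explicit feature map for d = 2: squared terms, sqrt(2)-scaled cross terms,
    sqrt(2c)-scaled linear terms, and the constant c."""
    feats = [xi * xi for xi in x]
    feats += [math.sqrt(2) * xi * xj for xi, xj in itertools.combinations(x, 2)]
    feats += [math.sqrt(2 * c) * xi for xi in x]
    feats.append(c)
    return feats

x, y, c = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0], 1.0
k_direct = poly_kernel(x, y, c, d=2)
k_via_phi = sum(a * b for a, b in zip(phi_quadratic(x, c), phi_quadratic(y, c)))
print(k_direct, k_via_phi)         # both 30.25: the kernel trick avoids the explicit expansion

# Numerical behaviour for large d: the kernel value grows (or shrinks) geometrically.
print(poly_kernel(x, y, c, d=50))  # about 1e37 here; with |x.y + c| < 1 it would collapse toward 0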
|
https://en.wikipedia.org/wiki/Polynomial_kernel
|
In computer programming, unspecified behavior is behavior that may vary on different implementations of a programming language. A program can be said to contain unspecified behavior when its source code may produce an executable that exhibits different behavior when compiled on a different compiler, or on the same compiler with different settings, or indeed in different parts of the same executable. While the respective language standards or specifications may impose a range of possible behaviors, the exact behavior depends on the implementation and may not be completely determined upon examination of the program's source code.[1] Unspecified behavior will often not manifest itself in the resulting program's external behavior, but it may sometimes lead to differing outputs or results, potentially causing portability problems.
To enable compilers to produce optimal code for their respective target platforms, programming language standards do not always impose a certain specific behavior for a given source code construct.[2]Failing to explicitly define the exact behavior of every possible program is not considered an error or weakness in the language specification, and doing so would be infeasible.[1]In theCandC++languages, such non-portableconstructs are generally grouped into three categories: Implementation-defined, unspecified, andundefined behavior.[3]
The exact definition of unspecified behavior varies. In C++, it is defined as "behavior, for a well-formed program construct and correct data, that depends on the implementation."[4]The C++ Standard also notes that the range of possible behaviors is usually provided.[4]Unlike implementation-defined behavior, there is no requirement for the implementation to document its behavior.[4]Similarly, the C Standard defines it as behavior for which the standard "provides two or more possibilities and imposes no further requirements on which is chosen in any instance".[5]Unspecified behavior is different fromundefined behavior. The latter is typically a result of an erroneous program construct or data, and no requirements are placed on the translation or execution of such constructs.[6]
C and C++ distinguishimplementation-defined behaviorfrom unspecified behavior. For implementation-defined behavior, the implementation must choose a particular behavior and document it. An example in C/C++ is the size of integer data types. The choice of behavior must be consistent with the documented behavior within a given execution of the program.
Many programming languages do not specify the order of evaluation of the sub-expressions of a complete expression. This non-determinism can allow optimal implementations for specific platforms, e.g. to utilise parallelism. If one or more of the sub-expressions has side effects, then the result of evaluating the full expression may be different depending on the order of evaluation of the sub-expressions.[1] For example, given a = f(b) + g(b);, where f and g both modify b, the result stored in a may be different depending on whether f(b) or g(b) is evaluated first.[1] In the C and C++ languages, this also applies to function arguments. Example:[2]
The resulting program will write its two lines of output in an unspecified order.[2]In some other languages, such asJava, the order of evaluation of operands and function arguments is explicitly defined.[7]
|
https://en.wikipedia.org/wiki/Unspecified_behaviour
|
Inmathematics, asuperparticular ratio, also called asuperparticular numberorepimoric ratio, is theratioof two consecutiveinteger numbers.
More particularly, the ratio takes the form $\frac{n+1}{n} = 1 + \frac{1}{n},$ where $n$ is a positive integer.
Thus: $\frac{2}{1} = 2,\quad \frac{3}{2} = 1.5,\quad \frac{4}{3} \approx 1.33,\quad \frac{5}{4} = 1.25,\quad \dots$
A superparticular number is when a great number contains a lesser number, to which it is compared, and at the same time one part of it. For example, when 3 and 2 are compared, they contain 2, plus the 3 has another 1, which is half of two. When 3 and 4 are compared, they each contain a 3, and the 4 has another 1, which is a third part of 3. Again, when 5, and 4 are compared, they contain the number 4, and the 5 has another 1, which is the fourth part of the number 4, etc.
Superparticular ratios were written about byNicomachusin his treatiseIntroduction to Arithmetic. Although these numbers have applications in modernpure mathematics, the areas of study that most frequently refer to the superparticular ratios by this name aremusic theory[2]and thehistory of mathematics.[3]
AsLeonhard Eulerobserved, the superparticular numbers (including also the multiply superparticular ratios, numbers formed by adding an integer other than one to aunit fraction) are exactly therational numberswhosesimple continued fractionterminates after two terms. The numbers whose continued fraction terminates in one term are the integers, while the remaining numbers, with three or more terms in their continued fractions, aresuperpartient.[4]
The Wallis product, $\frac{\pi}{2} = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdots,$
represents theirrational numberπin several ways as a product of superparticular ratios and theirinverses. It is also possible to convert theLeibniz formula for πinto anEuler productof superparticular ratios in which each term has aprime numberas its numerator and the nearest multiple of four as its denominator:[5]
Ingraph theory, superparticular numbers (or rather, their reciprocals, 1/2, 2/3, 3/4, etc.) arise via theErdős–Stone theoremas the possible values of theupper densityof an infinite graph.[6]
In the study ofharmony, many musicalintervalscan be expressed as a superparticular ratio (for example, due tooctave equivalency, the ninth harmonic, 9/1, may be expressed as a superparticular ratio, 9/8). Indeed, whether a ratio was superparticular was the most important criterion inPtolemy's formulation of musical harmony.[7]In this application,Størmer's theoremcan be used to list all possible superparticular numbers for a givenlimit; that is, all ratios of this type in which both the numerator and denominator aresmooth numbers.[2]
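As an illustration of this use of Størmer's theorem (a sketch with an arbitrary search bound, not a general algorithm), the following Python snippet lists every superparticular ratio whose numerator and denominator are 5-smooth, the ratios relevant to 5-limit tuning; by Størmer's theorem the list is finite, and the largest such ratio is 81/80.

from fractions import Fraction

def is_smooth(n, limit):
    """True if every prime factor of n is at most `limit`."""
    for p in (2, 3, 5, 7, 11, 13):
        if p > limit:
            break
        while n % p == 0:
            n //= p
    return n == 1

# Superparticular ratios (n+1)/n with both terms 5-smooth; searching below a
# generous bound is enough here because 80/81 is the last consecutive 5-smooth pair.
five_limit = [Fraction(n + 1, n) for n in range(1, 100)
              if is_smooth(n, 5) and is_smooth(n + 1, 5)]
print(five_limit)   # 2/1, 3/2, 4/3, 5/4, 6/5, 9/8, 10/9, 16/15, 25/24, 81/80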
These ratios are also important in visual harmony.Aspect ratiosof 4:3 and 3:2 are common indigital photography,[8]and aspect ratios of 7:6 and 5:4 are used inmedium formatandlarge formatphotography respectively.[9]
Every pair of adjacent positive integers represents a superparticular ratio, and similarly every pair of adjacent harmonics in the harmonic series (music) represents a superparticular ratio. Many individual superparticular ratios have their own names, either in historical mathematics or in music theory. These include the following:
The root of some of these terms comes from Latinsesqui-"one and a half" (fromsemis"a half" and-que"and") describing the ratio 3:2.
|
https://en.wikipedia.org/wiki/Superparticular_ratio
|
Anelectronic lab notebook(also known aselectronic laboratory notebook, orELN) is acomputer programdesigned to replace paperlaboratory notebooks. Lab notebooks in general are used byscientists,engineers, andtechniciansto documentresearch,experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be alegal documentand may be used in acourt of lawasevidence. Similar to aninventor's notebook, the lab notebook is also often referred to inpatentprosecution andintellectual propertylitigation.
Electronic lab notebooks are a fairly new technology and offer many benefits to the user as well as to organizations. For example, electronic lab notebooks are easier to search, simplify data copying and backups, and support collaboration among many users.[1] ELNs can have fine-grained access controls, and can be more secure than their paper counterparts.[2] They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.[3] This is a list of ELN software packages. It is incomplete, as a recent review listed 96 active and 76 inactive (172 total) ELN products.[4] Notably, this review and other lists of ELN software often do not include widely used generic note-taking software like OneNote, Notion, Jupyter, etc., because they lack nominal ELN features like time-stamping and append-only editing. Some ELNs are web-based, others are used on premises, and a few are available for both environments.
|
https://en.wikipedia.org/wiki/List_of_ELN_software_packages
|
Innumber theory, thecrankof aninteger partitionis a certain number associated with the partition. It was first introduced without a definition byFreeman Dyson, who hypothesised its existence in a 1944 paper.[1]Dyson gave a list of properties this yet-to-be-defined quantity should have. In 1988,George E. AndrewsandFrank Garvandiscovered a definition for the crank satisfying the properties hypothesized for it by Dyson.[2]
Let n be a non-negative integer and let p(n) denote the number of partitions of n (p(0) is defined to be 1). Srinivasa Ramanujan in a paper[3] published in 1918 stated and proved the following congruences for the partition function p(n), since known as Ramanujan congruences:

{\displaystyle p(5n+4)\equiv 0{\pmod {5}}}
{\displaystyle p(7n+5)\equiv 0{\pmod {7}}}
{\displaystyle p(11n+6)\equiv 0{\pmod {11}}}
These congruences imply that partitions of numbers of the form 5n+ 4 (respectively, of the forms 7n+ 5 and 11n+ 6 ) can be divided into 5 (respectively, 7 and 11) subclasses of equal size. The then known proofs of these congruences were based on the ideas of generating functions and they did not specify a method for the division of the partitions into subclasses of equal size.
In his Eureka paper Dyson proposed the concept of the rank of a partition. The rank of a partition is the integer obtained by subtracting the number of parts in the partition from the largest part in the partition. For example, the rank of the partition λ = { 4, 2, 1, 1, 1 } of 9 is 4 − 5 = −1. Denoting by N(m, q, n) the number of partitions of n whose ranks are congruent to m modulo q, Dyson considered N(m, 5, 5n + 4) and N(m, 7, 7n + 5) for various values of n and m. Based on empirical evidence Dyson formulated the following conjectures, known as rank conjectures.
For all non-negative integers n we have:

N(m, 5, 5n + 4) = p(5n + 4)/5 for 0 ≤ m ≤ 4, and
N(m, 7, 7n + 5) = p(7n + 5)/7 for 0 ≤ m ≤ 6.
Assuming that these conjectures are true, they provided a way of splitting up all partitions of numbers of the form 5n+ 4 into five classes of equal size: Put in one class all those partitions whose ranks are congruent to each other modulo 5. The same idea can be applied to divide the partitions of integers of the form 7n+ 5 into seven equally numerous classes. But the idea fails to divide partitions of integers of the form 11n+ 6 into 11 classes of the same size, as the following table shows.
Thus the rank cannot be used to prove the theorem combinatorially. However, Dyson wrote,
I hold in fact:
Whether these guesses are warranted by evidence, I leave it to the reader to decide. Whatever the final verdict of posterity may be, I believe the "crank" is unique among arithmetical functions in having been named before it was discovered. May it be preserved from the ignominious fate of the planetVulcan.
In a paper[2] published in 1988 George E. Andrews and F. G. Garvan defined the crank of a partition as follows: for a partition λ, let ℓ(λ) denote the largest part of λ, ω(λ) the number of 1's in λ, and μ(λ) the number of parts of λ larger than ω(λ). The crank c(λ) is then

c(λ) = ℓ(λ) if ω(λ) = 0, and c(λ) = μ(λ) − ω(λ) if ω(λ) > 0.
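A short C sketch of this definition (the array representation of the partition is an assumption made for illustration):

#include <stdio.h>

/* Crank of a partition per the Andrews-Garvan definition sketched above.
   parts[] holds the parts of the partition (in any order); n is their count. */
int crank(const int parts[], int n)
{
    int largest = 0, ones = 0, mu = 0;
    for (int i = 0; i < n; i++) {
        if (parts[i] > largest) largest = parts[i];
        if (parts[i] == 1) ones++;
    }
    if (ones == 0)
        return largest;              /* no 1s: crank is the largest part */
    for (int i = 0; i < n; i++)
        if (parts[i] > ones) mu++;   /* parts larger than the number of 1s */
    return mu - ones;
}

int main(void)
{
    int lambda[] = {4, 2, 1, 1, 1};             /* the partition {4,2,1,1,1} of 9 */
    printf("crank = %d\n", crank(lambda, 5));   /* prints crank = -2 */
    return 0;
}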
The cranks of the partitions of the integers 4, 5, 6 are computed in the following tables.
For all integersn≥ 0 and all integersm, the number of partitions ofnwith crank equal tomis denoted byM(m,n) except forn= 1 whereM(−1,1) = −M(0,1) =M(1,1) = 1 as given by the following generating function. The number of partitions ofnwith crank equal tommoduloqis denoted byM(m,q,n).
The generating function for M(m, n) is given below:

{\displaystyle \sum _{n=0}^{\infty }\sum _{m=-\infty }^{\infty }M(m,n)z^{m}q^{n}=\prod _{k=1}^{\infty }{\frac {1-q^{k}}{(1-zq^{k})(1-z^{-1}q^{k})}}}
Andrews and Garvan proved the following result,[2] which shows that the crank as defined above does meet the conditions given by Dyson:

M(m, 5, 5n + 4) = p(5n + 4)/5 for 0 ≤ m ≤ 4,
M(m, 7, 7n + 5) = p(7n + 5)/7 for 0 ≤ m ≤ 6, and
M(m, 11, 11n + 6) = p(11n + 6)/11 for 0 ≤ m ≤ 10.
The concepts of rank and crank can both be used to classify partitions of certain integers into subclasses of equal size. However the two concepts produce different subclasses of partitions. This is illustrated in the following two tables.
Recent work byBruce C. Berndtand his coauthors argued that Ramanujan knew about the crank, although not in the form that Andrews and Garvan have defined. In a systematic study of the Lost Notebook of Ramanujan, Berndt and his coauthors have given substantial evidence that Ramanujan knew about the dissections of the crank generating function.[4][5]
|
https://en.wikipedia.org/wiki/Crank_of_a_partition
|
Social televisionis the union oftelevisionandsocial media. Millions of people now share their TV experience with other viewers on social media such asTwitterandFacebookusingsmartphonesand tablets.[1]TV networks and rights holders are increasinglysharing videoclips on social platforms tomonetiseengagement and drive tune-in.
The social TV market covers the technologies that support communication and social interaction around TV as well as companies that study television-relatedsocial behaviorand measure social media activities tied to specific TV broadcasts[2]– many of which have attracted significant investment from established media and technology companies. The market is also seeing numerous tie-ups between broadcasters and social networking players such as Twitter and Facebook. The market is expected to be worth $256bn by 2017.[3]
Social TV was named one of the 10 most important emerging technologies by the MIT Technology Review in 2010.[4] In 2011, David Rowan, the editor of Wired magazine,[5] placed social TV third in his list of six tech trends expected to gain traction that year. Ynon Kreiz, CEO of the Endemol Group, told the audience at the Digital Life Design (DLD) conference in January 2011: "Everyone says that social television will be big. I think it's not going to be big—it's going to be huge".[6]
Much of the investment in the earlier years of social TV went into standalone social TV apps. The industry believed these apps would provide an appealing and complementary consumer experience which could then be monetized with ads. These apps featured TV listings, check-ins, stickers and synchronised second-screen content but struggled to attract users away from Twitter and Facebook.[7] Most of these companies have since gone out of business or been acquired amid a wave of consolidation,[8] and the market has instead focused on the activities of the social media channels themselves – such as Twitter Amplify, Facebook Suggested Videos and Snapchat Discover – and the technologies that support them.
Twitter and Facebook are both helping users connect around media, which can provoke strong debate and engagement. Both social platforms want to be the 'digital watercooler' and host conversation around TV because the engagement and data about what media people consume can then be used to generate advertising revenue.[9]
As an open platform, conversation on Twitter is closely aligned with real-time events. In May 2013, it launchedTwitter Amplify– an advertising product for media and consumer brands.[10]With Amplify, Twitter runs video highlights from major live broadcasts, with advertisers' names and messages playing before the clip.[11]
By February 2014, all four major U.S. TV networks had signed up to the Amplify program, bringing a variety of premium TV content onto the social platform in the form of in-tweet real-time video clips.[12]In June 2014, Twitter acquired itsTwitter Amplifypartner in the U.S. SnappyTV, a company that was helping broadcasters and rights holders to share video content both organically across social and via Twitter's Amplify program. Twitter continues to rely onGrabyo, which has also struck numerous deals with some of the largest broadcasters and rights holders in Europe and North America[13]to share video content across Facebook and Twitter.[14]
Facebookmade significant changes to its platform in 2014 including updates to its algorithm to enhance how it serves video in users' feeds. It also launched video autoplay to get users to watch the videos in their feeds. It rapidly surpassed Twitter and by the end of 2014 it was enjoying three billion video views a day on its platform and had announced a partnership with the NFL, one of Twitter's most active Twitter Amplify partners. In April 2015, at its F8 Developer Conference, it revealed it was working withGrabyoamong other technology partners to bring video onto its platform.[15]Then in July it announced it would be launching Facebook Suggested Videos, bringing related videos and ads to anyone that clicks on a video – a move that not only competed with Twitter's commercial video offering but also put it in direct competition withYouTube.[16]
TV Time is a television-dedicated social network that allows users to keep track of the television series they watch, as well as films. It also allows them to express their reactions to the media they have seen, with episode-specific voting for favorite characters and emotional reactions to episodes, as well as commenting on episode-specific pages. This way users are able to avoid spoilers while also finding a precise audience and community for each of their interactions, as opposed to larger, non-television-dedicated social media such as Facebook and Twitter, where the likelihood of unintentionally reading spoilers is much higher. TV Time offers an analytics service called "TVLytics", where the votes and reactions collected from users can be studied for research and television production purposes.[17]
According to Businessinsider.com, there is a variety of applications for social TV, including support for TV ad sales, optimizing TV ad buys, making ad buys more efficient, complementing audience measurement, and, eventually, audience forecasting and real-time optimization. Social TV data can ease access to focus groups and may create a positive feedback loop for generating ultra-sticky TV programming and multi-screen ad campaigns.[18]
Viewers share their TV experience on social media in real time as events unfold: between 88 and 100 million Facebook users log in to the platform during the primetime hours of 8 pm – 11 pm in the US.[19] The volume of social media engagement with TV is also rising – according to Nielsen SocialGuide, tweets about TV increased 38% in 2013, to 263 million.[20]
For the 2014 Super Bowl, Twitter reported that a record 24.9 million tweets about the game were sent during the telecast, peaking at 381,605 tweets per minute.[21]Facebook reported that 50 million people discussed the Super Bowl, generating 185 million interactions.[22]
The 2014 Oscars generated 5m tweets, viewed by an audience of 37m unique Twitter users and delivering 3.3bn impressions globally as conversation and key moments were shared virally across the platform.[23]
In 2014 the All England Lawn Tennis Club (AELTC), host of Wimbledon, used Grabyo to share video content across social media. The videos were viewed 3.5 million times across Facebook and Twitter. It partnered with Grabyo again in 2015, and the videos generated over 48 million views across Facebook and Twitter.[24]
Here are some examples of how TV executives are integrating social elements with TV shows:
|
https://en.wikipedia.org/wiki/Social_television
|
Infer.NETis afree and open source.NETsoftware library formachine learning.[2]It supports runningBayesian inferencein graphical models and can also be used forprobabilistic programming.[3]
Infer.NET follows a model-based approach and is used to solve different kinds of machine learning problems, including standard problems like classification, recommendation or clustering, as well as customized and domain-specific problems. The framework is used in various domains such as bioinformatics, epidemiology, computer vision, and information retrieval.[4][5]
Development of the framework was started by a team at Microsoft's research centre in Cambridge, UK in 2004. It was first released for academic use in 2008 and later open-sourced in 2018.[5] In 2013, Microsoft was awarded the USPTO's Patents for Humanity Award in the Information Technology category for Infer.NET and its work in advanced machine learning techniques.[6][7]
Infer.NET is used internally at Microsoft as the machine learning engine in some of their products such asOffice,Azure, andXbox.[8]
The source code is licensed underMIT Licenseand available onGitHub.[9]It is also available asNuGetpackage.[10]
|
https://en.wikipedia.org/wiki/Infer.NET
|
Link rot(also calledlink death,link breaking, orreference rot) is the phenomenon ofhyperlinkstending over time to cease to point to their originally targetedfile,web page, orserverdue to that resource being relocated to a new address or becoming permanently unavailable. A link that no longer points to its target may be calledbroken,dead, ororphaned.
The rate of link rot is a subject of study and research due to its significance to theinternet's ability to preserve information. Estimates of that rate vary dramatically between studies. Information professionals have warned that link rot could make important archival data disappear, potentially impacting the legal system and scholarship.
A number of studies have examined the prevalence of link rot within theWorld Wide Web, in academic literature that usesURLsto cite web content, and withindigital libraries.
In a 2023 study of the external links on the Million Dollar Homepage, it was found that 27% of the links led to a site loading with no redirects, 45% of the links had been redirected, and 28% returned various error messages.[1]
A 2002 study suggested that link rot within digital libraries is considerably slower than on the web. The article found that about 3% of the objects were no longer accessible after one year,[2]equating to ahalf-lifeof nearly 23 years.
A 2003 study found that on the Web, about one link out of every 200 broke each week,[3]suggesting ahalf-lifeof 138 weeks. This rate was largely confirmed by a 2016–2017 study of links inYahoo! Directory(which had stopped updating in 2014 after 21 years of development) that found the half-life of the directory's links to be two years.[4]
A 2004 study showed that subsets of Web links (such as those targeting specific file types or those hosted by academic institutions) could have dramatically different half-lives.[5]The URLs selected for publication appear to have greater longevity than the average URL. A 2015 study by Weblock analyzed more than 180,000 links from references in the full-text corpora of three major open access publishers and found a half-life of about 14 years,[6]generally confirming a 2005 study that found that half of theURLscited inD-Lib Magazinearticles were active 10 years after publication.[7]Other studies have found higher rates of link rot in academic literature but typically suggest a half-life of four years or greater.[8][9]A 2013 study inBMC Bioinformaticsanalyzed nearly 15,000 links in abstracts from Thomson Reuters'sWeb of Sciencecitation index and found that the median lifespan of web pages was 9.3 years, and just 62% were archived.[10]A 2021 study of external links inNew York Timesarticles published between 1996 and 2019 found a half-life of about 15 years (with significant variance among content topics) but noted that 13% of functional links no longer lead to the original content—a phenomenon calledcontent drift.[11]
A 2013 study found that 49% of links in U.S. Supreme court opinions are dead.[12]
A 2023 study looking at United StatesCOVID-19dashboards found that 23% of the state dashboards available in February 2021 were no longer available at the previous URLs in April 2023.[13]
Pew Researchfound that, in 2023, 38% of pages from 2013 went missing. Also, in 2023, 54% ofEnglish Wikipediaarticles had a dead link in the 'references' section and 23% ofnews articleslinked to a dead URL.[14]
Link rot can result for several reasons. A target web page may be removed. The server that hosts the target page could fail, be removed from service, or relocate to a newdomain name. As far back as 1999, it was noted that with the amount of material that can be stored on a hard drive, "a single disk failure could be like the burning of thelibrary at Alexandria."[15]A domain name's registration may lapse or be transferred to another party. Some causes will result in the link failing to find any target and returning an error such asHTTP 404. Other causes will cause a link to target content other than what was intended by the link's author.
Other reasons for broken links include:
Strategies for preventing link rot can focus on placing content where its likelihood of persisting is higher, authoring links that are less likely to be broken, taking steps to preserve existing links, or repairing links whose targets have been relocated or removed.[citation needed]
The creation of URLs that will not change with time is the fundamental method of preventing link rot. Preventive planning has been championed byTim Berners-Leeand other web pioneers.[16]
Strategies pertaining to the authorship of links include:
Strategies pertaining to the protection of existing links include:
The detection of broken links may be done manually or automatically. Automated methods include plug-ins for content management systems as well as standalone broken-link checkers such as Xenu's Link Sleuth. Automatic checking may not detect links that return a soft 404 or links that return a 200 OK response but point to content that has changed.[26]
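For illustration, a minimal automated check might issue an HTTP HEAD request and inspect the status code. The following C sketch assumes the libcurl library is available and treats a failed request or a 404 as a broken link; as noted above, soft 404s and changed content cannot be detected this way:

#include <stdio.h>
#include <curl/curl.h>

/* Returns the final HTTP status code for url, or 0 if the request failed. */
long check_link(const char *url)
{
    long status = 0;
    CURL *h = curl_easy_init();
    if (!h) return 0;
    curl_easy_setopt(h, CURLOPT_URL, url);
    curl_easy_setopt(h, CURLOPT_NOBODY, 1L);          /* HEAD request */
    curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);  /* follow redirects */
    if (curl_easy_perform(h) == CURLE_OK)
        curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &status);
    curl_easy_cleanup(h);
    return status;
}

int main(void)
{
    const char *url = "https://example.org/";   /* illustrative URL */
    long code = check_link(url);
    printf("%s -> HTTP %ld\n", url, code);
    return 0;
}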
|
https://en.wikipedia.org/wiki/Link_rot
|
Behavior-based robotics (BBR) or behavioral robotics is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite little internal variable state to model their immediate environment, mostly gradually correcting their actions via sensory-motor links.
Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classic artificial intelligence typically uses a set of steps to solve problems, following a path based on internal representations of events; the behavior-based approach, rather than using preset calculations to tackle a situation, relies on adaptability. This advancement has allowed behavior-based robotics to become commonplace in research and data gathering.[1]
Most behavior-based systems are alsoreactive, which means they need no programming of what a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to the changes in immediate environment.
Behavior-based robots (BBR) usually show more biological-appearing actions than theircomputing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs andinsectsare frequent because of these actions. BBRs are sometimes considered examples ofweak artificial intelligence, although some have claimed they are models of all intelligence.[2]
Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire dictating what behaviors to use and when; obstacle avoidance and battery charging, for example, can provide a foundation to help the robots learn and succeed. Rather than build world models, behavior-based robots simply react to their environment and to problems within that environment. They draw upon internal knowledge learned from their past experiences, combined with their basic behaviors, to resolve problems.[1][3]
The school of behavior-based robots owes much to work undertaken in the 1980s at theMassachusetts Institute of TechnologybyRodney Brooks, who with students and colleagues built a series of wheeled and legged robots utilizing thesubsumption architecture. Brooks' papers, often written with lighthearted titles such as "Planning is just a way of avoiding figuring out what to do next", theanthropomorphicqualities of his robots, and the relatively low cost of developing such robots, popularized the behavior-based approach.
Brooks' work builds—whether by accident or not—on two prior milestones in the behavior-based approach. In the 1950s,W. Grey Walter, an English scientist with a background inneurologicalresearch, built a pair ofvacuum tube-based robots that were exhibited at the 1951Festival of Britain, and which have simple but effective behavior-based control systems.
The second milestone isValentino Braitenberg's1984 book, "Vehicles – Experiments in Synthetic Psychology" (MIT Press). He describes a series of thought experiments demonstrating how simply wired sensor/motor connections can result in some complex-appearing behaviors such as fear and love.
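As a loose illustration of how fixed sensor-motor wiring alone can produce seemingly purposeful behavior, here is a small C sketch of a two-sensor, two-motor vehicle in the spirit of Braitenberg's thought experiments (the representation and the numeric values are assumptions for illustration, not taken from the book):

#include <stdio.h>

/* Two light sensors drive two motors directly. With excitatory connections,
   crossed wiring (each sensor drives the opposite-side motor) turns the
   vehicle toward the stimulus; uncrossed wiring turns it away. */
typedef struct { double left, right; } Pair;

Pair motor_speeds(Pair light, int crossed)
{
    Pair m;
    if (crossed) { m.left = light.right; m.right = light.left; }
    else         { m.left = light.left;  m.right = light.right; }
    return m;
}

int main(void)
{
    Pair light = {0.2, 0.9};              /* stronger light on the right */
    Pair toward = motor_speeds(light, 1); /* crossed: left wheel faster, turns toward the light */
    Pair away   = motor_speeds(light, 0); /* uncrossed: right wheel faster, turns away */
    printf("crossed:   L=%.1f R=%.1f\n", toward.left, toward.right);
    printf("uncrossed: L=%.1f R=%.1f\n", away.left, away.right);
    return 0;
}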
Later work in BBR is from theBEAM roboticscommunity, which has built upon the work ofMark Tilden. Tilden was inspired by the reduction in the computational power needed for walking mechanisms from Brooks' experiments (which used onemicrocontrollerfor each leg), and further reduced the computational requirements to that oflogicchips,transistor-basedelectronics, and analogcircuitdesign.
A different direction of development includes extensions of behavior-based robotics to multi-robot teams.[4]The focus in this work is on developing simple generic mechanisms that result in coordinated group behavior, either implicitly or explicitly.
|
https://en.wikipedia.org/wiki/Behavior_based_robotics
|
Instatistics, themean squared error(MSE)[1]ormean squared deviation(MSD) of anestimator(of a procedure for estimating an unobserved quantity) measures theaverageof the squares of theerrors—that is, the average squared difference between the estimated values and thetrue value. MSE is arisk function, corresponding to theexpected valueof thesquared error loss.[2]The fact that MSE is almost always strictly positive (and not zero) is because ofrandomnessor because the estimatordoes not account for informationthat could produce a more accurate estimate.[3]Inmachine learning, specificallyempirical risk minimization, MSE may refer to theempiricalrisk (the average loss on an observed data set), as an estimate of the true MSE (the true risk: the average loss on the actual population distribution).
The MSE is a measure of the quality of an estimator. As it is derived from the square ofEuclidean distance, it is always a positive value that decreases as the error approaches zero.
The MSE is the secondmoment(about the origin) of the error, and thus incorporates both thevarianceof the estimator (how widely spread the estimates are from onedata sampleto another) and itsbias(how far off the average estimated value is from the true value).[citation needed]For anunbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy tostandard deviation, taking the square root of MSE yields theroot-mean-square errororroot-mean-square deviation(RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of thevariance, known as thestandard error.
The MSE either assesses the quality of apredictor(i.e., a function mapping arbitrary inputs to a sample of values of somerandom variable), or of anestimator(i.e., amathematical functionmapping asampleof data to an estimate of aparameterof thepopulationfrom which the data is sampled). In the context of prediction, understanding theprediction intervalcan also be useful as it provides a range within which a future observation will fall, with a certain probability. The definition of an MSE differs according to whether one is describing a predictor or an estimator.
If a vector of n{\displaystyle n} predictions is generated from a sample of n{\displaystyle n} data points on all variables, and Y{\displaystyle Y} is the vector of observed values of the variable being predicted, with Y^{\displaystyle {\hat {Y}}} being the predicted values (e.g. as from a least-squares fit), then the within-sample MSE of the predictor is computed as

{\displaystyle \operatorname {MSE} ={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\hat {Y_{i}}}\right)^{2}.}
In other words, the MSE is themean(1n∑i=1n){\textstyle \left({\frac {1}{n}}\sum _{i=1}^{n}\right)}of thesquares of the errors(Yi−Yi^)2{\textstyle \left(Y_{i}-{\hat {Y_{i}}}\right)^{2}}. This is an easily computable quantity for a particular sample (and hence is sample-dependent).
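For concreteness, a small C sketch computing this quantity for arrays of observed and predicted values (the numbers are illustrative):

#include <stdio.h>

/* Within-sample MSE of a predictor: average squared difference between
   observed values y[] and predictions yhat[]. */
double mse(const double y[], const double yhat[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double e = y[i] - yhat[i];
        sum += e * e;
    }
    return sum / n;
}

int main(void)
{
    double y[]    = {3.0, -0.5, 2.0, 7.0};
    double yhat[] = {2.5,  0.0, 2.0, 8.0};
    printf("MSE = %f\n", mse(y, yhat, 4));  /* (0.25 + 0.25 + 0 + 1)/4 = 0.375 */
    return 0;
}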
In matrix notation,

{\displaystyle \operatorname {MSE} ={\frac {1}{n}}\sum _{i=1}^{n}e_{i}^{2}={\frac {1}{n}}\mathbf {e} ^{\mathsf {T}}\mathbf {e} }
whereei{\displaystyle e_{i}}is(Yi−Yi^){\displaystyle (Y_{i}-{\hat {Y_{i}}})}ande{\displaystyle \mathbf {e} }is an×1{\displaystyle n\times 1}column vector.
The MSE can also be computed on q data points that were not used in estimating the model, either because they were held back for this purpose, or because these data have been newly obtained. Within this process, known as cross-validation, the MSE is often called the test MSE,[4] and is computed as

{\displaystyle \operatorname {MSE} ={\frac {1}{q}}\sum _{i=n+1}^{n+q}\left(Y_{i}-{\hat {Y_{i}}}\right)^{2}.}
The MSE of an estimator θ^{\displaystyle {\hat {\theta }}} with respect to an unknown parameter θ{\displaystyle \theta } is defined as[1]

{\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} _{\theta }\left[({\hat {\theta }}-\theta )^{2}\right].}
This definition depends on the unknown parameter, therefore the MSE is apriori propertyof an estimator. The MSE could be a function of unknown parameters, in which case anyestimatorof the MSE based on estimates of these parameters would be a function of the data (and thus a random variable). If the estimatorθ^{\displaystyle {\hat {\theta }}}is derived as a sample statistic and is used to estimate some population parameter, then the expectation is with respect to thesampling distributionof the sample statistic.
The MSE can be written as the sum of thevarianceof the estimator and the squaredbiasof the estimator, providing a useful way to calculate the MSE and implying that in the case of unbiased estimators, the MSE and variance are equivalent.[5]
MSE(θ^)=Eθ[(θ^−θ)2]=Eθ[(θ^−Eθ[θ^]+Eθ[θ^]−θ)2]=Eθ[(θ^−Eθ[θ^])2+2(θ^−Eθ[θ^])(Eθ[θ^]−θ)+(Eθ[θ^]−θ)2]=Eθ[(θ^−Eθ[θ^])2]+Eθ[2(θ^−Eθ[θ^])(Eθ[θ^]−θ)]+Eθ[(Eθ[θ^]−θ)2]=Eθ[(θ^−Eθ[θ^])2]+2(Eθ[θ^]−θ)Eθ[θ^−Eθ[θ^]]+(Eθ[θ^]−θ)2Eθ[θ^]−θ=constant=Eθ[(θ^−Eθ[θ^])2]+2(Eθ[θ^]−θ)(Eθ[θ^]−Eθ[θ^])+(Eθ[θ^]−θ)2Eθ[θ^]=constant=Eθ[(θ^−Eθ[θ^])2]+(Eθ[θ^]−θ)2=Varθ(θ^)+Biasθ(θ^,θ)2{\displaystyle {\begin{aligned}\operatorname {MSE} ({\hat {\theta }})&=\operatorname {E} _{\theta }\left[({\hat {\theta }}-\theta )^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]+\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}+2\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+\operatorname {E} _{\theta }\left[2\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\right]+\operatorname {E} _{\theta }\left[\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\right]\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+2\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\operatorname {E} _{\theta }\left[{\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right]+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}&&\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta ={\text{constant}}\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+2\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}&&\operatorname {E} _{\theta }[{\hat {\theta }}]={\text{constant}}\\&=\operatorname {E} _{\theta }\left[\left({\hat {\theta }}-\operatorname {E} _{\theta }[{\hat {\theta }}]\right)^{2}\right]+\left(\operatorname {E} _{\theta }[{\hat {\theta }}]-\theta \right)^{2}\\&=\operatorname {Var} _{\theta }({\hat {\theta }})+\operatorname {Bias} _{\theta }({\hat {\theta }},\theta )^{2}\end{aligned}}}
An even shorter proof can be achieved using the well-known formula that for a random variable X{\textstyle X}, E(X2)=Var(X)+(E(X))2{\textstyle \mathbb {E} (X^{2})=\operatorname {Var} (X)+(\mathbb {E} (X))^{2}}. By substituting X{\textstyle X} with θ^−θ{\textstyle {\hat {\theta }}-\theta }, we have

{\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {Var} ({\hat {\theta }}-\theta )+\left(\operatorname {E} [{\hat {\theta }}-\theta ]\right)^{2}=\operatorname {Var} ({\hat {\theta }})+\operatorname {Bias} ({\hat {\theta }},\theta )^{2}.}
But in a real modeling case, MSE can be described as the sum of model variance, model bias, and irreducible uncertainty (see Bias–variance tradeoff). According to this relationship, the MSE of estimators can be used for efficiency comparison, since it incorporates the information about both estimator variance and bias. This is called the MSE criterion.
In regression analysis, plotting is a natural way to view the overall trend of the data. The mean of the squared distances from each point to the predicted regression model can be calculated and reported as the mean squared error. The squaring removes the complications caused by negative signs. Minimizing the MSE makes the model more accurate, meaning its predictions are closer to the actual data. One example of a linear regression using this method is the least squares method, which evaluates the appropriateness of a linear regression model for a bivariate dataset,[6] but whose limitation is related to the known distribution of the data.
The termmean squared erroris sometimes used to refer to the unbiased estimate of error variance: theresidual sum of squaresdivided by the number ofdegrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n−p) forpregressorsor (n−p−1) if an intercept is used (seeerrors and residuals in statisticsfor more details).[7]Although the MSE (as defined in this article) is not an unbiased estimator of the error variance, it isconsistent, given the consistency of the predictor.
In regression analysis, "mean squared error", often referred to asmean squared prediction erroror "out-of-sample mean squared error", can also refer to the mean value of thesquared deviationsof the predictions from the true values, over an out-of-sampletest space, generated by a model estimated over aparticular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space.
In the context ofgradient descentalgorithms, it is common to introduce a factor of1/2{\displaystyle 1/2}to the MSE for ease of computation after taking the derivative. So a value which is technically half the mean of squared errors may be called the MSE.
Suppose we have a random sample of size n{\displaystyle n} from a population, X1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}. Suppose the sample units were chosen with replacement. That is, the n{\displaystyle n} units are selected one at a time, and previously selected units are still eligible for selection for all n{\displaystyle n} draws. The usual estimator for the population mean μ{\displaystyle \mu } is the sample average

{\displaystyle {\bar {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}}

which has an expected value equal to the true mean μ{\displaystyle \mu } (so it is unbiased) and a mean squared error of

{\displaystyle \operatorname {MSE} \left({\bar {X}}\right)=\operatorname {E} \left[\left({\bar {X}}-\mu \right)^{2}\right]={\frac {\sigma ^{2}}{n}}}
whereσ2{\displaystyle \sigma ^{2}}is thepopulation variance.
For aGaussian distributionthis is thebest unbiased estimatorof the population mean, that is the one with the lowest MSE (and hence variance) among all unbiased estimators. One can check that the MSE above equals the inverse of theFisher information(seeCramér–Rao bound). But the same sample mean is not the best estimator of the population mean, say, for auniform distribution.
The usual estimator for the variance is the corrected sample variance:

{\displaystyle S_{n-1}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}\left(X_{i}-{\bar {X}}\right)^{2}}

This is unbiased (its expected value is σ2{\displaystyle \sigma ^{2}}), hence also called the unbiased sample variance, and its MSE is[8]

{\displaystyle \operatorname {MSE} (S_{n-1}^{2})={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right)=\left({\frac {2}{n-1}}+{\frac {\gamma _{2}}{n}}\right)\sigma ^{4}}
whereμ4{\displaystyle \mu _{4}}is the fourthcentral momentof the distribution or population, andγ2=μ4/σ4−3{\displaystyle \gamma _{2}=\mu _{4}/\sigma ^{4}-3}is theexcess kurtosis.
However, one can use other estimators forσ2{\displaystyle \sigma ^{2}}which are proportional toSn−12{\displaystyle S_{n-1}^{2}}, and an appropriate choice can always give a lower mean squared error. If we define
then we calculate:
This is minimized when
For aGaussian distribution, whereγ2=0{\displaystyle \gamma _{2}=0}, this means that the MSE is minimized when dividing the sum bya=n+1{\displaystyle a=n+1}. The minimum excess kurtosis isγ2=−2{\displaystyle \gamma _{2}=-2},[a]which is achieved by aBernoulli distributionwithp= 1/2 (a coin flip), and the MSE is minimized fora=n−1+2n.{\displaystyle a=n-1+{\tfrac {2}{n}}.}Hence regardless of the kurtosis, we get a "better" estimate (in the sense of having a lower MSE) by scaling down the unbiased estimator a little bit; this is a simple example of ashrinkage estimator: one "shrinks" the estimator towards zero (scales down the unbiased estimator).
Further, while the corrected sample variance is thebest unbiased estimator(minimum mean squared error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian, then even among unbiased estimators, the best unbiased estimator of the variance may not beSn−12.{\displaystyle S_{n-1}^{2}.}
The following table gives several estimators of the true parameters of the population, μ and σ2, for the Gaussian case.[9]
An MSE of zero, meaning that the estimatorθ^{\displaystyle {\hat {\theta }}}predicts observations of the parameterθ{\displaystyle \theta }with perfect accuracy, is ideal (but typically not possible).
Values of MSE may be used for comparative purposes. Two or morestatistical modelsmay be compared using their MSEs—as a measure of how well they explain a given set of observations: An unbiased estimator (estimated from a statistical model) with the smallest variance among all unbiased estimators is thebest unbiased estimatoror MVUE (Minimum-Variance Unbiased Estimator).
Bothanalysis of varianceandlinear regressiontechniques estimate the MSE as part of the analysis and use the estimated MSE to determine thestatistical significanceof the factors or predictors under study. The goal ofexperimental designis to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects.
In one-way analysis of variance, MSE can be calculated by dividing the sum of squared errors by the degrees of freedom. Also, the F-value is the ratio of the mean squared treatment to the MSE.
MSE is also used in severalstepwise regressiontechniques as part of the determination as to how many predictors from a candidate set to include in a model for a given set of observations.
Minimizing MSE is a key criterion in selecting estimators; seeminimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is theminimum variance unbiased estimator. However, a biased estimator may have lower MSE; seeestimator bias.
Instatistical modellingthe MSE can represent the difference between the actual observations and the observation values predicted by the model. In this context, it is used to determine the extent to which the model fits the data as well as whether removing some explanatory variables is possible without significantly harming the model's predictive ability.
Inforecastingandprediction, theBrier scoreis a measure offorecast skillbased on MSE.
Squared error loss is one of the most widely usedloss functionsin statistics, though its widespread use stems more from mathematical convenience than considerations of actual loss in applications.Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds.[3]The mathematical benefits of mean squared error are particularly evident in its use at analyzing the performance oflinear regression, as it allows one to partition the variation in a dataset into variation explained by the model and variation explained by randomness.
The use of mean squared error without question has been criticized by thedecision theoristJames Berger. Mean squared error is the negative of the expected value of one specificutility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.[10]
Likevariance, mean squared error has the disadvantage of heavily weightingoutliers.[11]This is a result of the squaring of each term, which effectively weights large errors more heavily than small ones. This property, undesirable in many applications, has led researchers to use alternatives such as themean absolute error, or those based on themedian.
|
https://en.wikipedia.org/wiki/Mean_squared_error
|
KNXis anopen standard(seeEN 50090,ISO/IEC14543) for commercial and residentialbuilding automation. KNX devices can manage lighting, blinds and shutters,HVAC, security systems, energy management, audio video, domestic appliances, displays, remote control, etc. KNX evolved from three earlier standards; theEuropean Home Systems Protocol(EHS),BatiBUS, and theEuropean Installation Bus(EIB orInstabus).
It can usetwisted pair(in atree, line orstartopology),powerline,RF, orIPlinks. On this network, the devices formdistributed applicationsand tight interaction is possible. This is implemented via interworking models with standardised datapoint types andobjects, modellinglogicaldevice channels.
The KNX standard has been built on theOSI-basedEIBcommunication stackextended with thephysical layers, configuration modes and application experience ofBatiBUSandEHS.
KNX installations can use several physical communication media:
KNX is not based on a specific hardware platform and a network can be controlled by anything from an 8-bitmicrocontrollerto a PC, according to the demands of a particular building. The most common form of installation is over twisted pair medium.
KNX is an approved standard by the following organisations, (inter alia):[1]
It is administered by the KNX Associationcvba, a non-profit organisation governed by Belgian law which was formed in 1999. The KNX Association had 500 registered hardware and software vendor members from 45 nations as at 1 July 2021. It had partnership agreements with 100,000 installer companies in 172 countries and more than 500 registered training centres.[2]This is a royalty-freeopen standardand thus access to the KNX specifications is unrestricted.[3]
KNX devices are commonly connected by a twisted pair bus and can be modified from a controller. The bus is routed in parallel to the electrical power supply to all devices and systems on the network linking:[4]
Classifying devices as either "sensor" or "actuator" is outdated and simplistic. Many actuators include controller functionality, but also sensor functionality (for instance measuring operating hours, number of switch cycles, current, electrical power consumption, and more).
Application software, together with system topology and commissioning software, is loaded onto the devices via a system interface component. Installed systems can be accessed via LAN, point to point links, or phone networks for central or distributed control of the system via computers, tablets and touch screens, and smartphones.
The key features of the KNX architecture are:
Central to the KNX architecture concepts aredatapoints(inputs, outputs, parameters, and diagnostic data) which represent process and control variables in the system. The standardised containers for these datapoints aregroup objectsandinterface object properties. The communication system offers a reduced instruction set to read and write datapoint values. Datapoints have to conform to standardiseddatapoint types, themselves grouped intofunctional blocks. These functional blocks and datapoint types are related to applications fields, but some of them are of general use (such as date and time). Datapoints may be accessed through unicast or multicast mechanisms.
To logically link applications' datapoints across the network, KNX has three underlying binding schemes: one for free, one for structured and one for tagged binding:
The common kernel sits on top of the physical layers and the medium-specific data link layer and is shared by all the devices on the KNX Network. It is OSI 7-layer model compliant:
An installation has to be configured at the network topology level and at the level of individual nodes or devices. The first level is a precondition or “bootstrap” phase, prior to the configuration of the distributed applications, i.e. binding and parameter setting. Configuration may be achieved through a combination of local activity on the devices (such as pushing a button) and active network management communication over the bus (peer-to-peer, or more centralized master-slave).
The KNX configuration mode:
Some modes require more active management over the bus, whereas some others are mainly oriented towards local configuration. There are three categories of KNX devices:
KNX encompasses tools for project engineering tasks such as linking a series of individual devices into a functioning installation and integrating different media and configuration modes. This is embodied in anEngineering Tool Software(ETS) suite.
A KNX installation always consists of a set of devices connected to the bus or network. Device models vary according to node roles, capabilities, management features and configuration modes, and are all laid down in theprofiles. There are also general-purpose device models, such as for bus coupling units (BCUs) or bus interface modules (BIMs).
Devices may be identified and subsequently accessed throughout the network either by their individual address, or by their unique serial number, depending on the configuration mode. (Unique serial numbers are allocated by the KNX Association Certification Department.) Devices can also disclose both a manufacturer specific reference and functional (manufacturer independent) information when queried.
A KNX wired network can be formed withtree,lineandstartopologies, which can be mixed as needed;ringtopologies arenotsupported. A tree topology is recommended for a large installation.
KNX can link up to 57,375 devices using16-bitaddresses.
Coupling units allow address filtering which helps to improve performance given the limited bus signal speed. An installation based on KNXnet/IP allows the integration of KNX sub networks via IP as the KNX address structure is similar to an IP address.
The TP1twisted pairbus (inherited from EIB) providesasynchronous, character oriented data transfer andhalf-duplexbidirectionaldifferential signalingwith a signaling speed of 9600 bit/s.Media access controlis viaCSMA/CA. Every bus user has equal data transmission rights and data is exchanged directly (peer-to-peer) between bus users.SELVpower is distributed via the same pair for low-power devices. A deprecated specification, TP0, running at a slower signalling speed of4800 bit/s, has been retained from the BatiBUS standard but KNX products cannot exchange information with BatiBUS devices.
PL 110 power-line transmission is delivered usingspread frequency shift keyingsignalling with asynchronous transmission of data packets and half duplex bi-directional communication. It uses the central frequency 110 kHz (CENELEC B-band) and has a data rate of 1200 bit/s. It also uses CSMA. KNX Powerline is aimed at smartwhite goods, but the take-up has been low. An alternative variant, PL 132, has a carrier frequency centred on 132.5 kHz (CENELEC C-band).
RF enables communication in the 868.3 MHz band for usingfrequency shift keyingwithManchester data encoding.
KNXnet/IP, on port 3671, provides integration solutions for IP-enabled media like Ethernet (IEEE 802.3), Bluetooth, WiFi/Wireless LAN (IEEE 802.11), FireWire (IEEE 1394), etc.
Ignoring any preamble for medium-specific access and collision control, a frame format is generally:
KNX Telegrams can be signed or encrypted thanks to the extension of the protocol that was developed starting in 2013, KNX Data Secure for securing telegrams on the traditional KNX media TP and RF and KNX IP Secure for securing KNX telegrams tunnelled via IP. KNX Data Secure became an EN standard (EN 50090-3-4) in 2018, KNX IP Secure an ISO standard (ISO 22510) in 2019.
Any product labeled with the KNX trademark must be certified to conform with the standards (and thus interoperable with other devices) by accredited third party test labs. All products bearing the KNX logo are programmed through a common interface using the vendor-independent ETS software.
|
https://en.wikipedia.org/wiki/KNX
|
Sparse matrix–vector multiplication(SpMV) of the formy=Axis a widely usedcomputational kernelexisting in many scientific applications. The input matrixAissparse. The input vectorxand the output vectoryare dense. In the case of a repeatedy=Axoperation involving the same input matrixAbut possibly changing numerical values of its elements,Acan be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.[1]
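As a concrete illustration, here is a minimal sequential kernel in C using the common compressed sparse row (CSR) storage format (the CSR layout is an assumption for illustration; the article does not prescribe a storage scheme):

#include <stdio.h>

/* y = A*x with A stored in compressed sparse row (CSR) form:
   row_ptr[i]..row_ptr[i+1]-1 index the nonzeros of row i,
   col_idx[] holds their column positions, val[] their values. */
void spmv_csr(int n, const int row_ptr[], const int col_idx[],
              const double val[], const double x[], double y[])
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 sparse matrix [[2 0 0],[0 0 3],[1 0 4]] times x = (1,1,1) */
    int row_ptr[] = {0, 1, 2, 4};
    int col_idx[] = {0, 2, 0, 2};
    double val[]  = {2.0, 3.0, 1.0, 4.0};
    double x[] = {1.0, 1.0, 1.0}, y[3];
    spmv_csr(3, row_ptr, col_idx, val, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]);  /* prints 2 3 5 */
    return 0;
}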
|
https://en.wikipedia.org/wiki/Sparse_matrix%E2%80%93vector_multiplication
|
Apersonal data manager(PDM) is a portable hardware tool enabling secure storage and easy access to user data.[1]It can also be an application located on a portablesmart deviceor PC, enabling novice end-users to directly define, classify, and manipulate a universe of information objects.[1]Usually PDMs includepassword managementsoftware,web-browserfavorites andcryptographic software.
Advanced PDM can also store settings forVPNandTerminal Services, address books, and other features. PDM can also store and launch several portable software applications.
Companies such as Salmon Technologies and theirSalmonPDMapplication have been innovative in creating personalized directory structures to aid/prompt individuals where to store key typical pieces of information, such as legal documents, education/schooling information, medical information, property/vehicle bills, service contracts, and more. The process of creating directory structures that map to individual/family unit types, such as Child, Adult, Couple, Family with Children/Dependents is referred to as Personal Directory Modeling.
TheDatabox Projectis academia-based research into developing "an open-source personal networked device, augmented by cloud-hosted services, that collates, curates, and mediates access to an individual’s personal data by verified and audited third party applications and services."[2]
|
https://en.wikipedia.org/wiki/Personal_data_manager
|
An anti-keylogger (or anti–keystroke logger) is a type of software specifically designed for the detection of keystroke logger software; often, such software will also incorporate the ability to delete or at least immobilize hidden keystroke logger software on a computer. In comparison to most anti-virus or anti-spyware software, the primary difference is that an anti-keylogger does not make a distinction between a legitimate keystroke-logging program and an illegitimate one (such as malware); all keystroke-logging programs are flagged and optionally removed, whether they appear to be legitimate or not. Anti-keyloggers are effective against malicious users because they can detect keyloggers and remove them from the system.[1]
Keyloggers are sometimes part of malware packages downloaded onto computers without the owners' knowledge. Detecting the presence of a keylogger on a computer can be difficult. So-called anti-keylogging programs have been developed to thwart keylogging systems, and these are often effective when used properly.
Anti-keyloggers are used both by large organizations and by individuals in order to scan for and remove (or in some cases simply immobilize) keystroke-logging software on a computer. Software developers generally advise that anti-keylogging scans be run on a regular basis in order to reduce the amount of time during which a keylogger may record keystrokes. For example, if a system is scanned once every three days, there is a maximum of only three days during which a keylogger could be hidden on the system and recording keystrokes.
Public computersare extremely susceptible to the installation ofkeystroke loggingsoftware and hardware, and there are documented instances of this occurring.[2]Public computers are particularly susceptible to keyloggers because any number of people can gain access to the machine and install both ahardware keyloggerand a software keylogger, either or both of which can be secretly installed in a matter of minutes.[3]Anti-keyloggers are often used on a daily basis to ensure that public computers are not infected with keyloggers, and are safe for public use.
Keyloggers have been prevalent in the online gaming industry, being used to secretly record a gamer's access credentials, user name and password, when logging into an account; this information is sent back to the hacker. The hacker can sign on later to the account and change the password to the account, thus stealing it.
World of Warcrafthas been of particular importance to game hackers and has been the target of numerous keylogging viruses. Anti-keyloggers are used by manyWorld of Warcraftand other gaming community members in order to try to keep their gaming accounts secure.
Financial institutionshave become the target of keyloggers,[4]particularly those institutions which do not use advanced security features such asPINpads or screen keyboards.[5]Anti-keyloggers are used to run regular scans of any computer on which banking or client information is accessed, protecting passwords, banking information, and credit card numbers from identity thieves.
The most common use of an anti-keylogger is by individuals wishing to protect their privacy while using their computer; uses range from protecting financial information used in online banking, any passwords, personal communication, and virtually any other information which may be typed into a computer. Keyloggers are often installed by people known by the computer's owner, and many times have been installed by an ex-partner hoping to spy on their ex-partner's activities, particularly chat.[6]
This type of software has a signature base, that is, strategic information that helps to uniquely identify a keylogger; the list contains as many known keyloggers as possible. Some vendors make an effort to keep an up-to-date listing available for download by customers. Each time a 'System Scan' is run, this software compares the contents of the hard disk drive, item by item, against the list, looking for any matches.
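For illustration, the core of such a comparison is a simple pattern search; a conceptual C sketch follows (the signature strings are invented placeholders, not signatures of real keyloggers):

#include <stdio.h>
#include <string.h>

/* Conceptual sketch of signature-based scanning: report whether a buffer
   (e.g. a file read from disk) contains any byte pattern from a signature
   list. The signatures below are made-up placeholders. */
static const char *signatures[] = { "HOOK_KEYBD_LL", "GetAsyncKeyState_loop" };

int matches_signature(const unsigned char *buf, size_t len)
{
    for (size_t s = 0; s < sizeof signatures / sizeof signatures[0]; s++) {
        size_t slen = strlen(signatures[s]);
        if (slen > len) continue;
        for (size_t i = 0; i + slen <= len; i++)
            if (memcmp(buf + i, signatures[s], slen) == 0)
                return 1;   /* a known pattern was found */
    }
    return 0;
}

int main(void)
{
    unsigned char sample[] = "...GetAsyncKeyState_loop...";
    printf("%s\n", matches_signature(sample, sizeof sample - 1)
                       ? "signature match" : "clean");
    return 0;
}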
This type of software is rather widespread, but it has its own drawbacks. The biggest drawback of signature-based anti-keyloggers is that one is only protected from keyloggers on the signature-base list, remaining vulnerable to unknown or unrecognized keyloggers. A criminal can download one of many well-known keyloggers, change it just enough, and the anti-keylogger won't recognize it.
This software doesn't use signature bases; instead, it uses a checklist of known features, attributes, and methods that keyloggers are known to use.
It analyzes the working methods of all the modules in a PC, blocking the activity of any module that behaves like a keylogger. Though this method gives better keylogging protection than signature-based anti-keyloggers, it has its own drawbacks. One of them is that this type of software also blocks non-keyloggers. Several 'non-harmful' software modules, either part of the operating system or part of legitimate apps, use processes which keyloggers also use, which can trigger a false positive. Usually all the non-signature-based anti-keyloggers give the user the option to unblock selected modules, but this can cause difficulties for inexperienced users who are unable to discern good modules from bad modules when manually choosing to block or unblock.
|
https://en.wikipedia.org/wiki/Anti-keylogger
|
Noiselets are functions which give the worst-case behavior for the Haar wavelet packet analysis. In other words, noiselets are totally incompressible by the Haar wavelet packet analysis.[1] Like the canonical and Fourier bases, which are mutually incoherent, noiselets are perfectly incoherent with the Haar basis. In addition, they have a fast algorithm for implementation, making them useful as a sampling basis for signals that are sparse in the Haar domain.
The mother basis function χ(x){\displaystyle \chi (x)} is defined as:
χ(x)={1x∈[0,1)0otherwise{\displaystyle \chi (x)={\begin{cases}1&x\in [0,1)\\0&{\text{otherwise}}\end{cases}}}
The family of noiselets is constructed recursively as follows:
f1(x)=χ(x)f2n(x)=(1−i)fn(2x)+(1+i)fn(2x−1)f2n+1(x)=(1+i)fn(2x)+(1−i)fn(2x−1){\displaystyle {\begin{alignedat}{2}f_{1}(x)&=\chi (x)\\f_{2n}(x)&=(1-i)f_{n}(2x)+(1+i)f_{n}(2x-1)\\f_{2n+1}(x)&=(1+i)f_{n}(2x)+(1-i)f_{n}(2x-1)\end{alignedat}}}
Source:[2]
Noiselets can be extended and discretized. The extended function fm(k,l){\displaystyle f_{m}(k,l)} is defined as follows:
fm(1,l)={1l=0,…,2m−10otherwisefm(2k,l)=(1−i)fm(k,2l)+(1+i)fm(k,2l−2m)fm(2k+1,l)=(1+i)fm(k,2l)+(1−i)fm(k,2l−2m){\displaystyle {\begin{alignedat}{2}f_{m}(1,l)&={\begin{cases}1&l=0,\dots ,2^{m}-1\\0&{\text{otherwise}}\end{cases}}\\f_{m}(2k,l)&=(1-i)f_{m}(k,2l)+(1+i)f_{m}(k,2l-2^{m})\\f_{m}(2k+1,l)&=(1+i)f_{m}(k,2l)+(1-i)f_{m}(k,2l-2^{m})\\\end{alignedat}}}
Using the extended noiselet fm(k,l){\displaystyle f_{m}(k,l)}, we can generate the n×n{\displaystyle n\times n} noiselet matrix Nn{\displaystyle N_{n}}, where n is a power of two, n=2q{\displaystyle n=2^{q}}:
N1=[1]N2n=12[1−i1+i1+i1−i]⊗Nn{\displaystyle {\begin{alignedat}{2}N_{1}&=[1]\\N_{2n}&={\frac {1}{2}}{\begin{bmatrix}1-i&1+i\\1+i&1-i\end{bmatrix}}\otimes N_{n}\\\end{alignedat}}}
Here⊗{\displaystyle \otimes }denotes the Kronecker product.
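A compact C sketch of this recursion (using C99 complex arithmetic; the row-major matrix storage is an implementation choice, not part of the definition):

#include <stdio.h>
#include <stdlib.h>
#include <complex.h>

/* Builds the n x n noiselet matrix N_n (n a power of two) from the recursion
   N_1 = [1],  N_2m = (1/2) [[1-i, 1+i],[1+i, 1-i]] (Kronecker product) N_m. */
double complex *noiselet_matrix(int n)
{
    int m = 1;
    double complex *cur = malloc(sizeof *cur);
    cur[0] = 1.0;
    const double complex a = (1.0 - I) / 2.0, b = (1.0 + I) / 2.0;
    while (m < n) {
        int M = 2 * m;
        double complex *next = malloc(sizeof *next * M * M);
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++) {
                double complex v = cur[i * m + j];
                next[i * M + j]             = a * v;   /* top-left block     */
                next[i * M + (j + m)]       = b * v;   /* top-right block    */
                next[(i + m) * M + j]       = b * v;   /* bottom-left block  */
                next[(i + m) * M + (j + m)] = a * v;   /* bottom-right block */
            }
        free(cur);
        cur = next;
        m = M;
    }
    return cur;
}

int main(void)
{
    int n = 4;
    double complex *N = noiselet_matrix(n);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            printf("(% .2f%+.2fi) ", creal(N[i * n + j]), cimag(N[i * n + j]));
        printf("\n");
    }
    free(N);
    return 0;
}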
Supposing 2m>n{\displaystyle 2^{m}>n}, we find that Nn(k,l){\displaystyle N_{n}(k,l)} is equal to fm(n+k,2mnl){\displaystyle f_{m}(n+k,{\frac {2^{m}}{n}}l)}.
The elements of the noiselet matrices take discrete values from one of two four-element sets:
nNn(j,k)∈{1,−1,i,−i}for evenq2nNn(j,k)∈{1+i,1−i,−1+i,−1−i}for oddq{\displaystyle {\begin{alignedat}{3}{\sqrt {n}}N_{n}(j,k)&\in \{1,-1,i,-i\}&{\text{for even }}q\\{\sqrt {2n}}N_{n}(j,k)&\in \{1+i,1-i,-1+i,-1-i\}&{\text{for odd }}q\\\end{alignedat}}}
2D noiselet transforms are obtained through the Kronecker product of 1D noiselet transforms:
{\displaystyle N_{n\times k}^{2D}=N_{k}\otimes N_{n}}
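A small NumPy sketch of the matrix recursion and of the 2D construction is given below; it builds N_n by repeated Kronecker products and checks that the result is unitary. The function name and the sanity checks are illustrative assumptions, not a standard API.

```python
import numpy as np

def noiselet_matrix(q):
    """Return the 2**q x 2**q noiselet matrix N_n via the Kronecker recursion."""
    block = 0.5 * np.array([[1 - 1j, 1 + 1j],
                            [1 + 1j, 1 - 1j]])
    N = np.array([[1.0 + 0.0j]])          # N_1 = [1]
    for _ in range(q):
        N = np.kron(block, N)             # N_{2n} = (1/2) B (kron) N_n
    return N

N8 = noiselet_matrix(3)                   # 8 x 8
assert np.allclose(N8 @ N8.conj().T, np.eye(8))   # the transform is unitary

# 2D transform matrix for an 8 x 4 image, as in the text: N_k (kron) N_n
N2d = np.kron(noiselet_matrix(2), N8)
print(N2d.shape)                          # (32, 32)
```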
Noiselets have several properties that make them well suited for applications.
The complementarity of wavelets and noiselets means that noiselets can be used in compressed sensing to reconstruct a signal (such as an image) which has a compact representation in wavelets.[3] MRI data can be acquired in the noiselet domain, and, subsequently, images can be reconstructed from undersampled data using compressive-sensing reconstruction.[4]
Here are a few applications in which noiselets have been implemented:
Noiselet encoding is a technique used in MRI to acquire images with reduced acquisition time. In MRI, the imaging process typically involves encoding spatial information using gradients. Traditional MRI acquisition relies on Cartesian encoding,[5] where the spatial information is sampled on a Cartesian grid. However, this methodology can be time-consuming, especially for high-resolution or dynamic imaging.
Noiselet encoding, by contrast, is a form of compressive sensing: it exploits the sparsity of images to acquire them more efficiently. In compressive sensing, the idea is to acquire fewer samples than dictated by the Nyquist-Shannon sampling theorem, under the assumption that the underlying signal or image is sparse in some domain. Briefly, noiselet encoding in MRI works as follows:
Noiselet encoding uses a noiselet transform matrix whose coefficients effectively disperse the signal across both scale and time. Consequently, each subset of these transform coefficients captures specific information from the original signal. When these subsets are used independently with zero padding, each of them can be employed to reconstruct the original signal at a reduced resolution. Because noiselet encoding does not sample all of the spatial frequency components, the undersampling allows the image to be reconstructed from fewer measurements; in other words, imaging is more efficient without sacrificing image quality significantly.
Single-pixel imaging is a form of imaging where a single detector is used to measure light levels after the sample has been illuminated with patterns, achieving efficient and compressive measurements. Noiselets are used to increase computational efficiency by following the principles of compressive sensing. The following is an overview of how noiselets are applied to single-pixel imaging:
The noiselet transform matrix is applied to the structured illumination patterns and spreads the signal information across the measurement space. The structured patterns lead to a sparse representation of the signal information. This allows the image to be reconstructed from a reduced set of measurements while still capturing the essential information needed to produce an image of good quality compared with the original. In short, noiselets bring the benefits of compressive sensing, namely fewer measurements and shorter acquisition, to single-pixel imaging.
|
https://en.wikipedia.org/wiki/Noiselet
|
The Global Consciousness Project (GCP, also called the EGG Project) is a parapsychology experiment begun in 1998 as an attempt to detect possible interactions of "global consciousness" with physical systems. The project monitors a geographically distributed network of hardware random number generators in a bid to identify anomalous outputs that correlate with widespread emotional responses to sets of world events, or periods of focused attention by large numbers of people. The GCP is privately funded through the Institute of Noetic Sciences[1] and describes itself as an international collaboration of about 100 research scientists and engineers.
Skeptics such as Robert T. Carroll, Claus Larsen, and others have questioned the methodology of the Global Consciousness Project, particularly how the data are selected and interpreted,[2][3] saying the data anomalies reported by the project are the result of "pattern matching" and selection bias which ultimately fail to support a belief in psi or global consciousness.[4] In analyzing the data for 11 September 2001, May et al. concluded that the statistically significant result given by the published GCP hypothesis was fortuitous, and found that, as far as this particular event was concerned, an alternative method of analysis gave only chance deviations throughout.[5]: 2
Roger D. Nelsondeveloped the project as an extrapolation of two decades of experiments from the controversialPrinceton Engineering Anomalies Research Lab(PEAR).[6]
Nelson began usingrandom event generator(REG) technology in the field to study effects of special states ofgroup consciousness.[7]
In an extension of the laboratory research utilizinghardware Random Event Generators(REG)[8]called FieldREG, investigators examined the outputs of REGs in the field before, during and after highly focused or coherent group events. The group events studied included psychotherapy sessions, theater presentations, religious rituals, sports competitions such as theFootball World Cup, and television broadcasts such as theAcademy Awards.[9]
FieldREG was extended to global dimensions in studies looking at data from 12 independent REGs in the US and Europe during a web-promoted "Gaiamind Meditation" in January 1997, and then again in September 1997 after thedeath of Diana, Princess of Wales. The project claimed the results suggested it would be worthwhile to build a permanent network of continuously-running REGs.[10][non-primary source needed]This became the EGG project or Global Consciousness Project.
Comparing the GCP to PEAR, Nelson, referring to the "field" studies with REGs done by PEAR, said the GCP used "exactly the same procedure... applied on a broader scale."[11][non-primary source needed]
The GCP's methodology is based on the hypothesis that events which elicit widespread emotion or draw the simultaneous attention of large numbers of people may affect the output of hardware random number generators in a statistically significant way. The GCP maintains a network of hardware random number generators which are interfaced to computers at 70 locations around the world. Custom software reads the output of the random number generators and records a trial (sum of 200 bits) once every second. The data are sent to a server in Princeton, creating a database of synchronized parallel sequences of random numbers. The GCP is run as a replication experiment, essentially combining the results of many distinct tests of the hypothesis. The hypothesis is tested by calculating the extent of data fluctuations at the time of events. The procedure is specified by a three-step experimental protocol. In the first step, the event duration and the calculation algorithm are pre-specified and entered into a formal registry.[12][non-primary source needed] In the second step, the event data are extracted from the database and a Z score, which indicates the degree of deviation from the null hypothesis, is calculated from the pre-specified algorithm. In the third step, the event Z score is combined with the Z scores from previous events to yield an overall result for the experiment.
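The protocol's final step, combining per-event Z scores into one cumulative result, can be illustrated with a Stouffer-style combination of independent Z scores; whether this matches the GCP's exact algorithm is not stated here, so the snippet is only a sketch of the general idea.

```python
import math

def combined_z(z_scores):
    """Combine independent standard-normal Z scores into one overall Z (Stouffer's method)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Three hypothetical event Z scores that each deviate mildly in the same direction.
print(round(combined_z([1.2, 0.8, 1.5]), 2))   # about 2.02
```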
The remote devices have been dubbedPrinceton Eggs, a reference to the coinageelectrogaiagram(EGG), aportmanteauofelectroencephalogramandGaia.[13][non-primary source needed]Supporters and skeptics have referred to the aim of the GCP as being analogous to detecting "a great disturbance inthe Force."[2][14][15]
The GCP has suggested that changes in the level of randomness may have occurred during theSeptember 11, 2001 attackswhen the planes first impacted, as well as in the two days following the attacks.[16][non-primary source needed]
Independent scientists Edwin May and James Spottiswoode conducted an analysis of the data around theSeptember 11 attacksand concluded there was no statistically significant change in the randomness of the GCP data during the attacks and the apparent significant deviation reported by Nelson and Radin existed only in their chosen time window.[5]Spikes and fluctuations are to be expected in any random distribution of data, and there is no set time frame for how close a spike has to be to a given event for the GCP to say they have found a correlation.[5]Wolcotte Smith said "A couple of additional statistical adjustments would have to be made to determine if there really was a spike in the numbers," referencing the data related to September 11, 2001.[17]Similarly, Jeffrey D. Scargle believes unless bothBayesianand classicalp-valueanalysis agree and both show the same anomalous effects, the kind of result GCP proposes will not be generally accepted.[18]
In 2003, aNew York Timesarticle concluded "All things considered at this point, the stock market seems a more reliable gauge of the national—if not the global—emotional resonance."[19]
In 2007,The Agereported that "[Nelson] concedes the data, so far, is not solid enough for global consciousness to be said to exist at all. It is not possible, for example, to look at the data and predict with any accuracy what (if anything) the eggs may be responding to."[20]
Robert Matthewssaid that while it was "the most sophisticated attempt yet" to prove psychokinesis existed, the unreliability of significant events to cause statistically significant spikes meant that "the only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative".[21]
Peter Bancel reviews the data in a 2017 article and "finds that the data do not support the global consciousness proposal" and rather "All of the tests favor the interpretation of a goal-oriented effect."[22]
Roger D. Nelsonis an American parapsychologist and researcher and the director of the GCP.[23]From 1980 to 2002, he was Coordinator of Research at thePrinceton Engineering Anomalies Research(PEAR) laboratory at Princeton University.[24]His professional focus was the study ofconsciousnessandintentionand the role of the mind in the physical world. His work integratesscienceandspirituality[citation needed], including research that is directly focused on numinous communal experiences.[25]
Nelson's professional degrees are in experimentalcognitive psychology.[25]Until his retirement in 2002, he served as the coordinator of experimental work in thePrinceton Engineering Anomalies Research Lab(PEAR), directed byRobert Jahnin the department ofMechanical and Aerospace Engineering, School of Engineering/Applied Science,Princeton University.[26]
|
https://en.wikipedia.org/wiki/Global_Consciousness_Project
|
Windows App SDK (formerly known as Project Reunion)[3] is a software development kit (SDK) from Microsoft that provides a unified set of APIs and components that can be used to develop desktop applications for both Windows 11 and Windows 10 version 1809 and later. The purpose of this project is to offer a decoupled implementation of capabilities which were previously tightly coupled to the UWP app model.[4] The Windows App SDK gives native Win32 (USER32/GDI32) and .NET (WPF/WinForms) developers alike a path forward to enhance their apps with modern features.[4]
The Windows App SDK is not, however, intended to replace the Windows SDK.[4] By exposing a common application programming interface (API), primarily through the Windows Runtime (WinRT) via generated WinMD metadata, the tradeoffs which once characterized either app model are largely eliminated. NuGet packages for version 1.4 were released in August 2023 after approximately four months of development.[5]
While Microsoft has developed a number of new features, some of the features listed below are abstractions of functionality provided by existing APIs.[4]
Most of the investment[6]into the decoupled UI stack[7]has gone towards bug fixes, improvements to the debugging experience, and simplifying the window management capabilities made possible by switching from CoreWindow. An API abstracting USER32/GDI32 primitives known asAppWindowwas introduced to expose a unified set of windowing capabilities[8]and enable support for custom window controls.
A replacement for the UWP WebView control was announced early on.[9]This is because it was based on anunsupported browser engine.[10]A newChromium-based control, namedWebView2, was developed and can be used from WinUI as well as other supported app types.
WhileMSIXis included in the Windows App SDK and considered to be the recommended application packaging format,[11][12]a design goal was to allow for unpackaged apps. These apps can be deployed as self-contained or framework-dependent. Support for dynamic loading of app dependencies is included for both packaged and unpackaged apps.[13]
DWriteCoreis being developed as a decoupled and device-independent solution for high-quality text rendering.[14]Win2Dhas also been made available to WinUI 3 apps.[15]
MRT Coreallows for management of appresourcesfor purposes such as localization. It is a decoupled version of the resource management system from UWP.[16]
With the stable releases delivered after its initial launch, Windows App SDK now supports several app lifecycle features which previously required a considerable amount of effort for developers to implement in Win32 applications. These features includepower managementnotifications, rich activation, multiple instances, and programmatic app restart.[17]
Support forpush notificationswas initially implemented as a limited-access, preview feature.[18]However, the APIs for it have since been stabilized and push notifications can be delivered to app users. Official documentation states that access to the feature can be revoked by Microsoft at their discretion.[18][19]Additionally, apps can now easily display local app notifications without the need to create anXMLpayload.[20]
Third-party integration with the Windows Widgets system in Windows 11 has been included as part of the stable release channel.[21]Developers can design custom widgets for their app using adaptive cards[22]and surface them on the widgets board.[23]
|
https://en.wikipedia.org/wiki/Windows_App_SDK
|
Inmathematics,modular arithmeticis a system ofarithmeticoperations forintegers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called themodulus. The modern approach to modular arithmetic was developed byCarl Friedrich Gaussin his bookDisquisitiones Arithmeticae, published in 1801.
A familiar example of modular arithmetic is the hour hand on a12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 iscongruentto 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12).
Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero, thus 12 ≡ 0 (mod 12).
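The clock arithmetic above maps directly onto the modulo operator available in most programming languages; the snippet below (Python, chosen only for illustration) reproduces the three facts stated in the text, representing 12 o'clock as 0.

```python
# 7 o'clock plus 8 hours reads as 3 on a 12-hour clock face
print((7 + 8) % 12)        # 3
# waiting 2 * 8 = 16 hours from 12 o'clock (represented as 0) lands on 4
print((0 + 2 * 8) % 12)    # 4
# a full 12-hour wait changes nothing: 12 acts like 0
print(12 % 12)             # 0
```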
Given an integer m ≥ 1, called a modulus, two integers a and b are said to be congruent modulo m, if m is a divisor of their difference; that is, if there is an integer k such that a − b = k m.
Congruence modulo m is a congruence relation, meaning that it is an equivalence relation that is compatible with addition, subtraction, and multiplication. Congruence modulo m is denoted by a ≡ b (mod m).
The parentheses mean that(modm)applies to the entire equation, not just to the right-hand side (here,b).
This notation is not to be confused with the notationbmodm(without parentheses), which refers to the remainder ofbwhen divided bym, known as themodulooperation: that is,bmodmdenotes the unique integerrsuch that0 ≤r<mandr≡b(modm).
The congruence relation may be rewritten as a = k m + b,
explicitly showing its relationship with Euclidean division. However, the b here need not be the remainder in the division of a by m. Rather, a ≡ b (mod m) asserts that a and b have the same remainder when divided by m. That is, a = p m + r and b = q m + r,
where0 ≤r<mis the common remainder. We recover the previous relation (a−b=k m) by subtracting these two expressions and settingk=p−q.
Because the congruence modulomis defined by thedivisibilitybymand because−1is aunitin the ring of integers, a number is divisible by−mexactly if it is divisible bym.
This means that every non-zero integermmay be taken as modulus.
In modulus 12, one can assert that 38 ≡ 14 (mod 12),
because the difference is38 − 14 = 24 = 2 × 12, a multiple of12. Equivalently,38and14have the same remainder2when divided by12.
The definition of congruence also applies to negative values. For example, 2 ≡ −3 (mod 5), −8 ≡ 7 (mod 5), and −3 ≡ −8 (mod 5).
The congruence relation satisfies all the conditions of an equivalence relation: reflexivity (a ≡ a (mod m)), symmetry (a ≡ b (mod m) implies b ≡ a (mod m)), and transitivity (a ≡ b (mod m) and b ≡ c (mod m) together imply a ≡ c (mod m)).
If a1 ≡ b1 (mod m) and a2 ≡ b2 (mod m), or if a ≡ b (mod m), then:[1] a + k ≡ b + k (mod m) for any integer k; k a ≡ k b (mod m) for any integer k; a1 + a2 ≡ b1 + b2 (mod m); a1 − a2 ≡ b1 − b2 (mod m); a1 a2 ≡ b1 b2 (mod m); and a^k ≡ b^k (mod m) for any non-negative integer k.
If a ≡ b (mod m), then it is generally false that k^a ≡ k^b (mod m). However, the following is true: if c ≡ d (mod φ(m)), where φ is Euler's totient function, then a^c ≡ a^d (mod m), provided that a is coprime with m.
For cancellation of common terms, we have the following rules: if a + k ≡ b + k (mod m) for some integer k, then a ≡ b (mod m); if k a ≡ k b (mod m) and k is coprime with m, then a ≡ b (mod m); and if k a ≡ k b (mod k m) with k ≠ 0, then a ≡ b (mod m).
The last rule can be used to move modular arithmetic into division. Ifbdividesa, then(a/b) modm= (amodb m) /b.
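The division rule is easy to check numerically; a small sketch with arbitrary example values follows.

```python
a, b, m = 24, 4, 5            # b divides a
lhs = (a // b) % m            # (a/b) mod m
rhs = (a % (b * m)) // b      # (a mod bm) / b
print(lhs, rhs)               # 1 1  -> the two sides agree
```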
The modular multiplicative inverse is defined by the following rules: an integer a has an inverse a^{-1} satisfying a a^{-1} ≡ 1 (mod m) if and only if a is coprime with m; when it exists, this inverse is unique modulo m; and if a ≡ b (mod m) and a^{-1} exists, then a^{-1} ≡ b^{-1} (mod m).
The multiplicative inversex≡a−1(modm)may be efficiently computed by solvingBézout's equationa x+m y= 1forx,y, by using theExtended Euclidean algorithm.
In particular, ifpis a prime number, thenais coprime withpfor everyasuch that0 <a<p; thus a multiplicative inverse exists for allathat is not congruent to zero modulop.
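A minimal sketch of computing the inverse via the extended Euclidean algorithm is shown below; Python 3.8 and later also expose the same computation through the built-in pow.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Multiplicative inverse of a modulo m; raises if gcd(a, m) != 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} is not invertible modulo {m}")
    return x % m

print(mod_inverse(3, 7))   # 5, since 3 * 5 = 15 ≡ 1 (mod 7)
print(pow(3, -1, 7))       # same result with the built-in (Python 3.8+)
```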
Some of the more advanced properties of congruence relations include Fermat's little theorem, Euler's theorem, Wilson's theorem, and the Chinese remainder theorem.
The congruence relation is an equivalence relation. The equivalence class modulo m of an integer a is the set of all integers of the form a + k m, where k is any integer. It is called the congruence class or residue class of a modulo m, and may be denoted (a mod m), or as ā or [a] when the modulus m is known from the context.
Each residue class modulomcontains exactly one integer in the range0,...,|m|−1{\displaystyle 0,...,|m|-1}. Thus, these|m|{\displaystyle |m|}integers arerepresentativesof their respective residue classes.
It is generally easier to work with integers than with sets of integers; that is, one usually works with the representatives rather than with their residue classes.
Consequently,(amodm)denotes generally the unique integerrsuch that0 ≤r<mandr≡a(modm); it is called theresidueofamodulom.
In particular,(amodm) = (bmodm)is equivalent toa≡b(modm), and this explains why "=" is often used instead of "≡" in this context.
Each residue class modulommay be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2](since this is the proper remainder which results from division). Any two members of different residue classes modulomare incongruent modulom. Furthermore, every integer belongs to one and only one residue class modulom.[3]
The set of integers{0, 1, 2, ...,m− 1}is called theleast residue system modulom. Any set ofmintegers, no two of which are congruent modulom, is called acomplete residue system modulom.
The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo m.[4] For example, the least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 include {1, 2, 3, 4} and {−2, −1, 0, 1}.
Some sets that are not complete residue systems modulo 4 are {−5, 0, 6, 22}, because 6 and 22 are congruent modulo 4, and {5, 15}, because it contains only two elements.
Given theEuler's totient functionφ(m), any set ofφ(m)integers that arerelatively primetomand mutually incongruent under modulusmis called areduced residue system modulom.[5]The set{5, 15}from above, for example, is an instance of a reduced residue system modulo 4.
Covering systems represent yet another type of residue system that may contain residues with varying moduli.
In the context of this paragraph, the modulusmis almost always taken as positive.
The set of allcongruence classesmodulomis aringcalled thering of integers modulom, and is denotedZ/mZ{\textstyle \mathbb {Z} /m\mathbb {Z} },Z/m{\displaystyle \mathbb {Z} /m}, orZm{\displaystyle \mathbb {Z} _{m}}.[6]The ringZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is fundamental to various branches of mathematics (see§ Applicationsbelow).
(In some parts ofnumber theorythe notationZm{\displaystyle \mathbb {Z} _{m}}is avoided because it can be confused with the set ofm-adic integers.)
For m > 0 one has Z/mZ = {[0], [1], ..., [m − 1]}.
When m = 1, Z/mZ is the zero ring; when m = 0, Z/mZ is not an empty set; rather, it is isomorphic to Z, since the congruence class of a modulo 0 is the singleton {a}.
Addition, subtraction, and multiplication are defined on Z/mZ by the following rules: [a] + [b] = [a + b], [a] − [b] = [a − b], and [a] · [b] = [a · b].
The properties given before imply that, with these operations, Z/mZ is a commutative ring. For example, in the ring Z/24Z, one has [12] + [21] = [9],
as in the arithmetic for the 24-hour clock.
The notationZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is used because this ring is thequotient ringofZ{\displaystyle \mathbb {Z} }by theidealmZ{\displaystyle m\mathbb {Z} }, the set formed by all multiples ofm, i.e., all numbersk mwithk∈Z.{\displaystyle k\in \mathbb {Z} .}
Under addition,Z/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }is acyclic group. All finite cyclic groups are isomorphic withZ/mZ{\displaystyle \mathbb {Z} /m\mathbb {Z} }for somem.[7]
The ring of integers modulo m is a field, i.e., every nonzero element has a multiplicative inverse, if and only if m is prime. If m = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(m) = F_m with m elements; it is not isomorphic to Z/mZ, which fails to be a field because it has zero-divisors.
Ifm> 1,(Z/mZ)×{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }}denotes themultiplicative group of the integers modulomthat are invertible. It consists of the congruence classesam, whereais coprimetom; these are precisely the classes possessing a multiplicative inverse. They form anabelian groupunder multiplication; its order isφ(m), whereφisEuler's totient function.
In pure mathematics, modular arithmetic is one of the foundations ofnumber theory, touching on almost every aspect of its study, and it is also used extensively ingroup theory,ring theory,knot theory, andabstract algebra. In applied mathematics, it is used incomputer algebra,cryptography,computer science,chemistryand thevisualandmusicalarts.
A very practical application is to calculate checksums within serial number identifiers. For example,International Standard Book Number(ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise,International Bank Account Numbers(IBANs) use modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of theCAS registry number(a unique identifying number for each chemical compound) is acheck digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
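The CAS check-digit rule just described is short enough to implement directly; the sketch below weights the digits from right to left and reduces the sum modulo 10, and is checked against the CAS number for water, 7732-18-5.

```python
def cas_check_digit(cas_prefix):
    """Check digit of a CAS registry number, given its first two parts (e.g. '7732-18')."""
    digits = cas_prefix.replace("-", "")
    # last digit weighted 1, the digit before it 2, and so on
    return sum(int(d) * w for w, d in enumerate(reversed(digits), start=1)) % 10

print(cas_check_digit("7732-18"))   # 5, matching water's CAS number 7732-18-5
```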
In cryptography, modular arithmetic directly underpinspublic keysystems such asRSAandDiffie–Hellman, and providesfinite fieldswhich underlieelliptic curves, and is used in a variety ofsymmetric key algorithmsincludingAdvanced Encryption Standard(AES),International Data Encryption Algorithm(IDEA), andRC4. RSA and Diffie–Hellman usemodular exponentiation.
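Modular exponentiation is available as a single built-in in many languages; the toy Diffie-Hellman exchange below uses Python's three-argument pow with deliberately tiny parameters (real deployments use very large primes).

```python
p, g = 23, 5            # toy public parameters: a small prime and a generator
a, b = 6, 15            # the two parties' private keys

A = pow(g, a, p)        # value sent by the first party
B = pow(g, b, p)        # value sent by the second party

# Both parties derive the same shared secret: g^(a*b) mod p
assert pow(B, a, p) == pow(A, b, p)
print(pow(B, a, p))     # 2
```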
In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used inpolynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations ofpolynomial greatest common divisor, exactlinear algebraandGröbner basisalgorithms over the integers and the rational numbers. As posted onFidonetin the 1980s and archived atRosetta Code, modular arithmetic was used to disproveEuler's sum of powers conjectureon aSinclair QLmicrocomputerusing just one-fourth of the integer precision used by aCDC 6600supercomputerto disprove it two decades earlier via abrute force search.[8]
In computer science, modular arithmetic is often applied inbitwise operationsand other operations involving fixed-width, cyclicdata structures. The modulo operation, as implemented in manyprogramming languagesandcalculators, is an application of modular arithmetic that is often used in this context. The logical operatorXORsums 2 bits, modulo 2.
The use oflong divisionto turn a fraction into arepeating decimalin any base b is equivalent to modular multiplication of b modulo the denominator. For example, for decimal, b = 10.
In music, arithmetic modulo 12 is used in the consideration of the system oftwelve-tone equal temperament, whereoctaveandenharmonicequivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharpis considered the same as D-flat).
The method ofcasting out ninesoffers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9).
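Because 10 ≡ 1 (mod 9), the digit sum of a number is congruent to the number itself modulo 9, which is all that casting out nines uses; a brief sketch follows.

```python
def cast_out_nines(n):
    """Digit sum of n reduced modulo 9; equal to n % 9 because 10 ≡ 1 (mod 9)."""
    return sum(int(d) for d in str(n)) % 9

# quick check of the hand computation 1234 + 5678 = 6912
assert (cast_out_nines(1234) + cast_out_nines(5678)) % 9 == cast_out_nines(6912)
```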
Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular,Zeller's congruenceand theDoomsday algorithmmake heavy use of modulo-7 arithmetic.
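As an illustration of modulo-7 date arithmetic, the sketch below implements Zeller's congruence for the Gregorian calendar; the convention that h = 0 means Saturday follows the usual statement of the formula.

```python
def zeller_weekday(year, month, day):
    """Zeller's congruence (Gregorian): 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if month < 3:            # January and February count as months 13 and 14 of the previous year
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

print(zeller_weekday(2000, 1, 1))   # 0 -> Saturday
print(zeller_weekday(1776, 7, 4))   # 5 -> Thursday
```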
More generally, modular arithmetic also has application in disciplines such aslaw(e.g.,apportionment),economics(e.g.,game theory) and other areas of thesocial sciences, whereproportionaldivision and allocation of resources plays a central part of the analysis.
Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved inpolynomial timewith a form ofGaussian elimination, for details seelinear congruence theorem. Algorithms, such asMontgomery reduction, also exist to allow simple arithmetic operations, such as multiplication andexponentiation modulom, to be performed efficiently on large numbers.
Some operations, like finding adiscrete logarithmor aquadratic congruenceappear to be as hard asinteger factorizationand thus are a starting point forcryptographic algorithmsandencryption. These problems might beNP-intermediate.
Solving a system of non-linear modular arithmetic equations isNP-complete.[9]
|
https://en.wikipedia.org/wiki/Integers_modulo_n
|
Instatistics, thematrix variate Dirichlet distributionis a generalization of thematrix variate beta distributionand of theDirichlet distribution.
Suppose {\displaystyle U_{1},\ldots ,U_{r}} are {\displaystyle p\times p} positive definite matrices with {\displaystyle I_{p}-\sum _{i=1}^{r}U_{i}} also positive-definite, where {\displaystyle I_{p}} is the {\displaystyle p\times p} identity matrix. Then we say that the {\displaystyle U_{i}} have a matrix variate Dirichlet distribution, {\displaystyle \left(U_{1},\ldots ,U_{r}\right)\sim D_{p}\left(a_{1},\ldots ,a_{r};a_{r+1}\right)}, if their joint probability density function is {\displaystyle \left[\beta _{p}\left(a_{1},\ldots ,a_{r};a_{r+1}\right)\right]^{-1}\prod _{i=1}^{r}\det \left(U_{i}\right)^{a_{i}-(p+1)/2}\det \left(I_{p}-\sum _{i=1}^{r}U_{i}\right)^{a_{r+1}-(p+1)/2},}
whereai>(p−1)/2,i=1,…,r+1{\displaystyle a_{i}>(p-1)/2,i=1,\ldots ,r+1}andβp(⋯){\displaystyle \beta _{p}\left(\cdots \right)}is themultivariate beta function.
If we write {\displaystyle U_{r+1}=I_{p}-\sum _{i=1}^{r}U_{i}} then the PDF takes the simpler form {\displaystyle \left[\beta _{p}\left(a_{1},\ldots ,a_{r+1}\right)\right]^{-1}\prod _{i=1}^{r+1}\det \left(U_{i}\right)^{a_{i}-(p+1)/2},}
on the understanding that∑i=1r+1Ui=Ip{\displaystyle \sum _{i=1}^{r+1}U_{i}=I_{p}}.
Suppose {\displaystyle S_{i}\sim W_{p}\left(n_{i},\Sigma \right),i=1,\ldots ,r+1} are independently distributed Wishart {\displaystyle p\times p} positive definite matrices. Then, defining {\displaystyle U_{i}=S^{-1/2}S_{i}\left(S^{-1/2}\right)^{T}} (where {\displaystyle S=\sum _{i=1}^{r+1}S_{i}} is the sum of the matrices and {\displaystyle S^{1/2}\left(S^{1/2}\right)^{T}=S} is any reasonable factorization of {\displaystyle S}), we have {\displaystyle \left(U_{1},\ldots ,U_{r}\right)\sim D_{p}\left(n_{1}/2,\ldots ,n_{r}/2;n_{r+1}/2\right).}
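The Wishart construction above suggests a simple way to draw samples. The sketch below uses scipy.stats.wishart and a symmetric square root for S^{-1/2}; the identification a_i = n_i/2 and the function name are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import wishart

def sample_matrix_dirichlet(a, p):
    """Draw (U_1, ..., U_r) ~ D_p(a_1, ..., a_r; a_{r+1}) via the Wishart construction (n_i = 2 a_i)."""
    # each degrees-of-freedom value 2*a_i should be at least p for scipy's Wishart sampler
    S_parts = [wishart.rvs(df=2 * ai, scale=np.eye(p)) for ai in a]
    S = sum(S_parts)
    w, V = np.linalg.eigh(S)                  # symmetric square root of S^{-1}
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T
    U = [S_inv_half @ Si @ S_inv_half.T for Si in S_parts]
    return U[:-1]                             # U_{r+1} = I_p - sum of the others

U1, U2 = sample_matrix_dirichlet([3.0, 4.0, 5.0], p=2)
```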
If {\displaystyle \left(U_{1},\ldots ,U_{r}\right)\sim D_{p}\left(a_{1},\ldots ,a_{r+1}\right)}, and if {\displaystyle s\leq r}, then {\displaystyle \left(U_{1},\ldots ,U_{s}\right)\sim D_{p}\left(a_{1},\ldots ,a_{s};a_{s+1}+\cdots +a_{r+1}\right)}.
Also, with the same notation as above, the density of(Us+1,…,Ur)|(U1,…,Us){\displaystyle \left(U_{s+1},\ldots ,U_{r}\right)\left|\left(U_{1},\ldots ,U_{s}\right)\right.}is given by
where we writeUr+1=Ip−∑i=1rUi{\displaystyle U_{r+1}=I_{p}-\sum _{i=1}^{r}U_{i}}.
Suppose {\displaystyle \left(U_{1},\ldots ,U_{r}\right)\sim D_{p}\left(a_{1},\ldots ,a_{r+1}\right)} and suppose that {\displaystyle S_{1},\ldots ,S_{t}} is a partition of {\displaystyle \left[r+1\right]=\left\{1,\ldots ,r+1\right\}} (that is, {\displaystyle \cup _{i=1}^{t}S_{i}=\left[r+1\right]} and {\displaystyle S_{i}\cap S_{j}=\emptyset } if {\displaystyle i\neq j}). Then, writing {\displaystyle U_{(j)}=\sum _{i\in S_{j}}U_{i}} and {\displaystyle a_{(j)}=\sum _{i\in S_{j}}a_{i}} (with {\displaystyle U_{r+1}=I_{p}-\sum _{i=1}^{r}U_{i}}), we have {\displaystyle \left(U_{(1)},\ldots ,U_{(t-1)}\right)\sim D_{p}\left(a_{(1)},\ldots ,a_{(t-1)};a_{(t)}\right).}
Suppose {\displaystyle \left(U_{1},\ldots ,U_{r}\right)\sim D_{p}\left(a_{1},\ldots ,a_{r+1}\right)}. Define the block partition {\displaystyle U_{i}={\begin{bmatrix}U_{11(i)}&U_{12(i)}\\U_{21(i)}&U_{22(i)}\end{bmatrix}},}
where {\displaystyle U_{11(i)}} is {\displaystyle p_{1}\times p_{1}} and {\displaystyle U_{22(i)}} is {\displaystyle p_{2}\times p_{2}}. Writing the Schur complement {\displaystyle U_{22\cdot 1(i)}=U_{22(i)}-U_{21(i)}U_{11(i)}^{-1}U_{12(i)}} we have
and
A. K. Gupta and D. K. Nagar 1999. "Matrix variate distributions". Chapman and Hall.
|
https://en.wikipedia.org/wiki/Matrix_variate_Dirichlet_distribution
|
TheEuthyphro dilemmais found inPlato's dialogueEuthyphro, in whichSocratesasksEuthyphro, "Is thepious(τὸ ὅσιον) loved by thegodsbecause it is pious, or is it pious because it is loved by the gods?" (10a)
Although it was originally applied to the ancientGreek pantheon, the dilemma has implications for modernmonotheistic religions.Gottfried Leibnizasked whether the good and just "is good and just because God wills it or whether God wills it because it is good and just".[1]Ever since Plato's original discussion, this question has presented a problem for some theists, though others have thought it afalse dilemma, and it continues to be an object of theological and philosophical discussion today.
Socrates and Euthyphro discuss the nature of piety in Plato'sEuthyphro. Euthyphro proposes (6e) that the pious (τὸ ὅσιον) is the same thing as that which is loved by the gods (τὸ θεοφιλές), but Socrates finds a problem with this proposal: the gods may disagree among themselves (7e). Euthyphro then revises his definition, so that piety is only that which is loved by all of the gods unanimously (9e).
At this point thedilemmasurfaces. Socrates asks whether the gods love the pious because it is the pious, or whether the pious is pious only because it is loved by the gods (10a). Socrates and Euthyphro both contemplate the first option: surely the gods love the pious because it is the pious. But this means, Socrates argues, that we are forced to reject the second option: the fact that the gods love something cannot explain why the pious is the pious (10d). Socrates points out that if both options were true, they would yield a vicious circle, with the gods loving the pious because it is the pious, and the pious being the pious because the gods love it. And this, in turn, means Socrates argues, that the pious is not the same as the god-beloved, for what makes the pious the pious is not what makes the god-beloved the god-beloved. After all, what makes the god-beloved the god-beloved is that the gods love it, whereas what makes the pious the pious is something else (9d-11a). Thus Euthyphro's theory does not give us the verynatureof the pious, but at most aqualityof the pious (11ab).
The dilemma can be modified to apply to philosophical theism, where it is still the object of theological and philosophical discussion, largely within the Christian, Jewish, and Islamic traditions. AsGermanphilosopherandmathematicianGottfried Leibnizpresented this version of the dilemma: "It is generally agreed that whatever God wills is good and just. But there remains the question whether it is good and just because God wills it or whether God wills it because it is good and just; in other words, whether justice and goodness are arbitrary or whether they belong to the necessary and eternal truths about the nature of things."[2]Many philosophers and theologians have addressed the Euthyphro dilemma since the time of Plato, though not always with reference to the Platonic dialogue. According to scholarTerence Irwin, the issue and its connection with Plato was revived by Ralph Cudworth and Samuel Clarke in the 17th and 18th centuries.[3]More recently, it has received a great deal of attention from contemporary philosophers working inmetaethicsand thephilosophy of religion. Philosophers and theologians aiming to defend theism against the threat of the dilemma have developed a variety of responses.
The first horn of the dilemma (i.e. that which is right is commanded by Godbecause it is right) goes by a variety of names, includingintellectualism,rationalism,realism,naturalism, andobjectivism. Roughly, it is the view that there are independent moral standards: some actions are right or wrong in themselves, independent of God's commands. This is the view accepted by Socrates and Euthyphro in Plato's dialogue. TheMu'tazilahschool ofIslamic theologyalso defended the view (with, for example,Nazzammaintaining that God is powerless to engage in injustice or lying),[4]as did theIslamic philosopherAverroes.[5]Thomas Aquinasnever explicitly addresses the Euthyphro dilemma, but Aquinas scholars often put him on this side of the issue.[6][7]Aquinas draws a distinction between what is good or evil in itself and what is good or evil because of God's commands,[8]with unchangeable moral standards forming the bulk ofnatural law.[9]Thus he contends that not even God can change theTen Commandments(adding, however, that Godcanchange what individuals deserve in particular cases, in what might look like special dispensations to murder or stealing).[10]Among laterScholastics,Gabriel Vásquezis particularly clear-cut about obligations existing prior to anyone's will, even God's.[11][12]Modernnatural law theory sawGrotiusandLeibnizalso putting morality prior toGod's will, comparing moral truths to unchangeable mathematical truths, and engagingvoluntaristslikePufendorfin philosophical controversy.[13]Cambridge PlatonistslikeBenjamin WhichcoteandRalph Cudworthmounted seminal attacks on voluntarist theories, paving the way for the later rationalistmetaethicsofSamuel ClarkeandRichard Price;[14][15][16]what emerged was a view on which eternal moral standards, though dependent on God in some way, exist independently of God's will and prior to God's commands. Contemporaryphilosophers of religionwho embrace this horn of the Euthyphro dilemma includeRichard Swinburne[17][18]andT. J. Mawson[19](though see below for complications).
Contemporary philosophers Joshua Hoffman and Gary S. Rosenkrantz take the first horn of the dilemma, branding divine command theory a "subjective theory of value" that makes morality arbitrary.[30]They accept a theory of morality on which, "right and wrong, good and bad, are in a sense independent of whatanyonebelieves, wants, or prefers."[31]They do not address the problems mentioned above with the first horn, but do consider a related problem concerning God's omnipotence: namely, that it might be handicapped by his inability to bring about what is independently evil. To this they reply that God is omnipotent, even though there are states of affairs he cannot bring about: omnipotence is a matter of "maximal power", not an ability to bring about all possible states of affairs. And supposing that it is impossible for God not to exist, then since there cannot be more than one omnipotent being, it is therefore impossible for any being to have more power than God (e.g., a being who is omnipotent but notomnibenevolent). Thus God's omnipotence remains intact.[32]
Richard SwinburneandT. J. Mawsonhave a slightly more complicated view. They both take the first horn of the dilemma when it comes tonecessarymoral truths. But divine commands are not totally irrelevant, for God and his will can still effectcontingentmoral truths.[33][34][18][19]On the one hand, the most fundamental moral truths hold true regardless of whether God exists or what God has commanded: "Genocide and torturing children are wrong and would remain so whatever commands any person issued."[24]This is because, according to Swinburne, such truths are true as a matter oflogical necessity: like the laws of logic, one cannot deny them without contradiction.[35]This parallel offers a solution to the aforementioned problems of God's sovereignty, omnipotence, and freedom: namely, that these necessary truths of morality pose no more of a threat than the laws of logic.[36][37][38]On the other hand, there is still an important role for God's will. First, there are some divine commands that can directly create moral obligations: e.g., the command to worship on Sundays instead of on Tuesdays.[39]Notably, not even these commands, for which Swinburne and Mawson take the second horn of the dilemma, have ultimate, underived authority. Rather, they create obligations only because of God's role as creator and sustainer and indeed owner of the universe, together with the necessary moral truth that we owe some limited consideration to benefactors and owners.[40][41]Second, God can make anindirectmoral difference by deciding what sort of universe to create. For example, whether a public policy is morally good might indirectly depend on God's creative acts: the policy's goodness or badness might depend on its effects, and those effects would in turn depend on the sort of universe God has decided to create.[42][43]
The second horn of the dilemma (i.e. that which is right is rightbecause it is commanded by God) is sometimes known asdivine command theoryorvoluntarism. Roughly, it is the view that there are no moral standards other than God's will: without God's commands, nothing would be right or wrong. This view was partially defended byDuns Scotus, who argued that not allTen Commandmentsbelong to theNatural Lawin the strictest sense.[44]Scotus held that while our duties to God (the first three commandments, traditionally thought of as the First Tablet) areself-evident,true by definition, and unchangeable even by God, our duties to others (found on the second tablet) were arbitrarily willed by God and are within his power to revoke and replace (although, the third commandment, to honour the Sabbath and keep it holy, has a little of both, as we are absolutely obliged to render worship to God, but there is no obligation in natural law to do it on this day or that). Scotus does note, however that the last seven commandments "are highly consonant with [the natural law], though they do not follow necessarily from first practical principles that are known in virtue of their terms and are necessarily known by any intellect [that understands their terms. And it is certain that all the precepts of the second table belong to the natural law in this second way, since their rectitude is highly consonant with first practical principles that are known necessarily".[45][46][47][48]Scotus justifies this position with the example of a peaceful society, noting that the possession of private property is not necessary to have a peaceful society, but that "those of weak character" would be more easily made peaceful with private property than without.
William of Ockhamwent further, contending that (since there is no contradiction in it) God could command us not to love God[49]and even tohateGod.[50]LaterScholasticslikePierre D'Aillyand his studentJean de Gersonexplicitly confronted the Euthyphro dilemma, taking the voluntarist position that God does not "command good actions because they are good or prohibit evil ones because they are evil; but... these are therefore good because they are commanded and evil because prohibited."[51]ProtestantreformersMartin LutherandJohn Calvinboth stressed the absolute sovereignty of God's will, with Luther writing that "for [God's] will there is no cause or reason that can be laid down as a rule or measure for it",[52]and Calvin writing that "everything which [God] wills must be held to be righteous by the mere fact of his willing it."[53]The voluntarist emphasis on God's absolute power was carried further byDescartes, who notoriously held that God had freely created the eternal truths oflogicandmathematics, and that God was therefore capable of givingcirclesunequalradii,[54]givingtrianglesother than 180 internal degrees, and even makingcontradictionstrue.[55]Descartes explicitly seconded Ockham: "why should [God] not have been able to give this command [i.e., the command to hate God] to one of his creatures?"[56]Thomas Hobbesnotoriously reduced the justice of God to "irresistible power"[57](drawing the complaint ofBishop Bramhallthat this "overturns... all law").[58]AndWilliam Paleyheld that all moral obligations bottom out in the self-interested "urge" to avoidHelland enterHeavenby acting in accord with God's commands.[59]Islam'sAsh'arite theologians,al-Ghazaliforemost among them, embraced voluntarism: scholar George Hourani writes that the view "was probably more prominent and widespread in Islam than in any other civilization."[60][61]Wittgensteinsaid that of "the two interpretations of the Essence of the Good", that which holds that "the Good is good, in virtue of the fact that God wills it" is "the deeper", while that which holds that "God wills the good, because it is good" is "the shallow, rationalistic one, in that it behaves 'as though' that which is good could be given some further foundation".[62]Today, divine command theory is defended by many philosophers of religion, though typically in a restricted form (seebelow).
This horn of the dilemma also faces several problems:
One common response to the Euthyphro dilemma centers on a distinction betweenvalueandobligation. Obligation, which concerns rightness and wrongness (or what is required, forbidden, or permissible), is given a voluntarist treatment. But value, which concerns goodness and badness, is treated as independent of divine commands. The result is arestricteddivine command theory that applies only to a specific region of morality: thedeonticregion of obligation. This response is found inFrancisco Suárez's discussion of natural law and voluntarism inDe legibus[85]and has been prominent in contemporary philosophy of religion, appearing in the work of Robert M. Adams,[86]Philip L. Quinn,[87]and William P. Alston.[88]
A significant attraction of such a view is that, since it allows for a non-voluntarist treatment of goodness and badness, and therefore of God's own moral attributes, some of the aforementioned problems with voluntarism can perhaps be answered. God's commands are not arbitrary: there are reasons which guide his commands based ultimately on this goodness and badness.[89]God could not issue horrible commands: God's own essential goodness[81][90][91]or loving character[92]would keep him from issuing any unsuitable commands. Our obligation to obey God's commands does not result incircular reasoning; it might instead be based on a gratitude whose appropriateness is itself independent of divine commands.[93]These proposed solutions are controversial,[94]and some steer the view back into problems associated with the first horn.[95]
One problem remains for such views: if God's own essential goodness does not depend on divine commands, then the question regards what itdoesdepend on. Perhaps something other than God. Here the restricted divine command theory is commonly combined with a view reminiscent of Plato: God is identical to the ultimate standard for goodness.[96]Alston offers the analogy ofthe standard meter bar in France. Something is a meter long inasmuch as it is the same length as the standard meter bar, and likewise, something is good inasmuch as it approximates God. If one asks whyGodis identified as the ultimate standard for goodness, Alston replies that this is "the end of the line," with no further explanation available, but adds that this is no more arbitrary than a view that invokes a fundamental moral standard.[97]On this view, then, even though goodness is independent of God'swill, it still depends onGod, and thus God's sovereignty remains intact.
This solution has been criticized byWes Morriston. If we identify the ultimate standard for goodness with God's nature, then it seems we are identifying it with certain properties of God (e.g., being loving, being just). If so, then the dilemma resurfaces: God is either good because he has those properties, or those properties are good because God has them.[98]Nevertheless, Morriston concludes that the appeal to God's essential goodness is the divine-command theorist's best bet. To produce a satisfying result, however, it would have to give an account of God's goodness that does not trivialize it and does not make God subject to an independent standard of goodness.[99]
Moral philosopherPeter Singer, disputing the perspective that "God is good" and could never advocate something like torture, states that those who propose this are "caught in a trap of their own making, for what can they possibly mean by the assertion that God is good? That God is approved of by God?"[100]
Augustine,Anselm, and Aquinas all wrote about the problems raised by the Euthyphro dilemma, although, likeWilliam James[101]and Wittgenstein[62]later, they did not mention it by name. As philosopher and Anselm scholar Katherin A. Rogers observes, many contemporary philosophers of religion suppose that there are true propositions which exist as platonicabstractaindependently of God.[102]Among these are propositions constituting a moral order, to which God must conform in order to be good.[103]ClassicalJudaeo-Christiantheism, however, rejects such a view as inconsistent with God's omnipotence, which requires that God and what he has made is all that there is.[102]"The classical tradition," Rogers notes, "also steers clear of the other horn of the Euthyphro dilemma, divine command theory."[104]From a classical theistic perspective, therefore, the Euthyphro dilemma is false. As Rogers puts it, "Anselm, like Augustine before him and Aquinas later, rejects both horns of the Euthyphro dilemma. God neither conforms to nor invents the moral order. Rather His very nature is the standard for value."[102]Another criticism raised byPeter Geachis that the dilemma implies you must search for a definition that fits piety rather than work backwards by deciding pious acts (i.e. you must know what piety is before you can list acts which are pious).[105]It also implies something can not be pious if it is only intended to serve the Gods without actually fulfilling any useful purpose.
The basis of the false dilemma response—God's nature is the standard for value—predates the dilemma itself, appearing first in the thought of the eighth-century BCHebrewprophets,Amos,Hosea,MicahandIsaiah. (Amos lived some three centuries before Socrates and two beforeThales, traditionally regarded as the first Greek philosopher.) "Their message," writes British scholarNorman H. Snaith, "is recognized by all as marking a considerable advance on all previous ideas,"[106]not least in its "special consideration for the poor and down-trodden."[107]As Snaith observes,tsedeq, the Hebrew word forrighteousness, "actually stands for the establishment of God's will in the land." This includes justice, but goes beyond it, "because God's will is wider than justice. He has a particular regard for the helpless ones on earth."[108]Tsedeq"is the norm by which all must be judged" and it "depends entirely upon the Nature of God."[109]
Hebrew has fewabstract nouns. What the Greeks thought of as ideas or abstractions, the Hebrews thought of as activities.[110]In contrast to the Greekdikaiosune(justice) of the philosophers,tsedeqis not an idea abstracted from this world of affairs. As Snaith writes:
Tsedeqis something that happens here, and can be seen, and recognized, and known. It follows, therefore, that when the Hebrew thought oftsedeq(righteousness), he did not think of Righteousness in general, or of Righteousness as an Idea. On the contrary, he thought of a particular righteous act, an action, concrete, capable of exact description, fixed in time and space.... If the word had anything like a general meaning for him, then it was as it was represented by a whole series of events, the sum-total of a number of particular happenings.[109]
The Hebrew stance on what came to be called theproblem of universals, as on much else, was very different from that of Plato and precluded anything like the Euthyphro dilemma.[111]This has not changed. In 2005,Jonathan Sackswrote, "In Judaism, the Euthyphro dilemma does not exist."[112]Jewish philosophers Avi Sagi and Daniel Statman criticized the Euthyphro dilemma as "misleading" because "it is not exhaustive": it leaves out a third option, namely that God "acts only out of His nature."[113]
In Aquinas' view, to speak of abstractions not only as existent, but as more perfect exemplars than fully designated particulars, is to put a premium on generality and vagueness.[114]On this analysis, the abstract "good" in the first horn of the Euthyphro dilemma is an unnecessary obfuscation. Aquinas frequently quoted with approval Aristotle's definition, "Good is what all desire."[115][116]As he clarified, "When we say that good is what all desire, it is not to be understood that every kind of good thing is desired by all, but that whatever is desired has the nature of good."[117]In other words, even those who desire evil desire it "only under the aspect of good," i.e., of what is desirable.[118]The difference between desiring good and desiring evil is that in the former, will and reason are in harmony, whereas in the latter, they are in discord.[119]
Aquinas's discussion ofsinprovides a good point of entry to his philosophical explanation of why the nature of God is the standard for value. "Every sin," he writes, "consists in the longing for a passing [i.e., ultimately unreal or false] good."[120]Thus, "in a certain sense it is true what Socrates says, namely that no one sins with full knowledge."[121]"No sin in the will happens without an ignorance of the understanding."[122]God, however, has full knowledge (omniscience) and therefore by definition (that of Socrates, Plato, and Aristotle as well as Aquinas) can never will anything other than what is good. It has been claimed – for instance, byNicolai Hartmann, who wrote: "There is no freedom for the good that would not be at the same time freedom for evil"[123]– that this would limit God's freedom, and therefore his omnipotence.Josef Pieper, however, replies that such arguments rest upon an impermissiblyanthropomorphicconception of God.[124]In the case of humans, as Aquinas says, to be able to sin is indeed a consequence,[125]or even a sign, of freedom (quodam libertatis signum).[126]Humans, in other words, are not puppets manipulated by God so that they always do what is right. However, "it does not belong to theessenceof the free will to be able to decide for evil."[127]"To will evil is neither freedom nor a part of freedom."[126]It is precisely humans' creatureliness – that is, their not being God and therefore omniscient – that makes them capable of sinning.[128]Consequently, writes Pieper, "the inability to sin should be looked on as the very signature of a higher freedom – contrary to the usual way of conceiving the issue."[124]Pieper concludes: "Onlythewill [i.e., God's] can be the right standard of its own willing and must will what is right necessarily, from within itself, and always. A deviation from the norm would not even be thinkable. And obviously only the absolute divine will is the right standard of its own act"[129][130]– and consequently of all human acts. Thus the second horn of the Euthyphro dilemma, divine command theory, is also disposed of.
Thomist philosopherEdward Feserwrites, "Divine simplicity [entails] that God's will just is God's goodness which just is His immutable and necessary existence. That means that what is objectively good and what God wills for us as morally obligatory are really the same thing considered under different descriptions, and that neither could have been other than they are. There can be no question then, either of God's having arbitrarily commanded something different for us (torturing babies for fun, or whatever) or of there being a standard of goodness apart from Him. Again, the Euthyphro dilemma is a false one; the third option that it fails to consider is that what is morally obligatory is what God commands in accordance with a non-arbitrary and unchanging standard of goodness that is not independent of Him... He is notunderthe moral law precisely because Heisthe moral law."[131]
William James, in his essay "The Moral Philosopher and the Moral Life", dismisses the first horn of the Euthyphro dilemma and stays clear of the second. He writes: "Our ordinary attitude of regarding ourselves as subject to an overarching system of moral relations, true 'in themselves,' is ... either an out-and-out superstition, or else it must be treated as a merely provisional abstraction from that real Thinker ... to whom the existence of the universe is due."[132]Moral obligations are created by "personal demands," whether these demands[133]come from the weakest creatures, from the most insignificant persons, or from God. It follows that "ethics have as genuine a foothold in a universe where the highest consciousness is human, as in a universe where there is a God as well." However, whether "the purely human system" works "as well as the other is a different question."[132]
For James, the deepest practical difference in the moral life is between what he calls "the easy-going and the strenuous mood."[134]In a purely human moral system, it is hard to rise above the easy-going mood, since the thinker's "various ideals, known to him to be mere preferences of his own, are too nearly of the same denominational value;[135]he can play fast and loose with them at will. This too is why, in a merely human world without a God, the appeal to our moral energy falls short of its maximum stimulating power." Our attitude is "entirely different" in a world where there are none but "finite demanders" from that in a world where there is also "an infinite demander." This is because "the stable and systematic moral universe for which the ethical philosopher asks is fully possible only in a world where there is a divine thinker with all-enveloping demands", for in that case, "actualized in his thought already must be that ethical philosophy which we seek as the pattern which our own must evermore approach." Even though "exactly what the thought of this infinite thinker may be is hidden from us", our postulation of him serves "to let loose in us the strenuous mood"[134]and confront us with anexistential[136]"challenge" in which "our total character and personal genius ... are on trial; and if we invoke any so-called philosophy, our choice and use of that also are but revelations of our personal aptitude or incapacity for moral life. From this unsparing practical ordeal no professor's lectures and no array of books can save us."[134]In the words ofRichard M. Gale, "God inspires us to lead the morally strenuous life in virtue of our conceiving of him as unsurpassablygood. This supplies James with an adequate answer to the underlying question of theEuthyphro."[137]
Alexander Rosenberg uses a version of the Euthyphro dilemma to argue that objective morality cannot exist and hence an acceptance of moral nihilism is warranted.[138] He asks, is objective morality correct because evolution discovered it, or did evolution discover objective morality because it is correct? If the first horn of the dilemma is true, then our current morality cannot be objectively correct by accident, because if evolution had given us another type of morality then that would have been objectively correct. If the second horn of the dilemma is true, then one must account for how the random process of evolution managed to select only for objectively correct moral traits while ignoring the wrong moral traits. Given the knowledge that evolution has given us tendencies to be xenophobic and sexist, it is mistaken to claim that evolution has selected only for objective morality, as evidently it did not. Because neither horn of the dilemma gives an adequate account of how the evolutionary process instantiated objective morality in humans, a position of moral nihilism is warranted.
Yale Law School Professor Myres S. McDougal, formerly a classicist, later a scholar of property law, posed the question, "Do we protect it because it's a property right, or is it a property right because we protect it?"[139]The dilemma has also been restated in legal terms by Geoffrey Hodgson, who asked: "Does a state make a law because it is a customary rule, or does law become a customary rule because it is approved by the state?"[140]
|
https://en.wikipedia.org/wiki/Euthyphro_dilemma
|
In telecommunications and computer networking, multiplexing (sometimes contracted to muxing) is a method by which multiple analog or digital signals are combined into one signal over a shared medium. The aim is to share a scarce resource—a physical transmission medium. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy in the 1870s, and is now widely applied in communications. In telephony, George Owen Squier is credited with the development of telephone carrier multiplexing in 1910.
The multiplexed signal is transmitted over a communication channel such as a cable. The multiplexing divides the capacity of the communication channel into several logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, extracts the original channels on the receiver end.
A device that performs the multiplexing is called amultiplexer(MUX), and a device that performs the reverse process is called ademultiplexer(DEMUX or DMX).
Inverse multiplexing (IMUX) has the opposite aim to multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream.
In computing, I/O multiplexing can also refer to the concept of processing multiple input/output events from a single event loop, with system calls like poll[1] and select (Unix).[2]
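As a rough illustration of the event-loop idea, the following sketch uses Python's standard selectors module (which wraps select/poll/epoll depending on the platform) to service several sockets from a single loop; the address, port, and echo behaviour are purely illustrative.

# Minimal sketch of I/O multiplexing with a single event loop, using the
# standard-library selectors module. Each readable socket is dispatched to
# the callback registered for it; nothing here blocks on a single client.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)          # echo the data back
    else:
        sel.unregister(conn)        # client closed the connection
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 9000))    # hypothetical address and port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _events in sel.select():   # blocks until some socket is ready
        key.data(key.fileobj)           # call the registered callback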
Multiple variable bit rate digital bit streams may be transferred efficiently over a single fixed-bandwidth channel by means of statistical multiplexing. This is an asynchronous mode of time-domain multiplexing, which is a form of time-division multiplexing.
Digital bit streams can be transferred over an analog channel by means of code-division multiplexing techniques such as frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS).
In wireless communications, multiplexing can also be accomplished through alternating polarization (horizontal/vertical or clockwise/counterclockwise) on each adjacent channel and satellite, or through a phased multi-antenna array combined with a multiple-input multiple-output (MIMO) communications scheme.
In wired communication, space-division multiplexing, also known as space-division multiple access (SDMA), is the use of separate point-to-point electrical conductors for each transmitted channel. Examples include an analog stereo audio cable (one pair of wires for the left channel and another for the right), a multi-pair telephone cable, a switched star network such as a telephone access network, a switched Ethernet network, and a mesh network.
In wireless communication, space-division multiplexing is achieved with multiple antenna elements forming a phased array antenna. Examples are multiple-input and multiple-output (MIMO), single-input and multiple-output (SIMO) and multiple-input and single-output (MISO) multiplexing. An IEEE 802.11g wireless router with k antennas makes it in principle possible to communicate over k multiplexed channels, each with a peak bit rate of 54 Mbit/s, thus increasing the total peak bit rate by the factor k. Different antennas give different multi-path propagation (echo) signatures, making it possible for digital signal processing techniques to separate the signals from each other. These techniques may also be utilized for space diversity (improved robustness to fading) or beamforming (improved selectivity) rather than multiplexing.
Frequency-division multiplexing (FDM) is inherently an analog technology; in FDM the signals are electrical signals. FDM combines several signals into one medium by sending them in several distinct frequency ranges over that medium. One of the most common applications for FDM is traditional radio and television broadcasting from terrestrial, mobile or satellite stations, or cable television. Only one cable reaches a customer's residential area, but the service provider can send multiple television channels or signals simultaneously over that cable to all subscribers without interference. Receivers must tune to the appropriate frequency (channel) to access the desired signal.[3]
A variant technology, called wavelength-division multiplexing (WDM), is used in optical communications.
Time-division multiplexing (TDM) is a digital (or in rare cases, analog) technology that uses time, instead of space or frequency, to separate the different data streams. TDM involves sequencing groups of a few bits or bytes from each individual input stream, one after the other, in such a way that they can be associated with the appropriate receiver. If done sufficiently quickly, the receiving devices will not detect that some of the circuit time was used to serve another logical communication path.
Consider an application requiring four terminals at an airport to reach a central computer. Each terminal communicates at 2400 baud, so rather than acquire four individual circuits to carry such low-speed transmissions, the airline has installed a pair of multiplexers. A pair of 9600 baud modems and one dedicated analog communications circuit from the airport ticket desk back to the airline data center are also installed.[3]Some web proxy servers (e.g. polipo) use TDM in HTTP pipelining of multiple HTTP transactions onto the same TCP/IP connection.[4]
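The airport scenario can be caricatured with a byte-interleaved, round-robin sketch of synchronous TDM; the function names and the four equal-length streams below are illustrative only, not part of any real multiplexer.

# Illustrative sketch of synchronous TDM: one byte is taken from each input
# stream per frame, in a fixed round-robin order, and the receiver
# de-interleaves purely by position. Assumes equal-length streams.
def tdm_mux(streams):
    frame_size = len(streams)
    muxed = bytes(b for frame in zip(*streams) for b in frame)
    return muxed, frame_size

def tdm_demux(muxed, frame_size):
    return [muxed[i::frame_size] for i in range(frame_size)]

terminals = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four low-speed sources
line, n = tdm_mux(terminals)                       # one shared high-speed line
assert tdm_demux(line, n) == terminals             # original streams recovered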
Carrier-sense multiple access and multidrop communication methods are similar to time-division multiplexing in that multiple data streams are separated by time on the same medium, but because the signals have separate origins instead of being combined into a single signal, they are best viewed as channel access methods rather than a form of multiplexing.
TDM is a legacy multiplexing technology that still provides the backbone of most national fixed-line telephony networks in Europe, providing the 2 Mbit/s voice and signaling ports on narrow-band telephone exchanges such as the DMS100. Each E1 or 2 Mbit/s TDM port provides either 30 or 31 speech timeslots in the case of CCITT7 signaling systems, and 30 voice channels for customer-connected Q931, DASS2, DPNSS, V5 and CASS signaling systems.[5]
Polarization-division multiplexing uses the polarization of electromagnetic radiation to separate orthogonal channels. It is in practical use in both radio and optical communications, particularly in 100 Gbit/s per channel fiber-optic transmission systems.
Differential Cross-Polarized Wireless Communications is a novel method for polarized antenna transmission utilizing a differential technique.[6]
Orbital angular momentum multiplexing is a relatively new and experimental technique for multiplexing multiple channels of signals carried using electromagnetic radiation over a single path.[7]It can potentially be used in addition to other physical multiplexing methods to greatly expand the transmission capacity of such systems. As of 2012, it is still in its early research phase, with small-scale laboratory demonstrations of bandwidths of up to 2.5 Tbit/s over a single light path.[8]This is a controversial subject in the academic community, with many claiming it is not a new method of multiplexing, but rather a special case of space-division multiplexing.[9]
Code-division multiplexing (CDM), code-division multiple access (CDMA) or spread spectrum is a class of techniques where several channels simultaneously share the same frequency spectrum, and this spectral bandwidth is much higher than the bit rate or symbol rate. One form is frequency hopping; another is direct-sequence spread spectrum. In the latter case, each channel transmits its bits as a coded channel-specific sequence of pulses called chips. The number of chips per bit, or chips per symbol, is the spreading factor. This coded transmission typically is accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber or radio channel or other medium, and asynchronously demultiplexed. Advantages over conventional techniques are that variable bandwidth is possible (just as in statistical multiplexing), that the wide bandwidth allows operation at a poor signal-to-noise ratio according to the Shannon–Hartley theorem, and that multi-path propagation in wireless communication can be combated by rake receivers.
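A toy sketch of the direct-sequence idea: two channels are spread by orthogonal chip codes, summed on a shared medium, and separated again by correlation. The codes, data, and spreading factor below are illustrative, not a real radio design.

# Toy direct-sequence code-division sketch. Each bit (+1/-1) is spread by a
# channel-specific chip code; the shared medium carries the sum of all
# channels; a receiver recovers one channel by correlating with its code.
CODE_A = [+1, +1, +1, +1]          # orthogonal chip codes (spreading factor 4)
CODE_B = [+1, -1, +1, -1]

def spread(bits, code):
    return [b * c for b in bits for c in code]

def shared_medium(*signals):
    return [sum(chips) for chips in zip(*signals)]

def despread(signal, code):
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(+1 if corr > 0 else -1)
    return out

a_bits, b_bits = [+1, -1, +1], [-1, -1, +1]
rx = shared_medium(spread(a_bits, CODE_A), spread(b_bits, CODE_B))
assert despread(rx, CODE_A) == a_bits and despread(rx, CODE_B) == b_bits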
A significant application of CDMA is the Global Positioning System (GPS).
A multiplexing technique may be further extended into a multiple access method or channel access method, for example, TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier-sense multiple access (CSMA). A multiple-access method makes it possible for several transmitters connected to the same physical medium to share its capacity.
Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer.
The transport layer in the OSI model, as well as in the TCP/IP model, provides statistical multiplexing of several application-layer data flows to and from the same computer.
Code-division multiplexing (CDM) is a technique in which each channel transmits its bits as a coded channel-specific sequence of pulses. This coded transmission is typically accomplished by transmitting a unique time-dependent series of short pulses, which are placed within chip times within the larger bit time. All channels, each with a different code, can be transmitted on the same fiber and asynchronously demultiplexed. Other widely used multiple access techniques are time-division multiple access (TDMA) and frequency-division multiple access (FDMA).
Code-division multiplexing techniques are used as an access technology, namely code-division multiple access (CDMA), in the Universal Mobile Telecommunications System (UMTS) standard for third-generation (3G) mobile communication as identified by the ITU.
The earliest communication technology using electrical wires, and therefore sharing an interest in the economies afforded by multiplexing, was the electric telegraph. Early experiments allowed two separate messages to travel in opposite directions simultaneously, first using an electric battery at both ends, then at only one end.
Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s. In 1874, the quadruplex telegraph developed by Thomas Edison transmitted two messages in each direction simultaneously, for a total of four messages transiting the same wire at the same time. Several researchers were investigating acoustic telegraphy, a frequency-division multiplexing technique, which led to the invention of the telephone.
In telephony, a customer's telephone line now typically ends at the remote concentrator box, where it is multiplexed along with other telephone lines for that neighborhood or similar area. The multiplexed signal is then carried to the central switching office on significantly fewer wires and over much greater distances than a customer's line can practically span. The same is true for digital subscriber lines (DSL).
Fiber in the loop (FITL) is a common method of multiplexing that uses optical fiber as the backbone. It not only connects POTS phone lines with the rest of the PSTN, but also replaces DSL by connecting directly to Ethernet wired into the home. Asynchronous Transfer Mode is often the communications protocol used.
Cable TV has long carried multiplexed television channels, and late in the 20th century began offering the same services as telephone companies. IPTV also depends on multiplexing.
In video editing and processing systems, multiplexing refers to the process of interleaving audio and video into one coherent data stream.
In digital video, such a transport stream is normally a feature of a container format which may include metadata and other information, such as subtitles. The audio and video streams may have variable bit rate. Software that produces such a transport stream and/or container is commonly called a multiplexer or muxer. A demuxer is software that extracts or otherwise makes available for separate processing the components of such a stream or container.
In digital television systems, several variable bit-rate data streams are multiplexed together to a fixed bit-rate transport stream by means of statistical multiplexing. This makes it possible to transfer several video and audio channels simultaneously over the same frequency channel, together with various services. This may involve several standard-definition television (SDTV) programs (particularly on DVB-T, DVB-S2, ISDB and ATSC-C), or one HDTV, possibly with a single SDTV companion channel, over one 6 to 8 MHz-wide TV channel. The device that accomplishes this is called a statistical multiplexer. In several of these systems, the multiplexing results in an MPEG transport stream. The newer DVB standards DVB-S2 and DVB-T2 have the capacity to carry several HDTV channels in one multiplex.
In digital radio, a multiplex (also known as an ensemble) is a number of radio stations that are grouped together. A multiplex is a stream of digital information that includes audio and other data.[10]
On communications satellites which carry broadcast television networks and radio networks, this is known as multiple channel per carrier or MCPC. Where multiplexing is not practical (such as where there are different sources using a single transponder), single channel per carrier mode is used.
In FM broadcasting and other analog radio media, multiplexing is a term commonly given to the process of adding subcarriers to the audio signal before it enters the transmitter, where modulation occurs. (In fact, the stereo multiplex signal can be generated using time-division multiplexing, by switching between the two input signals (left channel and right channel) at an ultrasonic rate (the subcarrier), and then filtering out the higher harmonics.) Multiplexing in this sense is sometimes known as MPX, which in turn is also an old term for stereophonic FM, seen on stereo systems since the 1960s.
In spectroscopy, the term is used to indicate that the experiment is performed with a mixture of frequencies at once and their respective responses unraveled afterwards using the Fourier transform principle.
In computer programming, it may refer to using a single in-memory resource (such as a file handle) to handle multiple external resources (such as on-disk files).[11]
Some electrical multiplexing techniques do not require a physical "multiplexer" device; instead they use a "keyboard matrix" or "Charlieplexing" design style.
In high-throughput DNA sequencing, the term is used to indicate that some artificial sequences (often called barcodes or indexes) have been added to link given sequence reads to a given sample, and thus allow for the sequencing of multiple samples in the same reaction.
In sociolinguistics, multiplexity is used to describe the number of distinct connections between individuals who are part of a social network. A multiplex network is one in which members share a number of ties stemming from more than one social context, such as workmates, neighbors, or relatives.
|
https://en.wikipedia.org/wiki/Multiplexing
|
WURFL (Wireless Universal Resource FiLe) is a set of proprietary application programming interfaces (APIs) and an XML configuration file which contains information about device capabilities and features for a variety of mobile devices, focused on mobile device detection.[1][2]Until version 2.2, WURFL was released under an "open source / public domain" license.[3]Prior to version 2.2, device information was contributed by developers around the world and the WURFL was updated frequently, reflecting new wireless devices coming on the market. In June 2011, the founder of the WURFL project, Luca Passani, and Steve Kamerman, the author of Tera-WURFL, a popular PHP WURFL API, formed ScientiaMobile, Inc. to provide commercial mobile device detection support and services using WURFL.[4]As of August 30, 2011, the ScientiaMobile WURFL APIs are licensed under a dual-license model, using the AGPL license for non-commercial use and a proprietary commercial license. The current version of the WURFL database itself is no longer open source.
There have been several approaches to the problem of varying mobile device capabilities, including developing very primitive content and hoping it works on a variety of devices, limiting support to a small subset of devices, or bypassing the browser solution altogether and developing a Java ME or BREW client application.
WURFL solves this by allowing development of content pages using abstractions of page elements (buttons, links and textboxes for example). At run time, these are converted to the appropriate, specific markup types for each device. In addition, the developer can specify other content decisions be made at runtime based on device specific capabilities and features (which are all in the WURFL).
In March 2012, ScientiaMobile announced the launch of the WURFL Cloud.[5]While the WURFL Cloud is a paid service, a free offer is made available to hobbyists and micro-companies for use on mobile sites with limited traffic.[6]Currently, the WURFL Cloud supports the Java, Microsoft .NET, PHP, Ruby, Python, Node.js and Perl programming languages.[7][8]
In October 2012, ScientiaMobile announced the availability of a C++ API, an Apache module, an NGINX module and a Varnish Cache module.[9]Later, in November 2016, ScientiaMobile provided a module for the HAProxy load balancer.[10]Unlike other WURFL APIs, the C++ API and the modules are distributed exclusively under commercial licenses. Several popular Linux distributions are supported through RPM and DEB packages.[11]
In 2014, WURFL.io was launched. WURFL.io features non-commercial products and services from ScientiaMobile.
WALL (Wireless Abstraction Library by Luca Passani) is a JSP tag library that lets a developer author mobile pages similar to plain HTML, while delivering WML, C-HTML and XHTML Mobile Profile to the device from which the HTTP request originates, depending on the actual capabilities of the device itself.[14]Device capabilities are queried dynamically using the WURFL API. A WALL port to PHP (called WALL4PHP) is also available.
WURFL is currently supported through APIs for a variety of programming languages and platforms.
The PHP/MySQL-based Tera-WURFL API comes with a remote web service that allows the WURFL to be queried from any language that supports XML web services,[15]and it includes clients for several languages out of the box.
The August 29, 2011 update of WURFL included a new set of licensing terms. These terms set forth a number of licenses under which WURFL could be used. The free version of the license does not allow derivative works, and prevents direct access to the wurfl.xml file. As a result of the "no-derivatives" clause, users are no longer permitted to add new device capabilities to the WURFL file, either directly or through the submission of "patches". A commercial license is required to utilize third-party APIs with the WURFL Repository.
On January 3, 2012, ScientiaMobile filed a DMCA takedown notice against the open-source device database OpenDDR, which contains data from a previous version of WURFL. According to OpenDDR, these data were available under the GPL.[16]
On March 22, 2012, Matthew Weier O'Phinney announced that Zend Framework would be dropping support for WURFL as of version 1.12.[17]This was due to the license change, which makes it incompatible with the Zend Framework's licensing,[18]as the new licensing requires that you "open-source the full source code of your web site, irrespective of the fact that you may modify the WURFL API or not."[19]
|
https://en.wikipedia.org/wiki/WURFL
|
A translation memory (TM) is a database that stores "segments", which can be sentences, paragraphs or sentence-like units (headings, titles or elements in a list) that have previously been translated, in order to aid human translators. The translation memory stores the source text and its corresponding translation in language pairs called "translation units". Individual words are handled by terminology bases and are not within the domain of TM.
Software programs that use translation memories are sometimes known as translation memory managers (TMM) or translation memory systems (TM systems, not to be confused with a translation management system (TMS), which is another type of software focused on managing the process of translation).
Translation memories are typically used in conjunction with a dedicated computer-assisted translation (CAT) tool, word processing program, terminology management system, multilingual dictionary, or even raw machine translation output.
Research indicates that many companies producing multilingual documentation are using translation memory systems. In a survey of language professionals in 2006, 82.5% of 874 replies confirmed the use of a TM.[1]Usage of TM correlated with text type characterised by technical terms and simple sentence structure (technical, to a lesser degree marketing and financial), computing skills, and repetitiveness of content.[1]
The program breaks the source text (the text to be translated) into segments, looks for matches between segments and the source half of previously translated source-target pairs stored in a translation memory, and presents such matching pairs as full and partial translation matches. The translator can accept a match, replace it with a fresh translation, or modify it to match the source. In the last two cases, the new or modified translation goes into the database.
Some translation memory systems search for 100% matches only, i.e. they can only retrieve segments of text that match entries in the database exactly, while others employ fuzzy matching algorithms to retrieve similar segments, which are presented to the translator with differences flagged. Typical translation memory systems only search for text in the source segment.
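As a rough sketch of fuzzy lookup, the following uses the similarity ratio from Python's difflib as a stand-in for a production matching algorithm; the stored translation units and the 75% threshold are purely illustrative.

# Minimal sketch of fuzzy matching against a translation memory. The stored
# units, the language pair, and the threshold are illustrative only.
from difflib import SequenceMatcher

translation_memory = {
    "Press the power button.": "Drücken Sie den Netzschalter.",
    "Remove the battery cover.": "Entfernen Sie die Batterieabdeckung.",
}

def lookup(source_segment, threshold=0.75):
    best = None
    for src, tgt in translation_memory.items():
        score = SequenceMatcher(None, source_segment, src).ratio()
        if score >= threshold and (best is None or score > best[0]):
            best = (score, src, tgt)
    return best   # None means "no match": the segment is translated manually

print(lookup("Press the power button twice."))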
The flexibility and robustness of the matching algorithm largely determine the performance of the translation memory, although for some applications the recall rate of exact matches can be high enough to justify the 100%-match approach.
Segments where no match is found will have to be translated by the translator manually. These newly translated segments are stored in the database where they can be used for future translations as well as repetitions of that segment in the current text.
Translation memories work best on texts which are highly repetitive, such as technical manuals. They are also helpful for translating incremental changes in a previously translated document, corresponding, for example, to minor changes in a new version of a user manual. Traditionally, translation memories have not been considered appropriate for literary or creative texts, for the simple reason that there is so little repetition in the language used. However, others find them of value even for non-repetitive texts, because the database resources created have value for concordance searches to determine appropriate usage of terms, for quality assurance (no empty segments), and the simplification of the review process (source and target segment are always displayed together while translators have to work with two documents in a traditional review environment).
Translation memory managers are most suitable for translating technical documentation and documents containing specialized vocabularies. Their benefits include:
The main problems hindering wider use of translation memory managers include:
The use of TM systems might have an effect on the quality of the texts translated. Its main effect is clearly related to so-called "error propagation": if the translation for a particular segment is incorrect, it is more likely that the incorrect translation will be reused the next time the same source text, or a similar source text, is translated, thereby perpetuating the error. Traditionally, two main effects on the quality of translated texts have been described: the "sentence-salad" effect (Bédard 2000; cited in O'Hagan 2009: 50) and the "peep-hole" effect (Heyn 1998). The first refers to a lack of coherence at the text level when a text is translated using sentences from a TM which have been translated by different translators with different styles. According to the latter, translators may adapt their style to the use of a TM system so that the translated segments do not contain intratextual references and can be better reused in future texts, thus affecting cohesion and readability (O'Hagan 2009).
There is also a potential, and probably unconscious, effect on the translated text. Different languages use different sequences for the logical elements within a sentence, and a translator presented with a multiple-clause sentence that is half translated is less likely to completely rebuild the sentence. Consistent empirical evidence (Martín-Mor 2011) shows that translators will most likely modify the structure of a multiple-clause sentence when working with a text processor rather than with a TM system.
There is also a potential for the translator to deal with the text mechanically sentence-by-sentence, instead of focusing on how each sentence relates to those around it and to the text as a whole. Researchers (Dragsted 2004) have identified this effect, which relates to the automatic segmentation feature of these programs, but it does not necessarily have a negative effect on the quality of translations.
These effects are closely related to training rather than being inherent to the tool. According to Martín-Mor (2011), the use of TM systems does have an effect on the quality of the translated texts, especially for novices, but experienced translators are able to avoid it. Pym (2013) notes that "translators using TM/MT tend to revise each segment as they go along, allowing little time for a final revision of the whole text at the end", which might be the ultimate cause of some of the effects described here.
The following is a summary of the main functions of a translation memory.
This function is used to transfer a text and its translation from a text file to the TM. Import can be done from a raw format, in which an external source text is available for importing into a TM along with its translation. Sometimes the texts have to be reprocessed by the user. There is another format that can be used for import: the native format, which is the format the TM uses to save translation memories in a file.
The process of analysis involves the following steps:
Export transfers the text from the TM into an external text file. Import and export should be inverses.
When translating, one of the main purposes of the TM is to retrieve the most useful matches in the memory so that the translator can choose the best one. The TM must show both the source and target text pointing out the identities and differences.
Several different types of matches can be retrieved from a TM.
A TM is updated with a new translation when it has been accepted by the translator. As always when updating a database, there is the question of what to do with the previous contents of the database. A TM can be modified by changing or deleting entries. Some systems allow translators to save multiple translations of the same source segment.
Translation memory tools often provide automatic retrieval and substitution.
Networking enables a group of translators to translate a text together faster than if each was working in isolation, because sentences and phrases translated by one translator are available to the others. Moreover, if translation memories are shared before the final translation, there is an opportunity for mistakes by one translator to be corrected by other team members.
"Text memory" is the basis of the proposed Lisa OSCAR xml:tm standard. Text memory comprises author memory and translation memory.
The unique identifiers are remembered during translation so that the target language document is 'exactly' aligned at the text unit level. If the source document is subsequently modified, then those text units that have not changed can be directly transferred to the new target version of the document without the need for any translator interaction. This is the concept of 'exact' or 'perfect' matching to the translation memory. xml:tm can also provide mechanisms for in-document leveraged and fuzzy matching.
The 1970s were the infancy stage for TM systems, in which scholars carried out a preliminary round of exploratory discussions. The original idea for TM systems is often attributed to Martin Kay's "Proper Place" paper,[2]although the details are not fully given there. The paper presents the basic concept of the storing system: "The translator might start by issuing a command causing the system to display anything in the store that might be relevant to .... Before going on, he can examine past and future fragments of text that contain similar material". This observation from Kay was actually influenced by the suggestion of Peter Arthern that translators could use similar, already translated documents online. In his 1978 article[3]Arthern gave a full demonstration of what we call TM systems today: "Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, together with its translation into all the other official languages [of the European Community]. ... One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations."
The idea was incorporated from the ALPS (Automated Language Processing Systems) tools first developed by researchers at Brigham Young University, and at that time the idea of TM systems was mixed with a tool called "Repetitions Processing", which only aimed to find matched strings. Only after a long time did the concept of the translation memory come into being.
The real exploratory stage of TM systems was the 1980s. One of the first implementations of a TM system appeared in Sadler and Vendelmans' Bilingual Knowledge Bank. A Bilingual Knowledge Bank is a syntactically and referentially structured pair of corpora, one being a translation of the other, in which translation units are cross-coded between the corpora. Its aim is to develop a corpus-based, general-purpose knowledge source for applications in machine translation and computer-aided translation (Sadler & Vendelman, 1987). Another important step was made by Brian Harris with his "bi-text". Harris defined the bi-text as "a single text in two dimensions" (1988), the source and target texts related by the activity of the translator through translation units, an idea that echoes Sadler's Bilingual Knowledge Bank. In his work, Harris proposed something like a TM system without using that name: a database of paired translations, searchable either by individual word or by "whole translation unit", in the latter case the search being allowed to retrieve similar rather than identical units.
TM technology only became commercially available on a wide scale in the late 1990s, through the efforts of several engineers and translators. Of note is the first TM tool, called Trados (SDL Trados nowadays). In this tool, when the source file is opened and the translation memory applied, any "100% matches" (identical matches) or "fuzzy matches" (similar, but not identical matches) within the text are instantly extracted and placed within the target file. Then, the "matches" suggested by the translation memory can be either accepted or overridden with new alternatives. If a translation unit is manually updated, it is stored within the translation memory for future use as well as for repetition in the current text. In a similar way, all segments in the target file without a "match" are translated manually and then automatically added to the translation memory.
In the 2000s, online translation services began incorporating TM. Machine translation services like Google Translate, as well as the professional and "hybrid" translation services provided by sites like Gengo and Ackuna, incorporate databases of TM data supplied by translators and volunteers to make more efficient connections between languages and provide faster translation services to end-users.[4]
One recent development is the concept of 'text memory' in contrast to translation memory.[5]This is also the basis of the proposed LISA OSCAR standard.[6]Text memory within xml:tm comprises 'author memory' and 'translation memory'. Author memory is used to keep track of changes during the authoring cycle. Translation memory uses the information from author memory to implement translation memory matching. Although primarily targeted at XML documents, xml:tm can be used on any document that can be converted to XLIFF[7]format.
Second-generation TM systems are much more powerful than first-generation ones: they include a linguistic analysis engine, use chunk technology to break down segments into intelligent terminological groups, and automatically generate specific glossaries.
Translation Memory eXchange (TMX) is a standard that enables the interchange of translation memories between translation suppliers. TMX has been adopted by the translation community as the best way of importing and exporting translation memories. The current version is 1.4b; it allows for the recreation of the original source and target documents from the TMX data.
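A minimal, illustrative TMX 1.4 document with a single translation unit can be assembled with Python's standard ElementTree; the element names (tmx, header, body, tu, tuv, seg) follow the TMX standard, while the header attribute values shown here are placeholders.

# Sketch of a minimal TMX 1.4 document built with the standard library.
# The header attribute values are illustrative placeholders only.
import xml.etree.ElementTree as ET

tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "example", "creationtoolversion": "0.1",
    "segtype": "sentence", "o-tmf": "example", "adminlang": "en",
    "srclang": "en", "datatype": "plaintext",
})
body = ET.SubElement(tmx, "body")
tu = ET.SubElement(body, "tu")                  # one translation unit
for lang, text in (("en", "Press the power button."),
                   ("de", "Drücken Sie den Netzschalter.")):
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

print(ET.tostring(tmx, encoding="unicode"))     # serialized TMX fragment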
TermBase eXchange (TBX). This LISA standard, which was revised and republished as ISO 30042, allows for the interchange of terminology data including detailed lexical information. The framework for TBX is provided by three ISO standards: ISO 12620, ISO 12200 and ISO 16642. ISO 12620 provides an inventory of well-defined "data categories" with standardized names that function as data element types or as predefined values. ISO 12200 (also known as MARTIF) provides the basis for the core structure of TBX. ISO 16642 (also known as the Terminological Markup Framework) includes a structural meta-model for Terminology Markup Languages in general.
Universal Terminology eXchange (UTX) format is a standard specifically designed to be used for user dictionaries of machine translation, but it can be used for general, human-readable glossaries. The purpose of UTX is to accelerate dictionary sharing and reuse by its extremely simple and practical specification.
Segmentation Rules eXchange(SRX) is intended to enhance the TMX standard so that translation memory data that is exchanged between applications can be used more effectively. The ability to specify the segmentation rules that were used in the previous translation may increase the leveraging that can be achieved.
GILT Metrics. GILT stands for (Globalization, Internationalization, Localization, and Translation). The GILT Metrics standard comprises three parts: GMX-V for volume metrics, GMX-C for complexity metrics and GMX-Q for quality metrics. The proposed GILT Metrics standard is tasked with quantifying the workload and quality requirements for any given GILT task.
Open Lexicon Interchange Format. OLIF is an open, XML-compliant standard for the exchange of terminological and lexical data. Although originally intended as a means for the exchange of lexical data between proprietary machine translation lexicons, it has evolved into a more general standard for terminology exchange.[8]
XML Localisation Interchange File Format (XLIFF) is intended to provide a single interchange file format that can be understood by any localization provider. XLIFF is the preferred way[9][10]of exchanging data in XML format in the translation industry.[11]
Translation Web Services. TransWS specifies the calls needed to use Web services for the submission and retrieval of files and messages relating to localization projects. It is intended as a detailed framework for the automation of much of the current localization process by the use of Web Services.[12]
The xml:tm (XML-based Text Memory) approach to translation memory is based on the concept of text memory, which comprises author memory and translation memory.[13]xml:tm has been donated to LISA OSCAR by XML-INTL.
Gettext Portable Object (PO) format. Though often not regarded as a translation memory format, Gettext PO files are bilingual files that are also used in translation memory processes in the same way translation memories are used. Typically, a PO translation memory system will consist of various separate files in a directory tree structure. Common tools that work with PO files include the GNU Gettext tools and the Translate Toolkit. Several tools and programs also exist that edit PO files as if they were mere source text files.
|
https://en.wikipedia.org/wiki/Translation_memory
|
In statistics, homogeneity and its opposite, heterogeneity, arise in describing the properties of a dataset, or several datasets. They relate to the validity of the often convenient assumption that the statistical properties of any one part of an overall dataset are the same as those of any other part. In meta-analysis, which combines the data from several studies, homogeneity measures the differences or similarities between the several studies (see also study heterogeneity).
Homogeneity can be studied to several degrees of complexity. For example, considerations of homoscedasticity examine how much the variability of data values changes throughout a dataset. However, questions of homogeneity apply to all aspects of the statistical distributions, including the location parameter. Thus, a more detailed study would examine changes to the whole of the marginal distribution. An intermediate-level study might move from looking at the variability to studying changes in the skewness. In addition to these, questions of homogeneity apply also to the joint distributions.
The concept of homogeneity can be applied in many different ways and, for certain types of statistical analysis, it is used to look for further properties that might need to be treated as varying within a dataset once some initial types of non-homogeneity have been dealt with.
In statistics, a sequence of random variables is homoscedastic (/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings homoskedasticity and heteroskedasticity are also frequently used. "Skedasticity" comes from the Ancient Greek word "skedánnymi", meaning "to scatter".[1][2][3]Assuming a variable is homoscedastic when in reality it is heteroscedastic (/ˌhɛtəroʊskəˈdæstɪk/) results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.
The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient, and inference based on the assumption of homoskedasticity is misleading. In that case, generalized least squares (GLS) was frequently used in the past.[4][5]Nowadays, standard practice in econometrics is to include heteroskedasticity-consistent standard errors instead of using GLS, as GLS can exhibit strong bias in small samples if the actual skedastic function is unknown.[6]
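A brief sketch of this practice, assuming the numpy and statsmodels packages are available: simulate data whose error variance grows with the regressor, fit ordinary least squares, and request heteroskedasticity-consistent ("HC3") standard errors rather than relying on GLS.

# Sketch (assumes numpy and statsmodels are installed): heteroscedastic data,
# OLS fit with classical standard errors, and the same fit with
# heteroskedasticity-consistent (robust) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)    # error variance grows with x

X = sm.add_constant(x)
fit_plain = sm.OLS(y, X).fit()                   # classical standard errors
fit_robust = sm.OLS(y, X).fit(cov_type="HC3")    # robust standard errors

print(fit_plain.bse, fit_robust.bse)             # compare the two sets of SEs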
Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.[7]
Differences in the typical values across the dataset might initially be dealt with by constructing a regression model using certain explanatory variables to relate variations in the typical value to known quantities. There should then be a later stage of analysis to examine whether the errors in the predictions from the regression behave in the same way across the dataset. Thus the question becomes one of the homogeneity of the distribution of the residuals, as the explanatory variables change. See regression analysis.
The initial stages in the analysis of a time series may involve plotting values against time to examine homogeneity of the series in various ways: stability across time as opposed to a trend; stability of local fluctuations over time.
In hydrology, data series across a number of sites, composed of annual values of the within-year annual maximum river flow, are analysed. A common model is that the distributions of these values are the same for all sites apart from a simple scaling factor, so that the location and scale are linked in a simple way. There can then be questions of examining the homogeneity across sites of the distribution of the scaled values.
In meteorology, weather datasets are acquired over many years of record and, as part of this, measurements at certain stations may cease occasionally while, at around the same time, measurements may start at nearby locations. There are then questions as to whether, if the records are combined to form a single longer set of records, those records can be considered homogeneous over time. An example of homogeneity testing of wind speed and direction data can be found in Romanić et al., 2015.[9]
Simple population surveys may start from the idea that responses will be homogeneous across the whole of a population. Assessing the homogeneity of the population would involve looking to see whether the responses of certain identifiable subpopulations differ from those of others. For example, car-owners may differ from non-car-owners, or there may be differences between different age groups.
A test for homogeneity, in the sense of exact equivalence of statistical distributions, can be based on an E-statistic. A location test tests the simpler hypothesis that distributions have the same location parameter.
|
https://en.wikipedia.org/wiki/Homogeneity_(statistics)
|
Concept mapping and mind mapping software is used to create diagrams of relationships between concepts, ideas, or other pieces of information. It has been suggested that the mind mapping technique can improve learning and study efficiency by up to 15% over conventional note-taking.[1]Many software packages and websites allow creating or otherwise supporting mind maps.
Using a standard file format allows interchange of files between various programs. Many programs listed below support the OPML file format and the XML file format used by FreeMind.
The following tools comply with the Free Software Foundation's (FSF) definition of free software. As such, they are also open-source software.
The following is a list of notable concept mapping and mind mapping applications which are proprietary software (albeit perhaps available at no cost; see freeware).
|
https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software
|
The Akaike information criterion (AIC) is an estimator of prediction error and thereby relative quality of statistical models for a given set of data.[1][2][3]Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory. When a statistical model is used to represent the process that generated the data, the representation will almost never be exact; so some information will be lost by using the model to represent the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher the quality of that model.
In estimating the amount of information lost by a model, AIC deals with the trade-off between the goodness of fit of the model and the simplicity of the model. In other words, AIC deals with both the risk of overfitting and the risk of underfitting.
The Akaike information criterion is named after the Japanese statistician Hirotugu Akaike, who formulated it. It now forms the basis of a paradigm for the foundations of statistics and is also widely used for statistical inference.
Suppose that we have a statistical model of some data. Let k be the number of estimated parameters in the model. Let L̂ be the maximized value of the likelihood function for the model. Then the AIC value of the model is the following.[4][5]
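AIC = 2k − 2 ln(L̂)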
Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value. Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting, which is desired because increasing the number of parameters in the model almost always improves the goodness of the fit.
Suppose that the data is generated by some unknown process f. We consider two candidate models to represent f: g1 and g2. If we knew f, then we could find the information lost from using g1 to represent f by calculating the Kullback–Leibler divergence, DKL(f ‖ g1); similarly, the information lost from using g2 to represent f could be found by calculating DKL(f ‖ g2). We would then, generally, choose the candidate model that minimized the information loss.
We cannot choose with certainty, because we do not know f. Akaike (1974) showed, however, that we can estimate, via AIC, how much more (or less) information is lost by g1 than by g2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc, below).
Note that AIC tells nothing about the absolute quality of a model, only the quality relative to other models. Thus, if all the candidate models fit poorly, AIC will not give any warning of that. Hence, after selecting a model via AIC, it is usually good practice to validate the absolute quality of the model. Such validation commonly includes checks of the model's residuals (to determine whether the residuals seem random) and tests of the model's predictions. For more on this topic, see statistical model validation.
To apply AIC in practice, we start with a set of candidate models, and then find the models' corresponding AIC values. There will almost always be information lost due to using a candidate model to represent the "true model," i.e. the process that generated the data. We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss.
Suppose that there are R candidate models. Denote the AIC values of those models by AIC1, AIC2, AIC3, ..., AICR. Let AICmin be the minimum of those values. Then the quantity exp((AICmin − AICi)/2) can be interpreted as being proportional to the probability that the ith model minimizes the (estimated) information loss.[6]
As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model is exp((100 − 102)/2) = 0.368 times as probable as the first model to minimize the information loss. Similarly, the third model is exp((100 − 110)/2) = 0.007 times as probable as the first model to minimize the information loss.
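A short sketch of this calculation, together with the normalization of the relative likelihoods to so-called Akaike weights (a common convention, not something AIC itself requires):

# Relative likelihoods exp((AICmin - AICi)/2) for the three example models,
# plus their normalization to "Akaike weights".
import math

aic = [100.0, 102.0, 110.0]
aic_min = min(aic)
rel = [math.exp((aic_min - a) / 2) for a in aic]   # approx. [1.0, 0.368, 0.007]
weights = [r / sum(rel) for r in rel]              # normalized to sum to 1
print(rel, weights)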
In this example, we would omit the third model from further consideration. We then have three options: (1) gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) simply conclude that the data is insufficient to support selecting one model from among the first two; (3) take a weighted average of the first two models, with weights proportional to 1 and 0.368, respectively, and then do statistical inference based on the weighted multimodel.[7]
The quantity exp((AICmin − AICi)/2) is known as the relative likelihood of model i. It is closely related to the likelihood ratio used in the likelihood-ratio test. Indeed, if all the models in the candidate set have the same number of parameters, then using AIC might at first appear to be very similar to using the likelihood-ratio test. There are, however, important distinctions. In particular, the likelihood-ratio test is valid only for nested models, whereas AIC (and AICc) has no such restriction.[8][9]
Every statistical hypothesis test can be formulated as a comparison of statistical models. Hence, every statistical hypothesis test can be replicated via AIC. Two examples are briefly described in the subsections below. Details for those examples, and many more examples, are given by Sakamoto, Ishiguro & Kitagawa (1986, Part II) and Konishi & Kitagawa (2008, ch. 4).
As an example of a hypothesis test, consider the t-test to compare the means of two normally-distributed populations. The input to the t-test comprises a random sample from each of the two populations.
To formulate the test as a comparison of models, we construct two different models. The first model models the two populations as having potentially different means and standard deviations. The likelihood function for the first model is thus the product of the likelihoods for two distinct normal distributions; so it has four parameters: μ1, σ1, μ2, σ2. To be explicit, the likelihood function is as follows (denoting the sample sizes by n1 and n2).
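\mathcal{L}(\mu_1,\sigma_1,\mu_2,\sigma_2) \;=\; \prod_{i=1}^{n_1} \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\!\left(-\frac{(x_i-\mu_1)^2}{2\sigma_1^2}\right) \;\cdot\; \prod_{j=1}^{n_2} \frac{1}{\sqrt{2\pi}\,\sigma_2}\exp\!\left(-\frac{(y_j-\mu_2)^2}{2\sigma_2^2}\right)

—where x_i and y_j denote the observations in the first and second samples, respectively.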
The second model models the two populations as having the same means and the same standard deviations. The likelihood function for the second model thus sets μ1 = μ2 and σ1 = σ2 in the above equation; so it has only two parameters.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different means.
The t-test assumes that the two populations have identical standard deviations; the test tends to be unreliable if the assumption is false and the sizes of the two samples are very different (Welch's t-test would be better). Comparing the means of the populations via AIC, as in the example above, has the same disadvantage. However, one could create a third model that allows different standard deviations. This third model would have the advantage of not making such assumptions, at the cost of an additional parameter and thus degree of freedom.
For another example of a hypothesis test, suppose that we have two populations, and each member of each population is in one of two categories—category #1 or category #2. Each population is binomially distributed. We want to know whether the distributions of the two populations are the same. We are given a random sample from each of the two populations.
Let m be the size of the sample from the first population. Let m1 be the number of observations (in the sample) in category #1; so the number of observations in category #2 is m − m1. Similarly, let n be the size of the sample from the second population. Let n1 be the number of observations (in the sample) in category #1.
Let p be the probability that a randomly-chosen member of the first population is in category #1. Hence, the probability that a randomly-chosen member of the first population is in category #2 is 1 − p. Note that the distribution of the first population has one parameter. Let q be the probability that a randomly-chosen member of the second population is in category #1. Note that the distribution of the second population also has one parameter.
To compare the distributions of the two populations, we construct two different models. The first model models the two populations as having potentially different distributions. The likelihood function for the first model is thus the product of the likelihoods for two distinct binomial distributions; so it has two parameters: p, q. To be explicit, the likelihood function is as follows.
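\mathcal{L}(p,q) \;=\; \binom{m}{m_1}\, p^{m_1}(1-p)^{m-m_1} \;\cdot\; \binom{n}{n_1}\, q^{n_1}(1-q)^{n-n_1}

—the binomial coefficients are constants that do not depend on p or q, so they do not affect the maximization or the resulting AIC comparison.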
The second model models the two populations as having the same distribution. The likelihood function for the second model thus sets p = q in the above equation; so the second model has one parameter.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different distributions.
Statistical inference is generally regarded as comprising hypothesis testing and estimation. Hypothesis testing can be done via AIC, as discussed above. Regarding estimation, there are two types: point estimation and interval estimation. Point estimation can be done within the AIC paradigm: it is provided by maximum likelihood estimation. Interval estimation can also be done within the AIC paradigm: it is provided by likelihood intervals. Hence, statistical inference generally can be done within the AIC paradigm.
The most commonly used paradigms for statistical inference are frequentist inference and Bayesian inference. AIC, though, can be used to do statistical inference without relying on either the frequentist paradigm or the Bayesian paradigm: because AIC can be interpreted without the aid of significance levels or Bayesian priors.[10]In other words, AIC can be used to form a foundation of statistics that is distinct from both frequentism and Bayesianism.[11][12]
When the sample size is small, there is a substantial probability that AIC will select models that have too many parameters, i.e. that AIC will overfit.[13][14][15]To address such potential overfitting, AICc was developed: AICc is AIC with a correction for small sample sizes.
The formula for AICc depends upon the statistical model. Assuming that the model is univariate, is linear in its parameters, and has normally-distributed residuals (conditional upon regressors), the formula for AICc is as follows.[16][17][18][19]
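AICc = AIC + (2k² + 2k) / (n − k − 1)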
—where n denotes the sample size and k denotes the number of parameters. Thus, AICc is essentially AIC with an extra penalty term for the number of parameters. Note that as n → ∞, the extra penalty term converges to 0, and thus AICc converges to AIC.[20]
If the assumption that the model is univariate and linear with normal residuals does not hold, then the formula for AICc will generally be different from the formula above. For some models, the formula can be difficult to determine. For every model that has AICc available, though, the formula for AICc is given by AIC plus terms that include both k and k². In comparison, the formula for AIC includes k but not k². In other words, AIC is a first-order estimate (of the information loss), whereas AICc is a second-order estimate.[21]
Further discussion of the formula, with examples of other assumptions, is given by Burnham & Anderson (2002, ch. 7) and by Konishi & Kitagawa (2008, ch. 7–8). In particular, with other assumptions, bootstrap estimation of the formula is often feasible.
To summarize, AICc has the advantage of tending to be more accurate than AIC (especially for small samples), but AICc also has the disadvantage of sometimes being much more difficult to compute than AIC. Note that if all the candidate models have the same k and the same formula for AICc, then AICc and AIC will give identical (relative) valuations; hence, there will be no disadvantage in using AIC instead of AICc. Furthermore, if n is many times larger than k², then the extra penalty term will be negligible; hence, the disadvantage in using AIC instead of AICc will be negligible.
The Akaike information criterion was formulated by the statistician Hirotugu Akaike. It was originally named "an information criterion".[22]It was first announced in English by Akaike at a 1971 symposium; the proceedings of the symposium were published in 1973.[22][23]The 1973 publication, though, was only an informal presentation of the concepts.[24]The first formal publication was a 1974 paper by Akaike.[5]
The initial derivation of AIC relied upon some strong assumptions.Takeuchi (1976)showed that the assumptions could be made much weaker. Takeuchi's work, however, was in Japanese and was not widely known outside Japan for many years. (Translated in[25])
AICc was originally proposed for linear regression (only) by Sugiura (1978). That instigated the work of Hurvich & Tsai (1989), and several further papers by the same authors, which extended the situations in which AICc could be applied.
The first general exposition of the information-theoretic approach was the volume by Burnham & Anderson (2002). It includes an English presentation of the work of Takeuchi. The volume led to far greater use of AIC, and it now has more than 64,000 citations on Google Scholar.
Akaike called his approach an "entropy maximization principle", because the approach is founded on the concept of entropy in information theory. Indeed, minimizing AIC in a statistical model is effectively equivalent to maximizing entropy in a thermodynamic system; in other words, the information-theoretic approach in statistics is essentially applying the second law of thermodynamics. As such, AIC has roots in the work of Ludwig Boltzmann on entropy. For more on these issues, see Akaike (1985) and Burnham & Anderson (2002, ch. 2).
A statistical model must account for random errors. A straight-line model might be formally described as y_i = b_0 + b_1 x_i + ε_i. Here, the ε_i are the residuals from the straight-line fit. If the ε_i are assumed to be i.i.d. Gaussian (with zero mean), then the model has three parameters: b_0, b_1, and the variance of the Gaussian distributions.
Thus, when calculating the AIC value of this model, we should use k = 3. More generally, for any least squares model with i.i.d. Gaussian residuals, the variance of the residuals' distributions should be counted as one of the parameters.[26]
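As a rough illustration of this parameter counting (a sketch with made-up data, not taken from the article's references), one can fit a straight line by least squares and compute AIC with k = 3:

```python
import numpy as np

# Illustrative only: fit y = b0 + b1*x by least squares and compute AIC,
# counting the residual variance as a parameter, so k = 3.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

b1, b0 = np.polyfit(x, y, 1)            # slope and intercept
residuals = y - (b0 + b1 * x)
n = y.size
sigma2_hat = np.sum(residuals**2) / n   # MLE of the residual variance

log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)
k = 3                                   # b0, b1, and the variance
aic_value = 2 * k - 2 * log_lik
print(aic_value)
```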
As another example, consider a first-order autoregressive model, defined by x_i = c + φ x_{i−1} + ε_i, with the ε_i being i.i.d. Gaussian (with zero mean). For this model, there are three parameters: c, φ, and the variance of the ε_i. More generally, a pth-order autoregressive model has p + 2 parameters. (If, however, c is not estimated from the data, but instead given in advance, then there are only p + 1 parameters.)
The AIC values of the candidate models must all be computed with the same data set. Sometimes, though, we might want to compare a model of the response variable, y, with a model of the logarithm of the response variable, log(y). More generally, we might want to compare a model of the data with a model of transformed data. Following is an illustration of how to deal with data transforms (adapted from Burnham & Anderson (2002, §2.11.3): "Investigators should be sure that all hypotheses are modeled using the same response variable").
Suppose that we want to compare two models: one with a normal distribution of y and one with a normal distribution of log(y). We should not directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of y. To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function:

f(y) = (1/y) · (1/√(2πσ²)) · exp(−(ln y − μ)² / (2σ²)),

which is the probability density function for the log-normal distribution. We then compare the AIC value of the normal model against the AIC value of the log-normal model.
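The comparison can be sketched numerically as follows; this is a hedged illustration assuming maximum-likelihood fits of the two-parameter normal and log-normal models, and the function names and data are mine, not the article's:

```python
import numpy as np

def aic_normal(y):
    """AIC of a normal model for y (two parameters: mean and variance)."""
    n = y.size
    sigma2 = np.var(y)  # MLE of the variance (divides by n)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * 2 - 2 * log_lik

def aic_lognormal(y):
    """AIC of a log-normal model for y: a normal model for log(y) plus the
    Jacobian factor 1/y, so that both AIC values refer to the same variable y."""
    log_y = np.log(y)
    n = y.size
    sigma2 = np.var(log_y)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1) - np.sum(log_y)
    return 2 * 2 - 2 * log_lik

y = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=200)
print(aic_normal(y), aic_lognormal(y))  # the log-normal model should win here
```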
For misspecified models, Takeuchi's information criterion (TIC) might be more appropriate. However, TIC often suffers from instability caused by estimation errors.[27]
The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes.[28] Their fundamental differences have been well studied in regression variable selection and autoregression order selection[29] problems. In general, if the goal is prediction, AIC and leave-one-out cross-validation are preferred. If the goal is selection, inference, or interpretation, BIC or leave-many-out cross-validation is preferred. A comprehensive overview of AIC and other popular model selection methods is given by Ding et al. (2018).[30]
The formula for the Bayesian information criterion (BIC) is similar to the formula for AIC, but with a different penalty for the number of parameters. With AIC the penalty is 2k, whereas with BIC the penalty is ln(n)·k.
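In code, the two criteria differ only in that penalty term (a minimal sketch using the definitions just stated):

```python
import math

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return math.log(n) * k - 2 * log_likelihood
```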
A comparison of AIC/AICc and BIC is given by Burnham & Anderson (2002, §6.3–6.4), with follow-up remarks by Burnham & Anderson (2004). The authors show that AIC/AICc can be derived in the same Bayesian framework as BIC, just by using different prior probabilities. In the Bayesian derivation of BIC, though, each candidate model has a prior probability of 1/R (where R is the number of candidate models). Additionally, the authors present a few simulation studies that suggest AICc tends to have practical/performance advantages over BIC.
A point made by several researchers is that AIC and BIC are appropriate for different tasks. In particular, BIC is argued to be appropriate for selecting the "true model" (i.e. the process that generated the data) from the set of candidate models, whereas AIC is not appropriate. To be specific, if the "true model" is in the set of candidates, then BIC will select the "true model" with probability 1, as n → ∞; in contrast, when selection is done via AIC, the probability can be less than 1.[31][32][33] Proponents of AIC argue that this issue is negligible, because the "true model" is virtually never in the candidate set. Indeed, it is a common aphorism in statistics that "all models are wrong"; hence the "true model" (i.e. reality) cannot be in the candidate set.
Another comparison of AIC and BIC is given by Vrieze (2012). Vrieze presents a simulation study that allows the "true model" to be in the candidate set (unlike with virtually all real data). The simulation study demonstrates, in particular, that AIC sometimes selects a much better model than BIC even when the "true model" is in the candidate set. The reason is that, for finite n, BIC can have a substantial risk of selecting a very bad model from the candidate set. This risk can arise even when n is much larger than k². With AIC, the risk of selecting a very bad model is minimized.
If the "true model" is not in the candidate set, then the most that we can hope to do is select the model that best approximates the "true model". AIC is appropriate for finding the best approximating model, under certain assumptions.[31][32][33](Those assumptions include, in particular, that the approximating is done with regard to information loss.)
A comparison of AIC and BIC in the context of regression is given by Yang (2005). In regression, AIC is asymptotically optimal for selecting the model with the least mean squared error, under the assumption that the "true model" is not in the candidate set. BIC is not asymptotically optimal under that assumption. Yang additionally shows that the rate at which AIC converges to the optimum is, in a certain sense, the best possible.
Sometimes, each candidate model assumes that the residuals are distributed according to independent identical normal distributions (with zero mean). That gives rise to least squares model fitting.
With least squares fitting, the maximum likelihood estimate for the variance of a model's residuals distributions is

σ̂² = RSS / n,

where the residual sum of squares is

RSS = Σ_{i=1}^n (y_i − ŷ_i)²,

the sum of squared differences between the observed values and the values fitted by the model.

Then, the maximum value of a model's log-likelihood function is (see Normal distribution#Log-likelihood)

−(n/2) ln(2π) − (n/2) ln(σ̂²) − n/2 = −(n/2) ln(σ̂²) + C,

where C is a constant independent of the model, and dependent only on the particular data points, i.e. it does not change if the data does not change.

That gives[34]

AIC = 2k + n ln(σ̂²) − 2C.

Because only differences in AIC are meaningful, the constant C can be ignored, which allows us to conveniently take the following for model comparisons:

AIC = 2k + n ln(RSS / n).
Note that if all the models have the same k, then selecting the model with minimum AIC is equivalent to selecting the model with minimum RSS, which is the usual objective of model selection based on least squares.
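A minimal helper capturing this shortcut (assuming, as above, that the constant C is dropped, so the values are only meaningful for comparing models fitted to the same data):

```python
import numpy as np

def aic_from_rss(rss, n, k):
    """AIC for a least-squares model with the data-dependent constant C dropped."""
    return 2 * k + n * np.log(rss / n)
```

With equal k across candidates, the n·ln(RSS/n) term is the only part that varies, so ranking by this quantity reproduces the ranking by RSS.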
Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models.[35] Asymptotic equivalence to AIC also holds for mixed-effects models.[36]
Mallows's Cp is equivalent to AIC in the case of (Gaussian) linear regression.[37]
|
https://en.wikipedia.org/wiki/Akaike_information_criterion
|
The repertory grid is an interviewing technique which uses nonparametric factor analysis to determine an idiographic measure of personality.[1][2] It was devised by George Kelly in around 1955 and is based on his personal construct theory of personality.[3]
The repertory grid is a technique for identifying the ways that a person construes (interprets or gives meaning to) his or her experience.[4] It provides information from which inferences about personality can be made, but it is not a personality test in the conventional sense. It is underpinned by the personal construct theory developed by George Kelly, first published in 1955.[3]
A grid consists of four parts: a topic (the area of experience being investigated), a set of elements (instances or examples drawn from the topic, such as particular people or situations), a set of constructs (the bipolar terms the interviewee uses to compare the elements), and a set of ratings of each element on each construct.
Constructs are regarded as personal to the client, who is psychologically similar to other people depending on the extent to which they would tend to use similar constructs, and similar ratings, in relating to a particular set of elements.
The client is asked to consider the elements three at a time, and to identify a way in which two of the elements might be seen as alike, but distinct from, or contrasted to, the third. For example, in considering a set of people as part of a topic dealing with personal relationships, a client might say that the element "my father" and the element "my boss" are similar because they are both fairly tense individuals, whereas the element "my wife" is different because she is "relaxed". And so we identify one construct that the individual uses when thinking about people: whether they are "tense as distinct from relaxed". In practice, good grid interview technique would delve a little deeper and identify some more behaviorally explicit description of "tense versus relaxed". All the elements are rated on the construct, further triads of elements are compared and further constructs elicited, and the interview would continue until no further constructs are obtained.
After careful interviewing to identify what the individual means by the words initially proposed, a rating system can be used to record how each element stands on each construct. For example, a 5-point scale could be used to characterize the way in which a group of fellow employees are viewed on the construct "keen and committed versus energies elsewhere", a 1 indicating that the left pole of the construct applies ("keen and committed") and a 5 indicating that the right pole of the construct applies ("energies elsewhere"). On being asked to rate all of the elements, our interviewee might reply that Tom merits a 2 (fairly keen and committed), Mary a 1 (very keen and committed), and Peter a 5 (his energies are very much outside the place of employment). The remaining elements (another five people, for example) are then rated on this construct.
Typically (and depending on the topic) people have a limited number of genuinely different constructs for any one topic: 6 to 16 are common when they talk about their job or their occupation, for example. The richness of people's meaning structures comes from the many different ways in which a limited number of constructs can be applied to individual elements. A person may indicate that Tom is fairly keen, very experienced, lacks social skills, is a good technical supervisor, can be trusted to follow complex instructions accurately, has no sense of humour, will always return a favour but only sometimes help his co-workers, while Mary is very keen, fairly experienced, has good social and technical supervisory skills, needs complex instructions explained to her, appreciates a joke, always returns favours, and is very helpful to her co-workers: these are two very different and complex pictures, using just 8 constructs about a person's co-workers.
Important information can be obtained by including self-elements such as "Myself as I am now"; "Myself as I would like to be" among other elements, where the topic permits.
A single grid can be analysed for both content (eyeball inspection) and structure (cluster analysis, principal component analysis, and a variety of structural indices relating to the complexity and range of the ratings being the chief techniques used). Sets of grids are dealt with using one or other of a variety of content analysis techniques. A range of associated techniques can be used to provide precise, operationally defined expressions of an interviewee's constructs, or a detailed expression of the interviewee's personal values, and all of these techniques are used in a collaborative way. The repertory grid is emphatically not a standardized "psychological test"; it is an exercise in the mutual negotiation of a person's meanings.
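As a rough sketch of what a structural analysis might look like in practice (the element names, construct labels, and ratings below are invented for illustration, and principal component analysis is only one of the techniques the article mentions):

```python
import numpy as np

# Rows = elements (people), columns = constructs, cells = 1..5 ratings.
elements = ["Tom", "Mary", "Peter", "Self now", "Ideal self"]
constructs = ["tense - relaxed", "keen - energies elsewhere", "experienced - novice"]
ratings = np.array([
    [2, 2, 1],
    [4, 1, 2],
    [3, 5, 4],
    [3, 2, 3],
    [5, 1, 2],
], dtype=float)

# Principal component analysis via the singular value decomposition of the
# column-centred ratings matrix: one common way to look at grid structure.
centred = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
element_scores = U * s          # coordinates of the elements on the components
print(explained)                # share of variation captured by each component
```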
The repertory grid has found favour among both academics and practitioners in a great variety of fields because it provides a way of describing people's construct systems (loosely, understanding people's perceptions) without prejudging the terms of reference, a kind of personalized grounded theory.[5][6][7]
Unlike a conventional rating-scale questionnaire, it is not the investigator but the interviewee who provides the constructs on which a topic is rated. Market researchers, trainers, teachers, guidance counsellors, new product developers, sports scientists, and knowledge capture specialists are among the users who find the technique (originally developed for use in clinical psychology) helpful.[8]
In the book Personal Construct Methodology, researchers Brian R. Gaines and Mildred L. G. Shaw noted that they "have also found concept mapping and semantic network tools to be complementary to repertory grid tools and generally use both in most studies" but that they "see less use of network representations in PCP [personal construct psychology] studies than is appropriate".[9] They encouraged practitioners to use semantic network techniques in addition to the repertory grid.[10]
|
https://en.wikipedia.org/wiki/Repertory_grid
|
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe or model the dependence (inter-correlation) between random variables.[1] Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk[2] and in portfolio-optimization applications.[3]
Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the variables.
Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulas separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below.
Two-dimensional copulas are known in some other areas of mathematics under the names permutons and doubly stochastic measures.
Consider a random vector (X_1, X_2, …, X_d). Suppose its marginals are continuous, i.e. the marginal CDFs F_i(x) = Pr[X_i ≤ x] are continuous functions. By applying the probability integral transform to each component, the random vector

(U_1, U_2, …, U_d) = (F_1(X_1), F_2(X_2), …, F_d(X_d))

has marginals that are uniformly distributed on the interval [0, 1].
The copula of (X_1, X_2, …, X_d) is defined as the joint cumulative distribution function of (U_1, U_2, …, U_d):

C(u_1, u_2, …, u_d) = Pr[U_1 ≤ u_1, U_2 ≤ u_2, …, U_d ≤ u_d].
The copula C contains all information on the dependence structure between the components of (X_1, X_2, …, X_d), whereas the marginal cumulative distribution functions F_i contain all information on the marginal distributions of the X_i.
The reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions. That is, given a procedure to generate a sample (U_1, U_2, …, U_d) from the copula function, the required sample can be constructed as

(X_1, X_2, …, X_d) = (F_1^{-1}(U_1), F_2^{-1}(U_2), …, F_d^{-1}(U_d)).
The generalized inverses F_i^{-1} are unproblematic almost surely, since the F_i were assumed to be continuous. Furthermore, the above formula for the copula function can be rewritten as

C(u_1, u_2, …, u_d) = Pr[X_1 ≤ F_1^{-1}(u_1), X_2 ≤ F_2^{-1}(u_2), …, X_d ≤ F_d^{-1}(u_d)].
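A hedged sketch of this sampling recipe, using a Gaussian copula for the dependence and exponential and gamma marginals purely as illustrative choices (none of these specific parameter values come from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Step 1: sample from the copula. Here: a bivariate Gaussian copula with
# correlation 0.7, obtained by pushing a correlated normal sample through
# the standard normal CDF (probability integral transform).
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=1000)
u = stats.norm.cdf(z)                     # uniform [0, 1] marginals

# Step 2: apply the inverse marginal CDFs to get the target marginals.
x1 = stats.expon.ppf(u[:, 0], scale=2.0)  # exponential marginal
x2 = stats.gamma.ppf(u[:, 1], a=3.0)      # gamma marginal

# The dependence between x1 and x2 comes from the copula alone.
rho, _ = stats.spearmanr(x1, x2)
print(rho)
```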
In probabilistic terms, C : [0,1]^d → [0,1] is a d-dimensional copula if C is a joint cumulative distribution function of a d-dimensional random vector on the unit cube [0,1]^d with uniform marginals.[4]
In analytic terms, C : [0,1]^d → [0,1] is a d-dimensional copula if
For instance, in the bivariate case, C : [0,1] × [0,1] → [0,1] is a bivariate copula if C(0, u) = C(u, 0) = 0, C(1, u) = C(u, 1) = u, and C(u_2, v_2) − C(u_2, v_1) − C(u_1, v_2) + C(u_1, v_1) ≥ 0 for all 0 ≤ u_1 ≤ u_2 ≤ 1 and 0 ≤ v_1 ≤ v_2 ≤ 1.
Sklar's theorem, named after Abe Sklar, provides the theoretical foundation for the application of copulas.[5][6] Sklar's theorem states that every multivariate cumulative distribution function

H(x_1, …, x_d) = Pr[X_1 ≤ x_1, …, X_d ≤ x_d]

of a random vector (X_1, X_2, …, X_d) can be expressed in terms of its marginals F_i(x_i) = Pr[X_i ≤ x_i] and a copula C. Indeed:

H(x_1, …, x_d) = C(F_1(x_1), …, F_d(x_d)).
If the multivariate distribution has a density h, and if this density is available, it also holds that

h(x_1, …, x_d) = c(F_1(x_1), …, F_d(x_d)) · f_1(x_1) ⋯ f_d(x_d),

where c is the density of the copula and the f_i are the marginal density functions.
The theorem also states that, given H, the copula is unique on Ran(F_1) × ⋯ × Ran(F_d), which is the Cartesian product of the ranges of the marginal CDFs. This implies that the copula is unique if the marginals F_i are continuous.
The converse is also true: given a copula C : [0,1]^d → [0,1] and marginals F_i(x), the function C(F_1(x_1), …, F_d(x_d)) defines a d-dimensional cumulative distribution function with marginal distributions F_i(x).
Copulas mainly work when time series are stationary[7] and continuous.[8] Thus, a very important pre-processing step is to check for auto-correlation, trend, and seasonality within the time series.
When time series are auto-correlated, they may generate a non-existent dependence between sets of variables and result in an incorrect copula dependence structure.[9]
The Fréchet–Hoeffding theorem (after Maurice René Fréchet and Wassily Hoeffding[10]) states that for any copula C : [0,1]^d → [0,1] and any (u_1, …, u_d) ∈ [0,1]^d the following bounds hold:

W(u_1, …, u_d) ≤ C(u_1, …, u_d) ≤ M(u_1, …, u_d).
The function W is called the lower Fréchet–Hoeffding bound and is defined as

W(u_1, …, u_d) = max{ 1 − d + Σ_{i=1}^d u_i , 0 }.
The function M is called the upper Fréchet–Hoeffding bound and is defined as

M(u_1, …, u_d) = min{ u_1, …, u_d }.
The upper bound is sharp: M is always a copula; it corresponds to comonotone random variables.
The lower bound is point-wise sharp, in the sense that for fixed u, there is a copula C̃ such that C̃(u) = W(u). However, W is a copula only in two dimensions, in which case it corresponds to countermonotonic random variables.
In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states

max{ u + v − 1, 0 } ≤ C(u, v) ≤ min{ u, v }.
Several families of copulas have been described.
The Gaussian copula is a distribution over the unit hypercube [0,1]^d. It is constructed from a multivariate normal distribution over R^d by using the probability integral transform.
For a given correlation matrix R ∈ [−1,1]^{d×d}, the Gaussian copula with parameter matrix R can be written as

C_R^{Gauss}(u) = Φ_R(Φ^{-1}(u_1), …, Φ^{-1}(u_d)),
where Φ^{-1} is the inverse cumulative distribution function of a standard normal and Φ_R is the joint cumulative distribution function of a multivariate normal distribution with mean vector zero and covariance matrix equal to the correlation matrix R. While there is no simple analytical formula for the copula function C_R^{Gauss}(u), it can be upper or lower bounded, and approximated using numerical integration.[11][12] The density can be written as[13]

c_R^{Gauss}(u) = (det R)^{-1/2} · exp( −(1/2) · x^T (R^{-1} − I) x ),  with x = (Φ^{-1}(u_1), …, Φ^{-1}(u_d))^T,

where I is the identity matrix.
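A small numerical sketch of evaluating the Gaussian copula (assuming SciPy's multivariate normal CDF as the implementation of Φ_R; the correlation value is arbitrary):

```python
import numpy as np
from scipy import stats

def gaussian_copula_cdf(u, corr):
    """Evaluate C_R^Gauss(u) = Phi_R(Phi^{-1}(u_1), ..., Phi^{-1}(u_d))."""
    z = stats.norm.ppf(u)
    mvn = stats.multivariate_normal(mean=np.zeros(len(u)), cov=corr)
    return mvn.cdf(z)

corr = np.array([[1.0, 0.5], [0.5, 1.0]])
print(gaussian_copula_cdf([0.3, 0.7], corr))
```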
Archimedean copulas are an associative class of copulas. Most common Archimedean copulas admit an explicit formula, something not possible for instance for the Gaussian copula.
In practice, Archimedean copulas are popular because they allow modeling dependence in arbitrarily high dimensions with only one parameter, governing the strength of dependence.
A copula C is called Archimedean if it admits the representation[14]

C(u_1, …, u_d; θ) = ψ^{-1}( ψ(u_1; θ) + ⋯ + ψ(u_d; θ) ),

where ψ : [0,1] × Θ → [0,∞) is a continuous, strictly decreasing and convex function such that ψ(1; θ) = 0, θ is a parameter within some parameter space Θ, ψ is the so-called generator function, and ψ^{-1} is its pseudo-inverse (equal to the ordinary inverse on [0, ψ(0; θ)] and to 0 beyond that point).
Moreover, the above formula for C yields a copula for ψ^{-1} if and only if ψ^{-1} is d-monotone on [0, ∞).[15] That is, if it is d − 2 times differentiable and the derivatives satisfy

(−1)^k (ψ^{-1})^{(k)}(t; θ) ≥ 0

for all t ≥ 0 and k = 0, 1, …, d − 2, and (−1)^{d−2} (ψ^{-1})^{(d−2)}(t; θ) is nonincreasing and convex.
The following tables highlight the most prominent bivariate Archimedean copulas, with their corresponding generators. Not all of them are completely monotone, i.e. d-monotone for all d ∈ N; some are d-monotone for certain θ ∈ Θ only.
In statistical applications, many problems can be formulated in the following way. One is interested in the expectation of a response function g : R^d → R applied to some random vector (X_1, …, X_d).[18] If we denote the CDF of this random vector by H, the quantity of interest can thus be written as

E[g(X_1, …, X_d)] = ∫_{R^d} g(x_1, …, x_d) dH(x_1, …, x_d).
If H is given by a copula model, i.e.,

H(x_1, …, x_d) = C(F_1(x_1), …, F_d(x_d)),

this expectation can be rewritten as

E[g(X_1, …, X_d)] = ∫_{[0,1]^d} g(F_1^{-1}(u_1), …, F_d^{-1}(u_d)) dC(u_1, …, u_d).

In case the copula C is absolutely continuous, i.e. C has a density c, this equation can be written as

E[g(X_1, …, X_d)] = ∫_{[0,1]^d} g(F_1^{-1}(u_1), …, F_d^{-1}(u_d)) c(u_1, …, u_d) du_1 ⋯ du_d,

and if each marginal distribution has the density f_i, it holds further that

E[g(X_1, …, X_d)] = ∫_{R^d} g(x_1, …, x_d) c(F_1(x_1), …, F_d(x_d)) f_1(x_1) ⋯ f_d(x_d) dx_1 ⋯ dx_d.
If the copula and the marginals are known (or if they have been estimated), this expectation can be approximated through the following Monte Carlo algorithm: draw a sample (U_1^{(k)}, …, U_d^{(k)}), k = 1, …, n, from the copula C; transform it to (X_1^{(k)}, …, X_d^{(k)}) = (F_1^{-1}(U_1^{(k)}), …, F_d^{-1}(U_d^{(k)})); and approximate the expectation by the sample average (1/n) Σ_k g(X_1^{(k)}, …, X_d^{(k)}).
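A hedged Monte Carlo sketch of this procedure, reusing a Gaussian copula and illustrative marginals (none of these specific choices come from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000

# Draw from the copula: a Gaussian copula with correlation 0.6.
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
u = stats.norm.cdf(rng.multivariate_normal([0.0, 0.0], corr, size=n))

# Transform with the inverse marginal CDFs (illustrative marginals).
x1 = stats.expon.ppf(u[:, 0], scale=1.0)
x2 = stats.norm.ppf(u[:, 1], loc=2.0, scale=0.5)

# Approximate E[g(X1, X2)] for, say, g(x1, x2) = max(x1 - x2, 0).
estimate = np.mean(np.maximum(x1 - x2, 0.0))
print(estimate)
```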
When studying multivariate data, one might want to investigate the underlying copula. Suppose we have observations

(X_1^i, X_2^i, …, X_d^i), i = 1, …, n,

from a random vector (X_1, X_2, …, X_d) with continuous marginals. The corresponding "true" copula observations would be

(U_1^i, U_2^i, …, U_d^i) = (F_1(X_1^i), F_2(X_2^i), …, F_d(X_d^i)), i = 1, …, n.

However, the marginal distribution functions F_i are usually not known. Therefore, one can construct pseudo copula observations by using the empirical distribution functions

F_k^n(x) = (1/n) Σ_{i=1}^n 1(X_k^i ≤ x)

instead. Then, the pseudo copula observations are defined as

(Ũ_1^i, Ũ_2^i, …, Ũ_d^i) = (F_1^n(X_1^i), F_2^n(X_2^i), …, F_d^n(X_d^i)), i = 1, …, n.

The corresponding empirical copula is then defined as

C^n(u_1, …, u_d) = (1/n) Σ_{i=1}^n 1(Ũ_1^i ≤ u_1, …, Ũ_d^i ≤ u_d).
The components of the pseudo copula samples can also be written as Ũ_k^i = R_k^i / n, where R_k^i is the rank of the observation X_k^i:

R_k^i = Σ_{j=1}^n 1(X_k^j ≤ X_k^i).
Therefore, the empirical copula can be seen as the empirical distribution of the rank transformed data.
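A brief sketch of computing pseudo copula observations from ranks (using SciPy's rank function; the data here are arbitrary, and dividing by n + 1 instead of n is a common variant rather than the article's exact convention):

```python
import numpy as np
from scipy import stats

x = np.random.default_rng(7).normal(size=(200, 2))   # observations, one row per i
ranks = np.apply_along_axis(stats.rankdata, 0, x)    # R_k^i: rank of X_k^i within column k

# Pseudo copula observations; the n + 1 denominator keeps values strictly inside (0, 1).
u_tilde = ranks / (x.shape[0] + 1)

# A rank correlation computed from the pseudo-observations.
rho, _ = stats.spearmanr(u_tilde[:, 0], u_tilde[:, 1])
print(rho)
```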
The sample version of Spearman's rho:[19]
In quantitative finance copulas are applied to risk management, to portfolio management and optimization, and to derivatives pricing.
For the former, copulas are used to perform stress tests and robustness checks that are especially important during "downside/crisis/panic regimes" where extreme downside events may occur (e.g., the 2008 financial crisis). The formula was also adapted for financial markets and was used to estimate the probability distribution of losses on pools of loans or bonds.
During a downside regime, a large number of investors who have held positions in riskier assets such as equities or real estate may seek refuge in 'safer' investments such as cash or bonds. This is also known as a flight-to-quality effect, and investors tend to exit their positions in riskier assets in large numbers in a short period of time. As a result, during downside regimes, correlations across equities are greater on the downside as opposed to the upside, and this may have disastrous effects on the economy.[22][23] For example, anecdotally, we often read financial news headlines reporting the loss of hundreds of millions of dollars on the stock exchange in a single day; however, we rarely read reports of positive stock market gains of the same magnitude and in the same short time frame.
Copulas aid in analyzing the effects of downside regimes by allowing the modelling of the marginals and dependence structure of a multivariate probability model separately. For example, consider the stock exchange as a market consisting of a large number of traders each operating with his or her own strategies to maximize profits. The individualistic behaviour of each trader can be described by modelling the marginals. However, as all traders operate on the same exchange, each trader's actions have an interaction effect with other traders'. This interaction effect can be described by modelling the dependence structure. Therefore, copulas allow us to analyse the interaction effects, which are of particular interest during downside regimes as investors tend to herd their trading behaviour and decisions. (See also agent-based computational economics, where price is treated as an emergent phenomenon, resulting from the interaction of the various market participants, or agents.)
The users of the formula have been criticized for creating "evaluation cultures" that continued to use simple copulæ despite the simple versions being acknowledged as inadequate for that purpose.[24][25] Previously, scalable copula models for large dimensions only allowed the modelling of elliptical dependence structures (i.e., Gaussian and Student-t copulas) that do not allow for correlation asymmetries where correlations differ on the upside or downside regimes. However, the development of vine copulas[26] (also known as pair copulas) enables the flexible modelling of the dependence structure for portfolios of large dimensions.[27] The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied in portfolio optimization and risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copulas.[28]
Other models developed for risk management applications are panic copulas, which are glued with market estimates of the marginal distributions to analyze the effects of panic regimes on the portfolio profit and loss distribution. Panic copulas are created by Monte Carlo simulation, mixed with a re-weighting of the probability of each scenario.[29]
As regards derivatives pricing, dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis, for example in the pricing of collateralized debt obligations (CDOs).[30] Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the causes of the 2008 financial crisis;[31][32][33] see David X. Li § CDOs and Gaussian copula.
Despite this perception, there were documented attempts within the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically the lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, as dependence is only modeled using the variance-covariance matrix.[28] This methodology is limited in that it does not allow for dependence to evolve, whereas financial markets exhibit asymmetric dependence, with correlations across assets increasing significantly during downturns compared to upturns. Therefore, modeling approaches using the Gaussian copula exhibit a poor representation of extreme events.[28][34] There have been attempts to propose models rectifying some of the copula limitations.[34][35][36]
In addition to CDOs, copulas have been applied to other asset classes as a flexible tool in analyzing multi-asset derivative products. The first such application outside credit was to use a copula to construct a basket implied volatility surface,[37] taking into account the volatility smile of basket components. Copulas have since gained popularity in pricing and risk management[38] of options on multi-assets in the presence of a volatility smile, in equity, foreign exchange, and fixed income derivatives.
Recently, copula functions have been successfully applied to the database formulation for the reliability analysis of highway bridges, and to various multivariate simulation studies in civil engineering,[39] reliability of wind and earthquake engineering,[40] and mechanical and offshore engineering.[41] Researchers are also trying these functions in the field of transportation to understand the interaction between behaviors of individual drivers which, in totality, shapes traffic flow.
Copulas are being used for reliability analysis of complex systems of machine components with competing failure modes.[42]
Copulas are being used for warranty data analysis in which the tail dependence is analysed.[43]
Copulas are used in modelling turbulent partially premixed combustion, which is common in practical combustors.[44][45]
Copulæ have many applications in the area of medicine, for example,
The combination of SSA and copula-based methods has been applied for the first time as a novel stochastic tool for Earth Orientation Parameters prediction.[60][61]
Copulas have been used in both theoretical and applied analyses of hydroclimatic data. Theoretical studies adopted the copula-based methodology for instance to gain a better understanding of the dependence structures of temperature and precipitation, in different parts of the world.[9][62][63]Applied studies adopted the copula-based methodology to examine e.g., agricultural droughts[64]or joint effects of temperature and precipitation extremes on vegetation growth.[65]
Copulas have been extensively used in climate- and weather-related research.[66][67]
Copulas have been used to estimate the solar irradiance variability in spatial networks and temporally for single locations.[68][69]
Large synthetic traces of vectors and stationary time series can be generated using empirical copula while preserving the entire dependence structure of small datasets.[70]Such empirical traces are useful in various simulation-based performance studies.[71]
Copulas have been used for quality ranking in the manufacturing of electronically commutated motors.[72]
Copulas are important because they represent a dependence structure without using marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this section, a short mathematical derivation to obtain the copula density function is presented, followed by a table providing a list of copula density functions with the relevant signal processing applications.
Copulas have been used for determining the core radio luminosity function of active galactic nuclei (AGNs),[73] which cannot be realized using traditional methods due to difficulties in sample completeness.
For any two random variables X and Y, the continuous joint probability distribution function can be written as

F_{XY}(x, y) = Pr{X ≤ x, Y ≤ y},
where F_X(x) = Pr{X ≤ x} and F_Y(y) = Pr{Y ≤ y} are the marginal cumulative distribution functions of the random variables X and Y, respectively.
Then the copula distribution function C(u, v) can be defined using Sklar's theorem[74][6] as

F_{XY}(x, y) = C(F_X(x), F_Y(y)) = C(u, v),
where u = F_X(x) and v = F_Y(y) are the marginal distribution functions, F_{XY}(x, y) is the joint CDF, and u, v ∈ (0, 1).
Assuming F_{XY}(·,·) is a.e. twice differentiable, we start from the relationship between the joint probability density function (PDF) and the joint cumulative distribution function (CDF) and its partial derivatives:

f_{XY}(x, y) = ∂²F_{XY}(x, y) / (∂x ∂y) = ∂²C(F_X(x), F_Y(y)) / (∂x ∂y) = c(u, v) · f_X(x) · f_Y(y),

where c(u, v) = ∂²C(u, v) / (∂u ∂v) is the copula density function, and f_X(x) and f_Y(y) are the marginal probability density functions of X and Y, respectively. There are four elements in this equation, and if any three elements are known, the fourth element can be calculated. For example, it may be used,
Various bivariate copula density functions are important in the area of signal processing. Here u = F_X(x) and v = F_Y(y) are the marginal distribution functions and f_X(x) and f_Y(y) are the marginal density functions. Extensions and generalizations of copulas for statistical signal processing have been shown to construct new bivariate copulas for exponential, Weibull, and Rician distributions. Zeng et al.[75] presented algorithms, simulation, optimal selection, and practical applications of these copulas in signal processing.
Reported signal processing applications include validating biometric authentication,[77] modeling stochastic dependence in large-scale integration of wind power,[78] unsupervised classification of radar signals,[79] and fusion of correlated sensor decisions.[92]
|
https://en.wikipedia.org/wiki/Copula_(statistics)
|
Problem structuring methods (PSMs) are a group of techniques used to model or to map the nature or structure of a situation or state of affairs that some people want to change.[1] PSMs are usually used by a group of people in collaboration (rather than by a solitary individual) to create a consensus about, or at least to facilitate negotiations about, what needs to change.[2] Some widely adopted PSMs include:[1]
Unlike some problem solving methods that assume that all the relevant issues, constraints, and goals that constitute the problem are defined in advance or are uncontroversial, PSMs assume that there is no single uncontested representation of what constitutes the problem.[6]
PSMs are mostly used with groups of people, but PSMs have also influenced the coaching and counseling of individuals.[7]
The term "problem structuring methods" as a label for these techniques began to be used in the 1980s in the field ofoperations research,[8]especially after the publication of the bookRational Analysis for a Problematic World: Problem Structuring Methods for Complexity, Uncertainty and Conflict.[9]Some of the methods that came to be called PSMs had been in use since the 1960s.[2]
Thinkers who later came to be recognized as significant early contributors to the theory and practice of PSMs include:[10]
In discussions of problem structuring methods, it is common to distinguish between two different types of situations that could be considered to be problems.[17] Rittel and Webber's distinction between tame problems and wicked problems (Rittel & Webber 1973) is a well-known example of such types.[17] The following table lists similar (but not exactly equivalent) distinctions made by a number of thinkers between two types of "problem" situations, which can be seen as a continuum between a left and right extreme:[18]
Tame problems (or puzzles or technical challenges) have relatively precise, straightforward formulations that are often amenable to solution with some predetermined technical fix or algorithm. It is clear when these situations have changed in such a way that the problem can be called solved.
Wicked problems (or messes or adaptive challenges) have multiple interacting issues with multiple stakeholders and uncertainties and no definitive formulation. These situations are complex and have no stopping rule and no ultimate test of a solution.
PSMs were developed for situations that tend toward the wicked or "soft" side, when methods are needed that assist argumentation about, or that generate mutual understanding of multiple perspectives on, a complex situation.[17] Other problem solving methods are better suited to situations toward the tame or "hard" side, where a reliable and optimal solution is needed to a problem that can be clearly and uncontroversially defined.
Problem structuring methods constitute a family of approaches that have differing purposes and techniques, and many of them had been developed independently before people began to notice their family resemblance.[17]Several scholars have noted the common and divergent characteristics among PSMs.
Eden and Ackermann identified four characteristics that problem structuring methods have in common:[19]
Rosenhead provided another list of common characteristics of PSMs, formulated in a more prescriptive style:[20]
An early literature review of problem structuring proposed grouping the texts reviewed into "four streams of thought" that describe some major differences between methods:[21]
Mingers and Rosenhead have noted that there are similarities and differences between PSMs and large group methods such as Future Search, Open Space Technology, and others.[22] PSMs and large group methods both bring people together to talk about, and to share different perspectives on, a situation or state of affairs that some people want to change. However, PSMs always focus on creating a sufficiently rigorous conceptual model or cognitive map of the situation, whereas large group methods do not necessarily emphasize modeling, and PSMs are not necessarily used with large groups of people.[22]
There is significant overlap, or shared characteristics, between PSMs and some of the techniques used in participatory rural appraisal (PRA). Mingers and Rosenhead pointed out that in situations where people have low literacy, the nonliterate (oral and visual) techniques developed in PRA would be a necessary complement to PSMs, and the approaches to modeling in PSMs could be (and have been) used by practitioners of PRA.[23]
In 2004, Mingers and Rosenhead published a literature review of papers that had been published in scholarly journals and that reported practical applications of PSMs.[24] Their literature survey covered the period up to 1998, which was "relatively early in the development of interest in PSMs",[25] and categorized 51 reported applications under the following application areas: general organizational applications; information systems; technology, resources, planning; health services; and general research. Examples of applications reported included: designing a parliamentary briefing system, modeling the San Francisco Zoo, developing a business strategy and information system strategy, planning livestock management in Nepal, regional planning in South Africa, modeling hospital outpatient services, and eliciting knowledge about pesticides.[24]
PSMs are a general methodology and are not necessarily dependent on electronic information technology,[26] but PSMs do rely on some kind of shared display of the models that participants are developing. The shared display could be flip charts, a large whiteboard, Post-it notes on the meeting room walls, and/or a personal computer connected to a video projector.[26] After PSMs have been used in a group work session, it is normal for a record of the session's display to be shared with participants and with other relevant people.[26]
Software programs for supporting problem structuring include Banxia Decision Explorer and Group Explorer,[27] which implement cognitive mapping for strategic options development and analysis (SODA), and Compendium, which implements IBIS for dialogue mapping and related methods;[28] a similar program is called Wisdom.[29] Such software can serve a variety of functions, such as simple technical assistance to the group facilitator during a single event, or more long-term online group decision support systems.
Some practitioners prefer not to use computers during group work sessions because of the effect they have on group dynamics, but such use of computers is standard in some PSMs such as SODA[27] and dialogue mapping,[28] in which computer display of models or maps is intended to guide conversation in the most efficient way.[26]
In some situations, additional software that is not used only for PSMs may be incorporated into the problem structuring process; examples include spreadsheet modeling, system dynamics software,[30] and geographic information systems.[31] Some practitioners, who have focused on building system dynamics simulation models with groups of people, have called their work group model building (GMB) and have concluded "that GMB is another PSM".[32] GMB has also been used in combination with SODA.[33]
|
https://en.wikipedia.org/wiki/Problem_structuring_methods
|
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.[1][2][3] Inherently, multi-task learning is a multi-objective optimization problem having trade-offs between different tasks.[4] Early versions of MTL were called "hints".[5][6]
In a widely cited 1997 paper, Rich Caruana gave the following characterization:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.[3]
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but not so for Russian speakers. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance.[citation needed] Further examples of settings for MTL include multiclass classification and multi-label classification.[7]
Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled.[8] However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.[8][9]
The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge:
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.[10] Task relatedness can be imposed a priori or learned from the data.[7][11] Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly.[8][12] For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.[8]
One can attempt to learn a group of principal tasks using a group of auxiliary tasks unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.[9]
Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep convolutional neural network GoogLeNet,[13] an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.[14]
Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL).[15] Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from the previous experience of another learner to quickly adapt to its new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
Multi-task optimization focuses on solving multiple optimization tasks simultaneously.[16][17] The paradigm has been inspired by the well-established concepts of transfer learning[18] and multi-task learning in predictive analytics.[19]
The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,[20] the search progress on one task can be transferred to substantially accelerate the search on another.
The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks. In practice, an attempt may be made to intentionally solve a more difficult task, which may in turn unintentionally solve several smaller problems.[21]
There is a direct relationship between multitask optimization and multi-objective optimization.[22]
In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models.[23] Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representations, i.e., if the gradients of different tasks point in opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics.
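A minimal sketch of the shared-module/task-head setup described above, with the simplest possible aggregation (summing the per-task losses so that a single joint gradient is formed); the architecture, sizes, and data here are made up for illustration, and PyTorch is only one possible framework:

```python
import torch
import torch.nn as nn

# Hypothetical hard-parameter-sharing model: a shared trunk with two task heads.
shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head_a = nn.Linear(32, 1)   # task A: regression
head_b = nn.Linear(32, 3)   # task B: 3-class classification

params = list(shared.parameters()) + list(head_a.parameters()) + list(head_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(8, 16)                 # one shared input batch
y_a = torch.randn(8, 1)                # targets for task A
y_b = torch.randint(0, 3, (8,))        # targets for task B

z = shared(x)                          # joint feature representation
loss = (nn.functional.mse_loss(head_a(z), y_a)
        + nn.functional.cross_entropy(head_b(z), y_b))  # summed per-task losses
opt.zero_grad()
loss.backward()                        # the shared module receives both tasks' gradients
opt.step()
```

More elaborate aggregation schemes reweight or project the per-task gradients instead of simply summing the losses, which is where the negative-transfer mitigation methods mentioned above come in.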
There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on game theory.[16]
Multi-task Bayesian optimization is a modern model-based approach that leverages the concept of knowledge transfer to speed up the automatic hyperparameter optimization process of machine learning algorithms.[24] The method builds a multi-task Gaussian process model on the data originating from different searches progressing in tandem.[25] The captured inter-task dependencies are thereafter utilized to better inform the subsequent sampling of candidate solutions in respective search spaces.
Evolutionary multi-tasking has been explored as a means of exploiting the implicit parallelism of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks crossover.[17][26] Recently, modes of knowledge transfer that are different from direct solution crossover have been explored.[27][28]
Game-theoretic approaches to multi-task optimization propose to view the optimization problem as a game, where each task is a player. All players compete through the reward matrix of the game, and try to reach a solution that satisfies all players (all tasks). This view provides insight into how to build efficient algorithms based on gradient descent optimization (GD), which is particularly important for training deep neural networks.[29] In GD for MTL, the problem is that each task provides its own loss, and it is not clear how to combine all losses and create a single unified gradient, leading to several different aggregation strategies.[30][31][32] This aggregation problem can be solved by defining a game matrix where the reward of each player is the agreement of its own gradient with the common gradient, and then setting the common gradient to be the Nash cooperative bargaining[33] solution of that system.
Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.[26] In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.[34][35] In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning.[36][37]
Applications have also been reported in cloud computing,[38] with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.[17][39] Recent work has additionally shown applications in chemistry.[40] In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.[41][42]
The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.[7]
Suppose the training data set is S_t = {(x_i^t, y_i^t)}_{i=1}^{n_t}, with x_i^t ∈ X and y_i^t ∈ Y, where t indexes the task and t ∈ {1, …, T}. Let n = Σ_{t=1}^T n_t. In this setting there is a consistent input and output space and the same loss function L : R × R → R_+ for each task. This results in the regularized machine learning problem:

min_{f ∈ H}  Σ_{t=1}^T (1/n_t) Σ_{i=1}^{n_t} L(y_i^t, f_t(x_i^t)) + λ ||f||_H²        (1)
where H is a vector-valued reproducing kernel Hilbert space with functions f : X → Y^T having components f_t : X → Y.
The reproducing kernel for the space H of functions f : X → R^T is a symmetric matrix-valued function Γ : X × X → R^{T×T}, such that Γ(·, x)c ∈ H and the following reproducing property holds:

⟨f(x), c⟩_{R^T} = ⟨f, Γ(·, x)c⟩_H.
The reproducing kernel gives rise to a representer theorem showing that any solution to equation (1) has the form

f(x) = Σ_{i=1}^n Γ(x, x_i) c_i.
The form of the kernel Γ induces both the representation of the feature space and the structuring of the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space X and on the tasks {1, …, T}. In this case the kernel relating scalar components f_t and f_s is given by γ((x_i, t), (x_j, s)) = k(x_i, x_j) k_T(s, t) = k(x_i, x_j) A_{s,t}. For vector-valued functions f ∈ H we can write Γ(x_i, x_j) = k(x_i, x_j) A, where k is a scalar reproducing kernel and A is a symmetric positive semi-definite T × T matrix. Henceforth denote S_+^T = {PSD matrices} ⊂ R^{T×T}.
This factorization property, separability, implies that the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A. Methods for non-separable kernels Γ are a current field of research.
For the separable case, the representer theorem reduces to f(x) = Σ_{i=1}^N k(x, x_i) A c_i. The model output on the training data is then KCA, where K is the n × n empirical kernel matrix with entries K_{i,j} = k(x_i, x_j), and C is the n × T matrix whose rows are the c_i.
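A small numerical sketch of this separable-kernel prediction (a Gaussian input kernel and an arbitrary task matrix A are assumed purely for illustration):

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    """Scalar input kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n, p, T = 50, 4, 3
X = rng.normal(size=(n, p))        # training inputs
C = rng.normal(size=(n, T))        # coefficient matrix (one row c_i per example)
A = np.array([[1.0, 0.5, 0.0],     # task matrix: tasks 1 and 2 related,
              [0.5, 1.0, 0.0],     # task 3 independent
              [0.0, 0.0, 1.0]])

K = gaussian_kernel(X, X)          # n x n empirical kernel matrix
F_train = K @ C @ A                # model outputs on the training data: K C A

X_new = rng.normal(size=(5, p))
F_new = gaussian_kernel(X_new, X) @ C @ A   # f(x) = sum_i k(x, x_i) A c_i
```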
With the separable kernel, equation (1) can be rewritten as

(P)   min_{C ∈ R^{n×T}}  V(Y, KCA) + λ tr(K C A C^⊤),

where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if Y_i^t is a missing observation.)
Note that the second term in P can be derived as follows (using the representer form of f and the reproducing property):

||f||_H² = Σ_{i,j=1}^n k(x_i, x_j) ⟨c_i, A c_j⟩_{R^T} = tr(K C A C^⊤).
There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, and through an output mapping.
Regularizer: With the separable kernel, it can be shown (below) that $\|f\|_{\mathcal{H}}^{2}=\sum _{s,t=1}^{T}A_{t,s}^{\dagger }\langle f_{s},f_{t}\rangle _{\mathcal{H}_{k}}$, where $A_{t,s}^{\dagger }$ is the $t,s$ element of the pseudoinverse of $A$, $\mathcal{H}_{k}$ is the RKHS based on the scalar kernel $k$, and $f_{t}(x)=\sum _{i=1}^{n}k(x,x_{i})A_{t}^{\top }c_{i}$. This formulation shows that $A_{t,s}^{\dagger }$ controls the weight of the penalty associated with $\langle f_{s},f_{t}\rangle _{\mathcal{H}_{k}}$. (Note that $\langle f_{s},f_{t}\rangle _{\mathcal{H}_{k}}$ arises from $\|f_{t}\|_{\mathcal{H}_{k}}=\langle f_{t},f_{t}\rangle _{\mathcal{H}_{k}}$.)
\[
\begin{aligned}
\|f\|_{\mathcal{H}}^{2}&=\left\langle \sum _{i=1}^{n}\gamma ((x_{i},t_{i}),\cdot )c_{i}^{t_{i}},\sum _{j=1}^{n}\gamma ((x_{j},t_{j}),\cdot )c_{j}^{t_{j}}\right\rangle _{\mathcal{H}}\\
&=\sum _{i,j=1}^{n}c_{i}^{t_{i}}c_{j}^{t_{j}}\gamma ((x_{i},t_{i}),(x_{j},t_{j}))\\
&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}c_{i}^{t}c_{j}^{s}k(x_{i},x_{j})A_{s,t}\\
&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},Ac_{j}\rangle _{\mathbb{R}^{T}}\\
&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle c_{i},AA^{\dagger }Ac_{j}\rangle _{\mathbb{R}^{T}}\\
&=\sum _{i,j=1}^{n}k(x_{i},x_{j})\langle Ac_{i},A^{\dagger }Ac_{j}\rangle _{\mathbb{R}^{T}}\\
&=\sum _{i,j=1}^{n}\sum _{s,t=1}^{T}(Ac_{i})^{t}(Ac_{j})^{s}k(x_{i},x_{j})A_{s,t}^{\dagger }\\
&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\left\langle \sum _{i=1}^{n}k(x_{i},\cdot )(Ac_{i})^{t},\sum _{j=1}^{n}k(x_{j},\cdot )(Ac_{j})^{s}\right\rangle _{\mathcal{H}_{k}}\\
&=\sum _{s,t=1}^{T}A_{s,t}^{\dagger }\langle f_{t},f_{s}\rangle _{\mathcal{H}_{k}}
\end{aligned}
\]
Output metric: an alternative output metric on $\mathcal{Y}^{T}$ can be induced by the inner product $\langle y_{1},y_{2}\rangle _{\Theta }=\langle y_{1},\Theta y_{2}\rangle _{\mathbb{R}^{T}}$. With the squared loss there is an equivalence between the separable kernels $k(\cdot ,\cdot )I_{T}$ under the alternative metric, and $k(\cdot ,\cdot )\Theta$ under the canonical metric.
Output mapping: outputs can be mapped as $L:\mathcal{Y}^{T}\rightarrow \tilde{\mathcal{Y}}$ to a higher dimensional space to encode complex structures such as trees, graphs and strings. For linear maps $L$, with appropriate choice of separable kernel, it can be shown that $A=L^{\top }L$.
Via the regularizer formulation, one can represent a variety of task structures easily.
Learning problem $P$ can be generalized to admit learning the task matrix $A$ as follows:
The choice of $F:S_{+}^{T}\rightarrow \mathbb{R}_{+}$ must be designed to learn matrices $A$ of a given type. See "Special cases" below.
Restricting to the case of convex losses and coercive penalties, Ciliberto et al. have shown that although $Q$ is not jointly convex in $C$ and $A$, a related problem is jointly convex.
Specifically, on the convex set $\mathcal{C}=\{(C,A)\in \mathbb{R}^{n\times T}\times S_{+}^{T}\mid \operatorname{Range}(C^{\top }KC)\subseteq \operatorname{Range}(A)\}$, the equivalent problem
is convex with the same minimum value. And if $(C_{R},A_{R})$ is a minimizer for $R$ then $(C_{R}A_{R}^{\dagger },A_{R})$ is a minimizer for $Q$.
$R$ may be solved by a barrier method on a closed set by introducing the following perturbation:
The perturbation via the barrier $\delta ^{2}\operatorname{tr}(A^{\dagger })$ forces the objective function to be equal to $+\infty$ on the boundary of $\mathbb{R}^{n\times T}\times S_{+}^{T}$.
$S$ can be solved with a block coordinate descent method, alternating in $C$ and $A$. This results in a sequence of minimizers $(C_{m},A_{m})$ in $S$ that converges to the solution in $R$ as $\delta _{m}\rightarrow 0$, and hence gives the solution to $Q$.
Spectral penalties - Dinuzzo et al.[43] suggested setting $F$ as the Frobenius norm $\sqrt{\operatorname{tr}(A^{\top }A)}$. They optimized $Q$ directly using block coordinate descent, not accounting for difficulties at the boundary of $\mathbb{R}^{n\times T}\times S_{+}^{T}$.
Clustered tasks learning - Jacob et al.[44] suggested learning $A$ in the setting where $T$ tasks are organized in $R$ disjoint clusters. In this case let $E\in \{0,1\}^{T\times R}$ be the matrix with $E_{t,r}=\mathbb{I}({\text{task }}t\in {\text{group }}r)$. Setting $M=I-E^{\dagger }E^{T}$, and $U={\frac {1}{T}}\mathbf{11}^{\top }$, the task matrix $A^{\dagger }$ can be parameterized as a function of $M$: $A^{\dagger }(M)=\epsilon _{M}U+\epsilon _{B}(M-U)+\epsilon (I-M)$, with terms that penalize the average, the between-cluster variance, and the within-cluster variance of the task predictions, respectively. The set of admissible matrices $M$ is not convex, but there is a convex relaxation $\mathcal{S}_{c}=\{M\in S_{+}^{T}:I-M\in S_{+}^{T}\land \operatorname{tr}(M)=r\}$. In this formulation, $F(A)=\mathbb{I}(A(M)\in \{A:M\in \mathcal{S}_{C}\})$.
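A simplified, hypothetical sketch of that idea follows: alternating updates of C and A for the objective ||Y - KCA||_F^2 + lam * tr(A C^T K C) + mu * ||A||_F (the Frobenius-norm penalty on A), using plain gradient steps and a projection onto the PSD cone rather than the exact block updates of the original paper.

import numpy as np

def project_psd(A):
    # Project a symmetric matrix onto the PSD cone by clipping its eigenvalues at zero.
    A = (A + A.T) / 2
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T

def alternating_descent(K, Y, T, lam=0.1, mu=0.1, lr=1e-3, iters=500):
    n = K.shape[0]
    rng = np.random.default_rng(0)
    C = rng.normal(scale=0.01, size=(n, T))
    A = np.eye(T)
    for _ in range(iters):
        resid = K @ C @ A - Y                              # residual of the squared loss
        # Gradient step in C with A fixed.
        grad_C = 2 * K @ resid @ A + 2 * lam * K @ C @ A
        C -= lr * grad_C
        # Gradient step in A with C fixed, then project back onto S_+^T.
        G = C.T @ K @ C
        grad_A = 2 * (K @ C).T @ resid + lam * G + mu * A / (np.linalg.norm(A) + 1e-12)
        A = project_psd(A - lr * grad_A)
    return C, A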
Non-convex penalties - Penalties can be constructed such that $A$ is constrained to be a graph Laplacian, or such that $A$ has a low-rank factorization. However, these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases.
Non-separable kernels - Separable kernels are limited; in particular, they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels.
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR)[45]implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning,[46][47]Multi-Task Learning with Joint Feature Selection,[48]Robust Multi-Task Feature Learning,[49]Trace-Norm Regularized Multi-Task Learning,[50]Alternating Structural Optimization,[51][52]Incoherent Low-Rank and Sparse Learning,[53]Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning,[54][55]Multi-Task Learning with Graph Structures.
|
https://en.wikipedia.org/wiki/Multitask_optimization
|
Inductive probabilityattempts to give theprobabilityof future events based on past events. It is the basis forinductive reasoning, and gives the mathematical basis forlearningand the perception of patterns. It is a source ofknowledgeabout the world.
There are three sources of knowledge:inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis isBayes' theorem.
Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents probabilities of statements.
Occam's razorsays the "simplest theory, consistent with the data is most likely to be correct". The "simplest theory" is interpreted as the representation of the theory written in this internal language. The theory with the shortest encoding in this internal language is most likely to be correct.
Historically, probability and statistics focused on probability distributions and tests of significance. Probability was formal, well defined, but limited in scope. In particular, its application was limited to situations that could be defined as an experiment or trial, with a well-defined population.
Bayes' theorem is named after Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depended on prior probabilities to generate new probabilities. It was unclear where these prior probabilities should come from.
Circa 1964, Ray Solomonoff developed algorithmic probability, which gave an explanation for what randomness is and how patterns in the data may be represented by computer programs that give shorter representations of the data.
Chris Wallaceand D. M. Boulton developedminimum message lengthcirca 1968. LaterJorma Rissanendeveloped theminimum description lengthcirca 1978. These methods allowinformation theoryto be related to probability, in a way that can be compared to the application of Bayes' theorem, but which give a source and explanation for the role of prior probabilities.
Marcus Huttercombineddecision theorywith the work of Ray Solomonoff andAndrey Kolmogorovto give a theory for thePareto optimalbehavior for anIntelligent agent, circa 1998.
The program with the shortest length that matches the data is the most likely to predict future data. This is the thesis behind theminimum message length[1]andminimum description length[2]methods.
At first sight Bayes' theorem appears different from the minimum message/description length principle. On closer inspection it turns out to be the same. Bayes' theorem is about conditional probabilities, and states the probability that event B happens if firstly event A happens:
becomes in terms of message lengthL,
This means that if all the information is given describing an event then the length of the information may be used to give the raw probability of the event. So if the information describing the occurrence ofAis given, along with the information describingBgivenA, then all the information describingAandBhas been given.[3][4]
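A tiny numeric illustration (with made-up probabilities) of reading probabilities as message lengths, L(x) = -log2 P(x), so that P(A and B) = P(A) P(B|A) becomes L(A and B) = L(A) + L(B|A):

import math

def length_bits(p):
    # Information length of an event with probability p.
    return -math.log2(p)

P_A, P_B_given_A = 0.25, 0.5               # hypothetical probabilities
L_A = length_bits(P_A)                     # 2 bits
L_B_given_A = length_bits(P_B_given_A)     # 1 bit
L_AB = length_bits(P_A * P_B_given_A)      # 3 bits
assert math.isclose(L_AB, L_A + L_B_given_A)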
Overfittingoccurs when the model matches the random noise and not the pattern in the data. For example, take the situation where a curve is fitted to a set of points. If a polynomial with many terms is fitted then it can more closely represent the data. Then the fit will be better, and the information needed to describe the deviations from the fitted curve will be smaller. Smaller information length means higher probability.
However, the information needed to describe the curve must also be considered. The total information for a curve with many terms may be greater than for a curve with fewer terms, that has not as good a fit, but needs less information to describe the polynomial.
Solomonoff's theory of inductive inferenceis also inductive inference. A bit stringxis observed. Then consider all programs that generate strings starting withx. Cast in the form of inductive inference, the programs are theories that imply the observation of the bit stringx.
The method used here to give probabilities for inductive inference is based onSolomonoff's theory of inductive inference.
If all the bits are 1, then people infer that there is a bias in the coin and that it is more likely that the next bit is also 1. This is described as learning from, or detecting a pattern in, the data.
Such a pattern may be represented by a computer program. A short computer program may be written that produces a series of bits which are all 1. If the length of the program $K$ is $L(K)$ bits then its prior probability is,
The length of the shortest program that represents the string of bits is called theKolmogorov complexity.
Kolmogorov complexity is not computable. This is related to thehalting problem. When searching for the shortest program some programs may go into an infinite loop.
The Greek philosopherEpicurusis quoted as saying "If more than one theory is consistent with the observations, keep all theories".[5]
As in a crime novel all theories must be considered in determining the likely murderer, so with inductive probability all programs must be considered in determining the likely future bits arising from the stream of bits.
Programs that are already longer than $n$ have no predictive power. The raw (or prior) probability that the pattern of bits is random (has no pattern) is $2^{-n}$.
Each program that produces the sequence of bits, but is shorter than $n$, is a theory/pattern about the bits with a probability of $2^{-k}$ where $k$ is the length of the program.
The probability of receiving a sequence of bits $y$ after receiving a series of bits $x$ is then the conditional probability of receiving $y$ given $x$, which is the probability of $x$ with $y$ appended, divided by the probability of $x$.[6][7][8]
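A toy sketch of this prediction rule, using a small invented set of pattern "programs" with made-up code lengths: each hypothesis whose output starts with the observed bits x is weighted by 2^-length, and the next bit is predicted from the weighted mixture.

def hypotheses():
    # (name, bit generator, code length in bits); all three are invented for illustration.
    yield ("all ones",    lambda i: 1,     6)
    yield ("alternating", lambda i: i % 2, 9)
    yield ("all zeros",   lambda i: 0,     6)

def predict_next(x):
    num = den = 0.0
    for name, gen, length in hypotheses():
        if all(gen(i) == b for i, b in enumerate(x)):   # program output starts with x
            w = 2.0 ** -length
            den += w
            num += w * gen(len(x))                      # next bit under this hypothesis
    return num / den if den else 0.5                    # P(next bit = 1 | x)

print(predict_next([1, 1, 1, 1]))   # dominated by the short "all ones" hypothesis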
The programming language affects the predictions of the next bit in the string. The language acts as aprior probability. This is particularly a problem where the programming language codes for numbers and other data types. Intuitively we think that 0 and 1 are simple numbers, and that prime numbers are somehow more complex than numbers that may be composite.
Using the Kolmogorov complexity gives an unbiased estimate (a universal prior) of the prior probability of a number. As a thought experiment, an intelligent agent may be fitted with a data input device giving a series of numbers, after applying some transformation function to the raw numbers. Another agent might have the same input device with a different transformation function. The agents do not see or know about these transformation functions. Then there appears to be no rational basis for preferring one function over another. A universal prior ensures that although two agents may have different initial probability distributions for the data input, the difference will be bounded by a constant.
So universal priors do not eliminate an initial bias, but they reduce and limit it. Whenever we describe an event in a language, whether a natural language or any other, the language has encoded in it our prior expectations. So some reliance on prior probabilities is inevitable.
A problem arises where an intelligent agent's prior expectations interact with the environment to form a self-reinforcing feedback loop. This is the problem of bias or prejudice. Universal priors reduce but do not eliminate this problem.
The theory ofuniversal artificial intelligenceappliesdecision theoryto inductive probabilities. The theory shows how the best actions to optimize a reward function may be chosen. The result is a theoretical model of intelligence.[9]
It is a fundamental theory of intelligence, which optimizes the agent's behavior in,
In general no agent will always provide the best actions in all situations. A particular choice made by an agent may be wrong, and the environment may provide no way for the agent to recover from an initial bad choice. However, the agent is Pareto optimal in the sense that no other agent will do better than this agent in this environment without doing worse in another environment. No other agent may, in this sense, be said to be better.
At present the theory is limited by incomputability (thehalting problem). Approximations may be used to avoid this. Processing speed andcombinatorial explosionremain the primary limiting factors forartificial intelligence.
Probability is the representation of uncertain or partial knowledge about the truth of statements. Probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data.
This description of probability may seem strange at first. In natural language we refer to "the probability" that the sun will rise tomorrow. We do not refer to "your probability" that the sun will rise. But in order for inference to be correctly modeled probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities.
Probabilities are personal because they are conditional on the knowledge of the individual. Probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. Subjective should not be taken here to mean vague or undefined.
The termintelligent agentis used to refer to the holder of the probabilities. The intelligent agent may be a human or a machine. If the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event.
If however the agent uses the probability to interact with the environment there may be a feedback, so that two agents in the identical environment starting with only slightly different priors, end up with completely different probabilities. In this case optimaldecision theoryas inMarcus Hutter'sUniversal Artificial Intelligence will givePareto optimalperformance for the agent. This means that no other intelligent agent could do better in one environment without doing worse in another environment.
In deductive probability theories, probabilities are absolutes, independent of the individual making the assessment. But deductive probabilities are based on,
For example, in a trial the participants are aware of the outcomes of all previous trials. They also assume that each outcome is equally probable. Together this allows a single unconditional value of probability to be defined.
But in reality each individual does not have the same information. And in general the probability of each outcome is not equal. The dice may be loaded, and this loading needs to be inferred from the data.
Theprinciple of indifferencehas played a key role in probability theory. It says that if N statements are symmetric so that one condition cannot be preferred over another then all statements are equally probable.[10]
Taken seriously, in evaluating probability this principle leads to contradictions. Suppose there are 3 bags of gold in the distance and one is asked to select one. Because of the distance one cannot see the bag sizes. Using the principle of indifference, one estimates that each bag has an equal amount of gold: each bag has one third of the gold.
Now, while one is not looking, someone takes one of the bags and divides it into 3 bags. Now there are 5 bags of gold. The principle of indifference now says each bag has one fifth of the gold. A bag that was estimated to have one third of the gold is now estimated to have one fifth of the gold.
Taken as a value associated with the bag the values are different therefore contradictory. But taken as an estimate given under a particular scenario, both values are separate estimates given under different circumstances and there is no reason to believe they are equal.
Estimates of prior probabilities are particularly suspect. Estimates will be constructed that do not follow any consistent frequency distribution. For this reason prior probabilities are considered as estimates of probabilities rather than probabilities.
A full theoretical treatment would associate with each probability,
Inductive probability combines two different approaches to probability.
Each approach gives a slightly different viewpoint. Information theory is used in relating probabilities to quantities of information. This approach is often used in giving estimates of prior probabilities.
Frequentist probabilitydefines probabilities as objective statements about how often an event occurs. This approach may be stretched by defining thetrialsto be overpossible worlds. Statements about possible worlds defineevents.
Whereas logic assigns a statement only two values, true and false, probability associates with each statement a number in [0,1]. If the probability of a statement is 0, the statement is false. If the probability of a statement is 1, the statement is true.
In considering some data as a string of bits, the prior probabilities of a 1 and a 0 are equal. Therefore, each extra bit halves the probability of a sequence of bits.
This leads to the conclusion that,
where $P(x)$ is the probability of the string of bits $x$ and $L(x)$ is its length.
The prior probability of any statement is calculated from the number of bits needed to state it. See alsoinformation theory.
Two statements $A$ and $B$ may be represented by two separate encodings. Then the length of the encoding is,
or in terms of probability,
But this law is not always true because there may be a shorter method of encoding $B$ if we assume $A$. So the above probability law applies only if $A$ and $B$ are "independent".
The primary use of the information approach to probability is to provide estimates of the complexity of statements. Recall that Occam's razor states that "All things being equal, the simplest theory is the most likely to be correct". In order to apply this rule, first there needs to be a definition of what "simplest" means. Information theory defines simplest to mean having the shortest encoding.
Knowledge is represented asstatements. Each statement is aBooleanexpression. Expressions are encoded by a function that takes a description (as against the value) of the expression and encodes it as a bit string.
The length of the encoding of a statement gives an estimate of the probability of a statement. This probability estimate will often be used as the prior probability of a statement.
Technically this estimate is not a probability because it is not constructed from a frequency distribution. The probability estimates given by it do not always obey the law of total probability. Applying the law of total probability to various scenarios will usually give a more accurate estimate of the prior probability than the estimate from the length of the statement.
An expression is constructed from sub expressions,
A Huffman code must distinguish the 3 cases. The length of each code is based on the frequency of each type of sub-expression.
Initially constants are all assigned the same length/probability. Later, constants may be assigned a probability using the Huffman code based on the number of uses of the function identifier in all expressions recorded so far. In using a Huffman code the goal is to estimate probabilities, not to compress the data.
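A minimal sketch of that idea, with hypothetical counts of sub-expression kinds: a Huffman code turns the frequencies into code lengths, and each length L gives a probability estimate 2^-L.

import heapq

def huffman_lengths(freqs):
    # freqs: dict symbol -> count; returns dict symbol -> code length in bits.
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        # Every symbol under the merged node gets one more bit in its code.
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

counts = {"constant": 40, "function application": 35, "quantifier": 25}   # invented counts
lengths = huffman_lengths(counts)
probabilities = {s: 2.0 ** -l for s, l in lengths.items()}                # estimate, not a compressor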
The length of a function application is the length of the function identifier constant plus the sum of the sizes of the expressions for each parameter.
The length of a quantifier is the length of the expression being quantified over.
No explicit representation of natural numbers is given. However natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. A distribution of natural numbers is implied by this, based on the complexity of constructing each number.
Rational numbers are constructed by the division of natural numbers. The simplest representation has no common factors between the numerator and the denominator. This allows the probability distribution of natural numbers to be extended to rational numbers.
The probability of aneventmay be interpreted as the frequencies ofoutcomeswhere the statement is true divided by the total number of outcomes. If the outcomes form a continuum the frequency may need to be replaced with ameasure.
Events are sets of outcomes. Statements may be related to events. A Boolean statement B about outcomes defines a set of outcomes b,
Each probability is always associated with the state of knowledge at a particular point in the argument. Probabilities before an inference are known as prior probabilities, and probabilities after are known as posterior probabilities.
Probability depends on the facts known. The truth of a fact limits the domain of outcomes to the outcomes consistent with the fact. Prior probabilities are the probabilities before a fact is known. Posterior probabilities are the probabilities after a fact is known. The posterior probabilities are said to be conditional on the fact. The probability that $B$ is true given that $A$ is true is written as $P(B|A)$.
All probabilities are in some sense conditional. The prior probability of $B$ is,
In the frequentist approach, probabilities are defined as the ratio of the number of outcomes within an event to the total number of outcomes. In the possible world model each possible world is an outcome, and statements about possible worlds define events. The probability of a statement being true is the number of possible worlds where the statement is true divided by the total number of possible worlds. The probability of a statement $A$ being true about possible worlds is then,
For a conditional probability.
then
Using symmetry this equation may be written out as Bayes' law.
This law describes the relationship between prior and posterior probabilities when new facts are learnt.
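A toy possible-worlds illustration (three invented binary features, so eight worlds): the probability of a statement is the fraction of worlds where it is true, and conditional probability follows by restricting attention to the worlds where the condition holds.

from itertools import product

worlds = list(product([0, 1], repeat=3))           # all 3-bit "worlds"

def probability(statement):                        # statement: world -> bool
    return sum(statement(w) for w in worlds) / len(worlds)

p_A = probability(lambda w: w[0] == 1)                          # P(A) = 1/2
p_A_and_B = probability(lambda w: w[0] == 1 and w[1] == 1)      # P(A and B) = 1/4
p_B_given_A = p_A_and_B / p_A                                   # P(B|A) = 1/2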
Written as quantities of informationBayes' Theorembecomes,
Two statements A and B are said to be independent if knowing the truth of A does not change the probability of B. Mathematically this is,
thenBayes' Theoremreduces to,
For a set of mutually exclusive possibilities $A_{i}$, the sum of the posterior probabilities must be 1.
Substituting using Bayes' theorem gives thelaw of total probability
This result is used to give theextended form of Bayes' theorem,
This is the usual form of Bayes' theorem used in practice, because it guarantees the sum of all the posterior probabilities for $A_{i}$ is 1.
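A small numeric example (made-up priors and likelihoods) of the extended form: the denominator is the law of total probability, so the posteriors automatically sum to 1.

priors      = [0.5, 0.3, 0.2]          # P(A_i), hypothetical
likelihoods = [0.10, 0.40, 0.80]       # P(B | A_i), hypothetical
evidence = sum(p * l for p, l in zip(priors, likelihoods))    # P(B), law of total probability
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]
assert abs(sum(posteriors) - 1.0) < 1e-12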
For mutually exclusive possibilities, the probabilities add.
Using
Then the alternatives
are all mutually exclusive. Also,
so, putting it all together,
As,
then
Implication is related to conditional probability by the following equation,
Derivation,
Bayes' theorem may be used to estimate the probability of a hypothesis or theory H, given some facts F. The posterior probability of H is then
or in terms of information,
By assuming the hypothesis is true, a simpler representation of the statement $F$ may be given. The length of the encoding of this simpler representation is $L(F|H)$.
$L(H)+L(F|H)$ represents the amount of information needed to represent the facts $F$, if $H$ is true. $L(F)$ is the amount of information needed to represent $F$ without the hypothesis $H$. The difference is how much the representation of the facts has been compressed by assuming that $H$ is true. This is the evidence that the hypothesis $H$ is true.
If $L(F)$ is estimated from encoding length then the probability obtained will not be between 0 and 1. The value obtained is proportional to the probability, without being a good probability estimate. The number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory.
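A small numeric illustration with invented message lengths: the evidence for H is how much describing the facts F shrinks when H is assumed, and 2 raised to that saving gives the relative probability.

L_F = 1000          # bits to describe the facts directly (hypothetical)
L_H = 120           # bits to state the hypothesis (hypothetical)
L_F_given_H = 600   # bits to describe the facts assuming H (hypothetical)
evidence_bits = L_F - (L_H + L_F_given_H)    # 280 bits of compression in favour of H
relative_probability = 2.0 ** evidence_bits  # how much more probable H makes the data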
If a full set of mutually exclusive hypotheses that provide evidence is known, a proper estimate may be given for the prior probability $P(F)$.
Probabilities may be calculated from the extended form of Bayes' theorem. Given all the mutually exclusive hypotheses $H_{i}$ which give evidence, such that,
and also the hypothesis $R$, that none of the hypotheses is true, then,
In terms of information,
In most situations it is a good approximation to assume that $F$ is independent of $R$, which means $P(F|R)=P(F)$, giving,
Abductive inference[11][12][13][14] starts with a set of facts $F$ which is a statement (Boolean expression). Abductive reasoning is of the form,
The theory $T$, also called an explanation of the condition $F$, is an answer to the ubiquitous factual "why" question. For example, for the condition $F$ the question is "Why do apples fall?". The answer is a theory $T$ that implies that apples fall;
Inductive inference is of the form,
In terms of abductive inference, "all objects in a class C or set have a property P" is a theory that implies the observed condition, "all observed objects in a class C have a property P".
Soinductive inferenceis a general case of abductive inference. In common usage the term inductive inference is often used to refer to both abductive and inductive inference.
Inductive inference is related togeneralization. Generalizations may be formed from statements by replacing a specific value with membership of a category, or by replacing membership of a category with membership of a broader category. In deductive logic, generalization is a powerful method of generating new theories that may be true. In inductive inference generalization generates theories that have a probability of being true.
The opposite of generalization is specialization. Specialization is used in applying a general rule to a specific case. Specializations are created from generalizations by replacing membership of a category by a specific value, or by replacing a category with a sub category.
The Linnaean classification of living things and objects forms the basis for generalization and specialization. The ability to identify, recognize and classify is the basis for generalization. Perceiving the world as a collection of objects appears to be a key aspect of human intelligence. It is the object-oriented model, in the non-computer science sense.
The object-oriented model is constructed from our perception. In particular, vision is based on the ability to compare two images and calculate how much information is needed to morph or map one image into another. Computer vision uses this mapping to construct 3D images from stereo image pairs.
Inductive logic programmingis a means of constructing theory that implies a condition. Plotkin's[15][16]"relative least general generalization (rlgg)" approach constructs the simplest generalization consistent with the condition.
Isaac Newtonused inductive arguments in constructing hislaw of universal gravitation.[17]Starting with the statement,
Generalizing by replacing apple with object, and Earth with object, gives, in a two-body system,
The theory explains all objects falling, so there is strong evidence for it. The second observation,
After some complicated mathematicalcalculus, it can be seen that if the acceleration follows the inverse square law then objects will follow an ellipse. So induction gives evidence for the inverse square law.
UsingGalileo'sobservation that all objects drop with the same speed,
where $i_{1}$ and $i_{2}$ are vectors towards the center of the other object. Then using Newton's third law $F_{1}=-F_{2}$,
Implication determines conditional probability as,
So,
This result may be used in the probabilities given for Bayesian hypothesis testing. For a single theory, H = T and,
or in terms of information, the relative probability is,
Note that this estimate for $P(T|F)$ is not a true probability. If $L(T_{i})<L(F)$ then the theory has evidence to support it. Then for a set of theories $T_{i}=H_{i}$, such that $L(T_{i})<L(F)$,
giving,
Make a list of all the shortest programs $K_{i}$ that each produce a distinct infinite string of bits, and satisfy the relation,
where $R(K_{i})$ is the result of running the program $K_{i}$ and $T_{n}$ truncates the string after $n$ bits.
The problem is to calculate the probability that the source is produced by program $K_{i}$, given that the truncated source after $n$ bits is $x$. This is represented by the conditional probability,
Using the extended form of Bayes' theorem
The extended form relies on the law of total probability. This means that the $s=R(K_{i})$ must be distinct possibilities, which is given by the condition that each $K_{i}$ produces a different infinite string. Also one of the conditions $s=R(K_{i})$ must be true. This must be true, as in the limit as $n\to \infty$, there is always at least one program that produces $T_{n}(s)$.
As the $K_{i}$ are chosen so that $T_{n}(R(K_{i}))=x$, then,
The a priori probability of the string being produced from the program, given no information about the string, is based on the size of the program,
giving,
Programs that are the same length as, or longer than, $x$ provide no predictive power. Separate them out, giving,
Then identify the two probabilities as,
But the prior probability that $x$ is a random set of bits is $2^{-n}$. So,
The probability that the source is random, or unpredictable is,
A model of how worlds are constructed is used in determining the probabilities of theories,
If $w$ is the bit string then the world is created such that $R(w)$ is true. An intelligent agent has some facts about the world, represented by the bit string $c$, which gives the condition,
The set of bit strings identical with any condition $x$ is $E(x)$.
A theory is a simpler condition that explains (or implies) $C$. The set of all such theories is called $T$,
The extended form of Bayes' theorem may be applied,
where,
To apply Bayes' theorem the following must hold: $A_{i}$ is a partition of the event space.
For $T(C)$ to be a partition, no bit string $n$ may belong to two theories. To prove this, assume they can and derive a contradiction,
Secondly, prove that $T$ includes all outcomes consistent with the condition. As all theories consistent with $C$ are included, $R(w)$ must be in this set.
So Bayes' theorem may be applied as specified, giving,
Using the implication and condition probability law, the definition of $T(C)$ implies,
The probability of each theory in $T$ is given by,
so,
Finally the probabilities of the events may be identified with the probabilities of the condition which the outcomes in the event satisfy,
giving
This is the probability of the theory $t$ after observing that the condition $C$ holds.
Theories that are less probable than the condition $C$ have no predictive power. Separate them out, giving,
The probability of the theories without predictive power on $C$ is the same as the probability of $C$. So,
So the probability
and the probability of no prediction for $C$, written as $\operatorname{random}(C)$,
The probability of a condition was given as,
Bit strings for theories that are more complex than the bit string given to the agent as input have no predictive power. Their probabilities are better included in the random case. To implement this, a new definition is given as $F$ in,
Using $F$, an improved version of the abductive probabilities is,
|
https://en.wikipedia.org/wiki/Inductive_probability
|
Authentication(fromGreek:αὐθεντικόςauthentikos, "real, genuine", from αὐθέντηςauthentes, "author") is the act of proving anassertion, such as theidentityof a computer system user. In contrast withidentification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity.[1][2]
Authentication is relevant to multiple fields. Inart,antiques, andanthropology, a common problem is verifying that a given artifact was produced by a certain person, or in a certain place (i.e. to assert that it is notcounterfeit), or in a given period of history (e.g. by determining the age viacarbon dating). Incomputer science, verifying a user's identity is often required to allow access to confidential data or systems.[3]It might involve validating personalidentity documents.
Authentication can be considered to be of three types:
Thefirsttype of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sportsmemorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated.
Thesecondtype of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. Anarchaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical andspectroscopicanalysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable toforgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son ofHan van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well.
Criminal and civil penalties forfraud,forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, andchequesincorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
Thethirdtype of authentication relies on documentation or other external affirmations. In criminal courts, therules of evidenceoften require establishing thechain of custodyof evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery andperjuryand are also vulnerable to being separated from the artifact and lost.
Consumer goodssuch as pharmaceuticals,[4]perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of atrademarkon the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.[5]
Counterfeit products are often offered to consumers as being authentic.Counterfeit consumer goods, such as electronics, music, apparel, andcounterfeit medications, have been sold as being legitimate. Efforts to control thesupply chainand educate consumers help ensure that authentic products are sold and used. Evensecurity printingon packages, labels, and nameplates, however, is subject to counterfeiting.[6]
In their anti-counterfeiting technology guide,[7]theEUIPOObservatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.[8]
Products or their packaging can include a variableQR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies.[9]To increase the security level, the QR Code can be combined with adigital watermarkorcopy detection patternthat are robust to copy attempts and can be authenticated with a smartphone.
Asecure key storage devicecan be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.[2]
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products.[10][11]Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and usesecurity printingto help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs,RFIDtags, orelectronic article surveillance[12]tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is – Does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process.[13] It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent that the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.[3]
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
The opposite problem is the detection ofplagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
The process of authentication is distinct from that ofauthorization. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.[14]Similarly, the establishment of the authorization can occur long before theauthorizationdecision occurs.
A user can be given access to secure systems based on user credentials that imply authenticity.[15]A network administrator can give a user apassword, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.
Most secure internet communication relies on centralized authority-based trust relationships, such as those used inHTTPS, where publiccertificate authorities(CAs) vouch for the authenticity of websites. This same centralized trust model underpins protocols like OIDC (OpenID Connect) where identity providers (e.g.,Google) authenticate users on behalf of relying applications. In contrast, decentralized peer-based trust, also known as aweb of trust, is commonly used for personal services such as secure email or file sharing. In systems likePGP, trust is established when individuals personally verify and sign each other’s cryptographic keys, without relying on a central authority.
These systems use cryptographic protocols that, in theory, are not vulnerable to spoofing as long as the originator's private key remains uncompromised. Importantly, even if the key owner is unaware of a compromise, the cryptographic failure still invalidates trust. However, while these methods are currently considered secure, they are not provably unbreakable: future mathematical or computational advances (such as quantum computing or new algorithmic attacks) could expose vulnerabilities. If that happens, it could retroactively undermine trust in past communications or agreements. For example, a digitally signed contract might be challenged if the signature algorithm is later found to be insecure.[citation needed]
The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified.[16][17]The three factors (classes) and some of the elements of each factor are:
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.[21]
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.[21]
For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require amantrapscreening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements),[22]but this is still a two-factor authentication.
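A hypothetical sketch of checking two independent factors with only standard-library primitives: a password (knowledge factor, checked against a stored PBKDF2 hash) and a 6-digit code from a TOTP-style token (ownership factor). The function names and parameters are illustrative, not a production design.

import hashlib, hmac, struct, time

def verify_password(password, salt, stored_hash):
    # Knowledge factor: compare a PBKDF2 hash of the supplied password in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def totp(secret, t=None, step=30, digits=6):
    # Ownership factor: time-based one-time code derived from the token's shared secret.
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

def two_factor_login(password, code, salt, stored_hash, token_secret):
    # Both independent factors must check out.
    return verify_password(password, salt, stored_hash) and hmac.compare_digest(code, totp(token_secret))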
The United States government'sNational Information Assurance Glossarydefines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.[23]
The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor and must also be incapable of being stolen off the Internet. In the European, as well as in the US-American understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeding those with more rigorous requirements.[21][24]
TheFIDO Alliancehas been striving to establish technical specifications for strong authentication.[25]
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.[26][27]
Recent research has shown the possibility of using smartphone sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition.[28] These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.[29][27]
The term digital authentication, also known aselectronic authenticationor e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The AmericanNational Institute of Standards and Technology(NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
The authentication of information can pose special problems with electronic communication, such as vulnerability toman-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
|
https://en.wikipedia.org/wiki/Authentication
|
ZRTP(composed of Z andReal-time Transport Protocol) is a cryptographickey-agreement protocolto negotiate thekeysforencryptionbetween two end points in aVoice over IP(VoIP) phone telephony call based on theReal-time Transport Protocol. It usesDiffie–Hellman key exchangeand theSecure Real-time Transport Protocol(SRTP) for encryption. ZRTP was developed byPhil Zimmermann, with help fromBryce Wilcox-O'Hearn, Colin Plumb,Jon Callasand Alan Johnston and was submitted to theInternet Engineering Task Force(IETF) by Zimmermann, Callas and Johnston on March 5, 2006 and published on April 11, 2011 asRFC6189.
ZRTP ("Z" is a reference to its inventor, Zimmermann; "RTP" stands for Real-time Transport Protocol)[1]is described in theInternet Draftas a"key agreement protocol which performs Diffie–Hellman key exchange during call setup in-band in the Real-time Transport Protocol (RTP) media stream which has been established using some other signaling protocol such asSession Initiation Protocol(SIP). This generates a shared secret which is then used to generate keys and salt for a Secure RTP (SRTP) session."One of ZRTP's features is that it does not rely on SIP signaling for the key management, or on any servers at all. It supportsopportunistic encryptionby auto-sensing if the other VoIP client supports ZRTP.
This protocol does not require prior shared secrets or rely on aPublic key infrastructure(PKI) or on certification authorities, in fact ephemeral Diffie–Hellman keys are generated on each session establishment: this allows the complexity of creating and maintaining a trusted third-party to be bypassed.
These keys contribute to the generation of the session secret, from which the session key and parameters for SRTP sessions are derived, along with previously shared secrets (if any): this gives protection againstman-in-the-middle (MiTM) attacks, so long as the attacker was not present in the first session between the two endpoints.
ZRTP can be used with any signaling protocol, including SIP,H.323,Jingle, anddistributed hash tablesystems. ZRTP is independent of the signaling layer, because all its key negotiations occur via the RTP media stream.
ZRTP/S, a ZRTP protocol extension, can run on any kind of legacy telephony networks including GSM, UMTS, ISDN, PSTN,SATCOM,UHF/VHFradio, because it is a narrow-band bitstream-oriented protocol and performs all key negotiations inside the bitstream between two endpoints.
Alan Johnston named the protocol ZRTP because in its earliest Internet drafts it was based on adding header extensions to RTP packets, which made ZRTP a variant of RTP. In later drafts the packet format changed to make it syntactically distinguishable from RTP. In view of that change, ZRTP is now apseudo-acronym.
The Diffie–Hellman key exchange by itself does not provide protection against a man-in-the-middle attack. To ensure that the attacker is indeed not present in the first session (when no shared secrets exist), the Short Authentication String (SAS) method is used: the communicating parties verbally cross-check a shared value displayed at both endpoints. If the values do not match, a man-in-the-middle attack is indicated. A specific attack theorized against the ZRTP protocol involves creating a synthetic voice of both parties to read a bogus SAS, which is known as a "Rich Little attack", but this class of attack is not believed to be a serious risk to the protocol's security.[2] The SAS is used to authenticate the key exchange, which is essentially a cryptographic hash of the two Diffie–Hellman values. The SAS value is rendered to both ZRTP endpoints. To carry out authentication, this SAS value is read aloud to the communication partner over the voice connection. If the values on both ends do not match, a man-in-the-middle attack is indicated; if they do match, a man-in-the-middle attack is highly unlikely. The use of hash commitment in the DH exchange constrains the attacker to only one guess to generate the correct SAS in the attack, which means the SAS may be quite short. A 16-bit SAS, for example, provides the attacker only one chance out of 65536 of not being detected.
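A simplified, hypothetical illustration of the size of this check (real ZRTP uses its own key derivation, hash commitment, and SAS rendering; this only shows why 16 bits give a one-in-65536 chance of an attacker going undetected):

import hashlib

def short_auth_string(dh_shared_secret: bytes) -> str:
    # Hash the agreed Diffie-Hellman material and keep the first 16 bits.
    digest = hashlib.sha256(dh_shared_secret).digest()
    sas16 = int.from_bytes(digest[:2], "big")    # 16 bits -> 65536 possible values
    return f"{sas16:04X}"                        # e.g. "3F7A", read aloud by both parties

# Both endpoints compute this from the same shared secret; an attacker who
# substituted keys has one chance in 65536 of producing a matching string.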
ZRTP provides a second layer of authentication against a MitM attack, based on a form of key continuity. It does this by caching some hashed key information for use in the next call, to be mixed in with the next call's DH shared secret, giving it key continuity properties analogous toSSH. If the MitM is not present in the first call, he is locked out of subsequent calls. Thus, even if the SAS is never used, most MitM attacks are stopped because the MitM was not present in the first call.
ZRTP has been implemented as
Commercial implementations of ZRTP are available in RokaCom from RokaCom,[13]and PrivateWave Professional from PrivateWave[14]and more recently in Silent Phone from Silent Circle, a company founded by Zimmermann.[15]There is also Softphone from Acrobits.[16]Drayteksupport ZRTP in some of their VoIP hardware and software.[17][18]
A list of free SIP Providers with ZRTP support has been published.[11]
|
https://en.wikipedia.org/wiki/ZRTP
|
In the area ofmathematical logicandcomputer scienceknown astype theory, aunit typeis atypethat allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be anysingleton set. There is anisomorphismbetween any two such sets, so it is customary to talk abouttheunit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. theproductof no types.
The unit type is theterminal objectin thecategoryof types and typed functions. It should not be confused with thezeroorempty type, which allowsnovalues and is theinitial objectin this category. Similarly, theBooleanis the type withtwovalues.
The unit type is implemented in mostfunctional programminglanguages. Thevoid typethat is used in some imperative programming languages serves some of its functions, but because its carrier set is empty, it has some limitations (as detailed below).
Several computerprogramming languagesprovide a unit type to specify the result type of afunctionwith the sole purpose of causing aside effect, and the argument type of a function that does not require arguments.
InC,C++,C#,D, andPHP,voidis used to designate a function that does not return anything useful, or a function that accepts no arguments. The unit type in C is conceptually similar to an emptystruct, but a struct without members is not allowed in the C language specification (this is allowed in C++). Instead, 'void' is used in a manner that simulates some, but not all, of the properties of the unit type, as detailed below. Like most imperative languages, C allows functions that do not return a value; these are specified as having the void return type. Such functions are called procedures in other imperative languages likePascal, where a syntactic distinction, instead of type-system distinction, is made between functions and procedures.
The first notable difference between a true unit type and the void type is that the unit type may always be the type of the argument to a function, but the void type cannot be the type of an argument in C, despite the fact that it may appear as the sole argument in the list. This problem is best illustrated by the following program, which is a compile-time error in C:
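A minimal sketch of the kind of program meant, in C (the function names f and g are illustrative), is:

    void f(void) {}
    void g(void) {}

    int main(void)
    {
        /* compile-time error: g() has type void, so its result cannot be passed as an argument */
        f(g());
        return 0;
    }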
This issue does not arise in most programming practice in C, because since thevoidtype carries no information, it is useless to pass it anyway; but it may arise ingeneric programming, such as C++templates, wherevoidmust be treated differently from other types. In C++ however, empty classes are allowed, so it is possible to implement a real unit type; the above example becomes compilable as:
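A sketch of the compilable C++ counterpart, using an empty class as the unit type and a value named the_unit as in the remark that follows:

    struct unit_type {};                 // an empty class acts as a real unit type
    const unit_type the_unit{};          // a value of it (see the note on singleton-ness below)

    unit_type f(unit_type) { return the_unit; }
    unit_type g(unit_type) { return the_unit; }

    int main()
    {
        f(g(the_unit));                  // compiles: unit_type values can be passed and returned
        return 0;
    }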
(For brevity, we're not worried in the above example whetherthe_unitis really asingleton; seesingleton patternfor details on that issue.)
The second notable difference is that the void type is special and can never be stored in arecord type, i.e. in a struct or a class in C/C++. In contrast, the unit type can be stored in records in functional programming languages, i.e. it can appear as the type of a field; the above implementation of the unit type in C++ can also be stored. While this may seem a useless feature, it does allow one for instance to elegantly implement asetas amapto the unit type; in the absence of a unit type, one can still implement a set this way by storing some dummy value of another type for each key.
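A minimal sketch of that idea in C++, reusing the unit type above (the names are illustrative):

    #include <iostream>
    #include <map>
    #include <string>

    struct unit_type {};   // carries no information

    // A "set" of strings implemented as a map whose mapped type is the unit type.
    using string_set = std::map<std::string, unit_type>;

    int main() {
        string_set seen;
        seen.emplace("alice", unit_type{});   // only the key matters; the value is irrelevant
        seen.emplace("bob", unit_type{});
        std::cout << std::boolalpha
                  << (seen.count("alice") == 1) << '\n'   // true
                  << (seen.count("carol") == 1) << '\n';  // false
    }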
In Java Generics, type parameters must be reference types. The wrapper typeVoidis often used when a unit type parameter is needed. Although theVoidtype can never have any instances, it does have one value,null(like all other reference types), so it acts as a unit type. In practice, any other non-instantiable type, e.g.Math, can also be used for this purpose, since they also have exactly one value,null.
Statically typed languages give a type to every possible expression, so they must also associate a type with thenullexpression. A dedicated type is therefore defined fornull, and null is its only value.
For example in D, it's possible to declare functions that may only returnnull:
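A minimal sketch of such a declaration in D (the function name is illustrative) might be:

    // typeof(null) is a distinct type whose only possible value is null.
    typeof(null) returnNull()
    {
        return null;
    }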
nullis the only value thattypeof(null), a unit type, can have.
|
https://en.wikipedia.org/wiki/Unit_type
|
Incryptography, abrute-force attackconsists of an attacker submitting manypasswordsorpassphraseswith the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess thekey, which is typically created from the password using akey derivation function. This is known as anexhaustive key search. This approach does not rely on intellectual tactics; rather, it relies on trying a very large number of candidates until one succeeds.[citation needed]
A brute-force attack is acryptanalytic attackthat can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in aninformation-theoretically securemanner).[1]Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier.
When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as thedictionary attackare used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones: each additional character multiplies the number of candidates by the size of the character set.[2]
Brute-force attacks can be made less effective byobfuscatingthe data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.[3]
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack,[4]with 'anti-hammering' for countermeasures.[5]
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.[6]
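As a rough illustration of that exponential growth, the following C++ sketch computes the size of the search space and the average time to find a password at an assumed guessing rate; the character-set size and throughput figure are arbitrary assumptions, not measurements:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double charset = 62.0;                // assumed: a-z, A-Z, 0-9
        const double guesses_per_second = 1e10;     // assumed attacker throughput
        const double seconds_per_year = 365.0 * 24 * 3600;
        for (int length = 6; length <= 12; length += 2) {
            const double keyspace = std::pow(charset, length);
            const double avg_years = keyspace / 2.0 / guesses_per_second / seconds_per_year;
            std::printf("length %2d: %.3g candidates, ~%.3g years on average\n",
                        length, keyspace, avg_years);
        }
    }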
The resources required for a brute-force attack growexponentiallywith increasingkey size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bitsymmetric keys(e.g.Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. TheLandauer limitimplied by the laws of physics sets a lower limit on the energy required to perform a computation ofkT·ln 2per bit erased in a computation, whereTis the temperature of the computing device inkelvins,kis theBoltzmann constant, and thenatural logarithmof 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle.[7]Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring the actual computation to check each one) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ≈10^18 joules, which is equivalent to consuming 30gigawattsof power for one year. This is equal to 30×10^9 W × 365×24×3600 s = 9.46×10^17 J or 262.7 TWh (about 0.1% of theyearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0 (seeBremermann's limit).[citation needed]
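The arithmetic behind these figures can be reproduced directly. The sketch below multiplies the Landauer energy per bit operation at roughly room temperature by the 2^128 values to be cycled through; it is an order-of-magnitude check, not a statement about any real machine:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double k = 1.380649e-23;                    // Boltzmann constant, J/K
        const double T = 300.0;                           // assumed room temperature, K
        const double per_bit = k * T * std::log(2.0);     // Landauer limit per bit operation
        const double flips = std::pow(2.0, 128.0);        // values of a 128-bit key
        const double joules = per_bit * flips;            // ~1e18 J
        const double watts_for_a_year = joules / (365.0 * 24 * 3600);   // ~3e10 W
        std::printf("energy ~%.3g J, i.e. ~%.3g W sustained for one year\n",
                    joules, watts_for_a_year);
    }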
However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generateentropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (seereversible computing), though no such computers are known to have been constructed.[citation needed]
As commercial successors of governmentalASICsolutions have become available, also known ascustom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is moderngraphics processing unit(GPU) technology,[8][page needed]the other is thefield-programmable gate array(FPGA) technology. GPUs benefit from their wide availability and price-performance advantage, FPGAs from theirenergy efficiencyper cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks: GPUs offer some hundreds of processing units and FPGAs some thousands, making them much better suited to cracking passwords than conventional processors. For instance, in 2022, eightNvidia RTX 4090GPUs were linked together to test password strength using the softwareHashcat, with results showing that 200 billion eight-characterNTLMpassword combinations could be cycled through in 48 minutes.[9][10]
Various publications in the field of cryptographic analysis have demonstrated the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA cluster computer consumes the same energy as a single PC (600 W) but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions, from a single FPGAPCI Expresscard up to dedicated FPGA computers.[citation needed]WPAandWPA2encryption have successfully been brute-force attacked, reducing the workload by a factor of 50 in comparison to conventional CPUs[11][12]and by a factor of several hundred in the case of FPGAs.
Advanced Encryption Standard(AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute-force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 has a speed of 100petaFLOPS, which could theoretically check 100 trillion (10^14) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10^55 years to exhaust the 256-bit key space.[13]
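The 3.67×10^55-year figure follows from the same kind of back-of-the-envelope calculation; the sketch below uses the stated assumptions of 100 petaFLOPS and 1000 operations per key check:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double keyspace = std::pow(2.0, 256.0);            // AES-256 key space
        const double flops = 100e15;                             // 100 petaFLOPS
        const double ops_per_check = 1000.0;                     // assumed cost per key test
        const double keys_per_second = flops / ops_per_check;    // 1e14 keys/s
        const double years = keyspace / keys_per_second / (365.0 * 24 * 3600);
        std::printf("~%.3g years to exhaust the 256-bit key space\n", years);   // ~3.7e55
    }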
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effectiverandom number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute-force have nevertheless beencrackedbecause thekey spaceto search through was found to be much smaller than originally thought, because of a lack of entropy in theirpseudorandom number generators. These includeNetscape's implementation ofSecure Sockets Layer(SSL) (cracked byIan GoldbergandDavid Wagnerin 1995) and aDebian/Ubuntuedition ofOpenSSLdiscovered in 2008 to be flawed.[14][15]A similar lack of implemented entropy led to the breaking ofEnigma'scode.[16][17]
Credential recycling is thehackingpractice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling ispass the hash, whereunsaltedhashed credentials are stolen and re-used without first being brute-forced.[18]
Certain types of encryption, by their mathematical properties, cannot be defeated by brute-force. An example of this isone-time padcryptography, where everycleartextbit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by theVenona project, generally relies not on pure cryptography, but upon mistakes in its implementation, such as the key pads not being truly random, intercepted keypads, or operators making mistakes.[19]
In case of anofflineattack where the attacker has gained access to the encrypted material, one can try key combinations without the risk of discovery or interference. In case ofonlineattacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring aCAPTCHAanswer or employingmulti-factor authentication), and/or locking accounts out after unsuccessful login attempts.[20][page needed]Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site.[21]Additionally, the MITRE D3FEND framework provides structured recommendations for defending against brute-force attacks by implementing strategies such as network traffic filtering, deploying decoy credentials, and invalidating authentication caches.[22]
In a reverse brute-force attack (also called password spraying), a single (usually common) password is tested against multiple usernames or encrypted files.[23]The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
|
https://en.wikipedia.org/wiki/Brute_force_attack
|
The followingoutlineis provided as an overview of and topical guide to software:
Software– collection ofcomputer programsand relateddatathat provides the information for the functioning of acomputer. It is held in various forms ofmemoryof the computer. It comprises procedures, algorithms, and documentation concerned with the operation of a data processing system. The term was coined to contrast to the term hardware, meaning physical devices. In contrast to hardware, software "cannot be touched".[1]Software is also sometimes used in a more narrow sense, meaningapplication softwareonly. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.[2]
Software development entails the establishment of asystems development life cycleof a software product. It encompasses a planned and structured process from the conception of the desired software to its final manifestation,[4]which constitutescomputer programming, the process of writing and maintaining thesource code. Software development includes research, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[5]
Software distribution–
|
https://en.wikipedia.org/wiki/Outline_of_software
|
Thegrowth function, also called theshatter coefficientor theshattering number, measures the richness of aset familyor class of functions. It is especially used in the context ofstatistical learning theory, where it is used to study properties of statistical learning methods.
The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties.[1]It is a basic concept inmachine learning.[2][3]
LetH{\displaystyle H}be aset family(a set of sets) andC{\displaystyle C}a set. Theirintersectionis defined as the following set-family: H ∩ C := { h ∩ C : h ∈ H }.
Theintersection-size(also called theindex) ofH{\displaystyle H}with respect toC{\displaystyle C}is|H∩C|{\displaystyle |H\cap C|}. If a setCm{\displaystyle C_{m}}hasm{\displaystyle m}elements then the index is at most2m{\displaystyle 2^{m}}. If the index is exactly 2^m, then the setC{\displaystyle C}is said to beshatteredbyH{\displaystyle H}, becauseH∩C{\displaystyle H\cap C}contains all the subsets ofC{\displaystyle C}, i.e.: H ∩ C = 2^C.
The growth function measures the size ofH∩C{\displaystyle H\cap C}as a function of|C|{\displaystyle |C|}. Formally: Growth(H, m) := max { |H ∩ C| : |C| = m }.
Equivalently, letH{\displaystyle H}be a hypothesis-class (a set of binary functions) andC{\displaystyle C}a set withm{\displaystyle m}elements. TherestrictionofH{\displaystyle H}toC{\displaystyle C}is the set of binary functions onC{\displaystyle C}that can be derived fromH{\displaystyle H}:[3]: 45
The growth function measures the size ofHC{\displaystyle H_{C}}as a function of|C|{\displaystyle |C|}:[3]: 49
1.The domain is the real lineR{\displaystyle \mathbb {R} }.
The set-familyH{\displaystyle H}contains all thehalf-lines(rays) from a given number to positive infinity, i.e., all sets of the form{x>x0∣x∈R}{\displaystyle \{x>x_{0}\mid x\in \mathbb {R} \}}for somex0∈R{\displaystyle x_{0}\in \mathbb {R} }.
For any setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}containsm+1{\displaystyle m+1}sets: the empty set, the set containing the largest element ofC{\displaystyle C}, the set containing the two largest elements ofC{\displaystyle C}, and so on. Therefore:Growth(H,m)=m+1{\displaystyle \operatorname {Growth} (H,m)=m+1}.[1]: Ex.1The same is true whetherH{\displaystyle H}contains open half-lines, closed half-lines, or both.
2.The domain is the segment[0,1]{\displaystyle [0,1]}.
The set-familyH{\displaystyle H}contains all the open sets.
For any finite setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}contains all possible subsets ofC{\displaystyle C}. There are2m{\displaystyle 2^{m}}such subsets, soGrowth(H,m)=2m{\displaystyle \operatorname {Growth} (H,m)=2^{m}}.[1]: Ex.2
3.The domain is the Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}.
The set-familyH{\displaystyle H}contains all thehalf-spacesof the form:x⋅ϕ≥1{\displaystyle x\cdot \phi \geq 1}, whereϕ{\displaystyle \phi }is a fixed vector.
ThenGrowth(H,m)=Comp(n,m){\displaystyle \operatorname {Growth} (H,m)=\operatorname {Comp} (n,m)},
where Comp is thenumber of components in a partitioning of an n-dimensional space by m hyperplanes.[1]: Ex.3
4.The domain is the real lineR{\displaystyle \mathbb {R} }. The set-familyH{\displaystyle H}contains all the real intervals, i.e., all sets of the form{x∈[x0,x1]|x∈R}{\displaystyle \{x\in [x_{0},x_{1}]|x\in \mathbb {R} \}}for somex0,x1∈R{\displaystyle x_{0},x_{1}\in \mathbb {R} }. For any setC{\displaystyle C}ofm{\displaystyle m}real numbers, the intersectionH∩C{\displaystyle H\cap C}contains all runs of between 0 andm{\displaystyle m}consecutive elements ofC{\displaystyle C}. The number of such runs is(m+12)+1{\displaystyle {m+1 \choose 2}+1}, soGrowth(H,m)=(m+12)+1{\displaystyle \operatorname {Growth} (H,m)={m+1 \choose 2}+1}.
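The count in example 4 can be verified by brute force for small m; the following C++ sketch (illustrative only) enumerates every run of consecutive points, adds the empty intersection, and compares the total with the closed-form value (m+1 choose 2) + 1:

    #include <cstdio>

    int main() {
        for (int m = 1; m <= 8; ++m) {
            int runs = 0;
            for (int i = 0; i < m; ++i)        // index of the first element of a run
                for (int j = i; j < m; ++j)    // index of the last element of a run
                    ++runs;                    // each (i, j) pair is one nonempty run
            const int growth = runs + 1;               // plus the empty intersection
            const int formula = (m + 1) * m / 2 + 1;   // binom(m+1, 2) + 1
            std::printf("m=%d: enumerated %d, formula %d\n", m, growth, formula);
        }
    }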
The main property that makes the growth function interesting is that it can be either polynomial or exponential - nothing in-between.
The following is a property of the intersection-size:[1]: Lem.1
This implies the following property of the Growth function.[1]: Th.1For every familyH{\displaystyle H}there are two cases: either Growth(H, m) = 2^m for every m (the exponential case), or there is a largest integer d with Growth(H, d) = 2^d, in which case Growth(H, m) ≤ m^d + 1 for every m (the polynomial case).
For any finiteH{\displaystyle H}: Growth(H, m) ≤ |H|,
since for everyC{\displaystyle C}, the number of elements inH∩C{\displaystyle H\cap C}is at most|H|{\displaystyle |H|}. Therefore, the growth function is mainly interesting whenH{\displaystyle H}is infinite.
For any nonemptyH{\displaystyle H}: Growth(H, m) ≤ 2^m.
That is, the growth function has an exponential upper bound.
We say that a set-familyH{\displaystyle H}shattersa setC{\displaystyle C}if their intersection contains all possible subsets ofC{\displaystyle C}, i.e.H∩C=2C{\displaystyle H\cap C=2^{C}}.
IfH{\displaystyle H}shattersC{\displaystyle C}of sizem{\displaystyle m}, thenGrowth(H,C)=2m{\displaystyle \operatorname {Growth} (H,C)=2^{m}}, which is the upper bound.
Define the Cartesian intersection of two set-families as:
Then:[2]: 57
For every two set-families:[2]: 58
TheVC dimensionofH{\displaystyle H}is defined according to these two cases: in the polynomial case, VCDim(H) is the largest integer d for which Growth(H, d) = 2^d; in the exponential case, it is defined as VCDim(H) = ∞.
SoVCDim(H)≥d{\displaystyle \operatorname {VCDim} (H)\geq d}if-and-only-ifGrowth(H,d)=2d{\displaystyle \operatorname {Growth} (H,d)=2^{d}}.
The growth function can be regarded as a refinement of the concept of VC dimension. The VC dimension only tells us whetherGrowth(H,d){\displaystyle \operatorname {Growth} (H,d)}is equal to or smaller than2d{\displaystyle 2^{d}}, while the growth function tells us exactly howGrowth(H,m){\displaystyle \operatorname {Growth} (H,m)}changes as a function ofm{\displaystyle m}.
Another connection between the growth function and the VC dimension is given by theSauer–Shelah lemma:[3]: 49if d is the VC dimension ofH{\displaystyle H}, then for every m: Growth(H, m) ≤ Σ_{i=0}^{d} C(m, i).
In particular, for m > d this gives Growth(H, m) ≤ (em/d)^d, so the growth function is at most polynomial in m, of degree d.
This upper bound is tight, i.e., for allm>d{\displaystyle m>d}there existsH{\displaystyle H}with VC dimensiond{\displaystyle d}such that:[2]: 56Growth(H, m) = Σ_{i=0}^{d} C(m, i).
While the growth-function is related to themaximumintersection-size,
theentropyis related to theaverageintersection size:[1]: 272–273
The intersection-size has the following property. For every set-familyH{\displaystyle H}:
Hence:
Moreover, the sequenceEntropy(H,m)/m{\displaystyle \operatorname {Entropy} (H,m)/m}converges to a constantc∈[0,1]{\displaystyle c\in [0,1]}whenm→∞{\displaystyle m\to \infty }.
Moreover, the random-variablelog2|H∩Cm|/m{\displaystyle \log _{2}{|H\cap C_{m}|/m}}is concentrated nearc{\displaystyle c}.
LetΩ{\displaystyle \Omega }be a set on which aprobability measurePr{\displaystyle \Pr }is defined.
LetH{\displaystyle H}be family of subsets ofΩ{\displaystyle \Omega }(= a family of events).
Suppose we choose a setCm{\displaystyle C_{m}}that containsm{\displaystyle m}elements ofΩ{\displaystyle \Omega },
where each element is chosen at random according to the probability measureP{\displaystyle P}, independently of the others (i.e., with replacements). For each eventh∈H{\displaystyle h\in H}, we compare the following two quantities:
We are interested in the difference,D(h,Cm):=||h∩Cm|/m−Pr[h]|{\displaystyle D(h,C_{m}):={\big |}|h\cap C_{m}|/m-\Pr[h]{\big |}}. This difference satisfies the following upper bound:
which is equivalent to:[1]: Th.2
In words: the probability that forallevents inH{\displaystyle H}, the relative-frequency is near the probability, is lower-bounded by an expression that depends on the growth-function ofH{\displaystyle H}.
A corollary of this is that, if the growth function is polynomial inm{\displaystyle m}(i.e., there exists somen{\displaystyle n}such thatGrowth(H,m)≤mn+1{\displaystyle \operatorname {Growth} (H,m)\leq m^{n}+1}), then the above probability approaches 1 asm→∞{\displaystyle m\to \infty }. I.e, the familyH{\displaystyle H}enjoysuniform convergence in probability.
|
https://en.wikipedia.org/wiki/Growth_function
|
Inmathematics, agerbe(/dʒɜːrb/;French:[ʒɛʁb]) is a construct inhomological algebraandtopology. Gerbes were introduced byJean Giraud(Giraud 1971) following ideas ofAlexandre Grothendieckas a tool for non-commutativecohomologyin degree 2. They can be seen as an analogue offibre bundleswhere the fibre is theclassifying stackof a group. Gerbes provide a convenient, if highly abstract, language for dealing with many types ofdeformationquestions especially in modernalgebraic geometry. In addition, special cases of gerbes have been used more recently indifferential topologyanddifferential geometryto give alternative descriptions to certaincohomology classesand additional structures attached to them.
"Gerbe" is a French (and archaic English) word that literally meanswheatsheaf.
A gerbe on atopological spaceS{\displaystyle S}[1]: 318is astackX{\displaystyle {\mathcal {X}}}ofgroupoidsoverS{\displaystyle S}that islocally non-empty(each pointp∈S{\displaystyle p\in S}has an open neighbourhoodUp{\displaystyle U_{p}}over which thesection categoryX(Up){\displaystyle {\mathcal {X}}(U_{p})}of the gerbe is not empty) andtransitive(for any two objectsa{\displaystyle a}andb{\displaystyle b}ofX(U){\displaystyle {\mathcal {X}}(U)}for any open setU{\displaystyle U}, there is an open coveringU={Ui}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\}_{i\in I}}ofU{\displaystyle U}such that the restrictions ofa{\displaystyle a}andb{\displaystyle b}to eachUi{\displaystyle U_{i}}are connected by at least one morphism).
A canonical example is the gerbeBH{\displaystyle BH}ofprincipal bundleswith a fixedstructure groupH{\displaystyle H}: the section category over an open setU{\displaystyle U}is the category of principalH{\displaystyle H}-bundles onU{\displaystyle U}with isomorphism as morphisms (thus the category is a groupoid). As principal bundles glue together (satisfy the descent condition), these groupoids form a stack. The trivial bundleX×H→X{\displaystyle X\times H\to X}shows that the local non-emptiness condition is satisfied, and finally as principal bundles are locally trivial, they become isomorphic when restricted to sufficiently small open sets; thus the transitivity condition is satisfied as well.
The most general definition of gerbes are defined over asite. Given a siteC{\displaystyle {\mathcal {C}}}aC{\displaystyle {\mathcal {C}}}-gerbeG{\displaystyle G}[2][3]: 129is a category fibered in groupoidsG→C{\displaystyle G\to {\mathcal {C}}}such that
Note that for a siteC{\displaystyle {\mathcal {C}}}with a final objecte{\displaystyle e}, a category fibered in groupoidsG→C{\displaystyle G\to {\mathcal {C}}}admits a local section, meaning it satisfies the first axiom, ifOb(Ge)≠∅{\displaystyle {\text{Ob}}(G_{e})\neq \varnothing }.
One of the main motivations for considering gerbes on a site is to consider the following naive question: if the Cech cohomology groupH1(U,G){\displaystyle H^{1}({\mathcal {U}},G)}for a suitable coveringU={Ui}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\}_{i\in I}}of a spaceX{\displaystyle X}gives the isomorphism classes of principalG{\displaystyle G}-bundles overX{\displaystyle X}, what does the iterated cohomology functorH1(−,H1(−,G)){\displaystyle H^{1}(-,H^{1}(-,G))}represent? Meaning, we are gluing together the groupsH1(Ui,G){\displaystyle H^{1}(U_{i},G)}via some one cocycle. Gerbes are a technical response for this question: they give geometric representations of elements in the higher cohomology groupH2(U,G){\displaystyle H^{2}({\mathcal {U}},G)}. It is expected this intuition should hold forhigher gerbes.
One of the main theorems concerning gerbes is their cohomological classification whenever they have automorphism groups given by a fixed sheaf of abelian groupsL_{\displaystyle {\underline {L}}},[5][2]called a band. For a gerbeX{\displaystyle {\mathcal {X}}}on a siteC{\displaystyle {\mathcal {C}}}, an objectU∈Ob(C){\displaystyle U\in {\text{Ob}}({\mathcal {C}})}, and an objectx∈Ob(X(U)){\displaystyle x\in {\text{Ob}}({\mathcal {X}}(U))}, the automorphism group of a gerbe is defined as the automorphism groupL=Aut_X(U)(x){\displaystyle L={\underline {\text{Aut}}}_{{\mathcal {X}}(U)}(x)}. Notice this is well defined whenever the automorphism group is always the same. Given a coveringU={Ui→X}i∈I{\displaystyle {\mathcal {U}}=\{U_{i}\to X\}_{i\in I}}, there is an associated class
c(L_)∈H3(X,L_){\displaystyle c({\underline {L}})\in H^{3}(X,{\underline {L}})}
representing theisomorphism classof the gerbeX{\displaystyle {\mathcal {X}}}banded byL{\displaystyle L}.
For example, in topology, many examples of gerbes can be constructed by considering gerbes banded by the groupU(1){\displaystyle U(1)}. As the classifying spaceB(U(1))=K(Z,2){\displaystyle B(U(1))=K(\mathbb {Z} ,2)}is the secondEilenberg–Maclanespace for the integers, a bundle gerbe banded byU(1){\displaystyle U(1)}on a topological spaceX{\displaystyle X}is constructed from a homotopy class of maps in
[X,B2(U(1))]=[X,K(Z,3)]{\displaystyle [X,B^{2}(U(1))]=[X,K(\mathbb {Z} ,3)]},
which is exactly the thirdsingular homologygroupH3(X,Z){\displaystyle H^{3}(X,\mathbb {Z} )}. It has been found[6]that all gerbes representing torsion cohomology classes inH3(X,Z){\displaystyle H^{3}(X,\mathbb {Z} )}are represented by a bundle of finite dimensional algebrasEnd(V){\displaystyle {\text{End}}(V)}for a fixed complex vector spaceV{\displaystyle V}. In addition, the non-torsion classes are represented as infinite-dimensional principal bundlesPU(H){\displaystyle PU({\mathcal {H}})}of the projective group of unitary operators on a fixed infinite dimensionalseparableHilbert spaceH{\displaystyle {\mathcal {H}}}. Note this is well defined because all separable Hilbert spaces are isomorphic to the space of square-summable sequencesℓ2{\displaystyle \ell ^{2}}.
The homotopy-theoretic interpretation of gerbes comes from looking at thehomotopy fiber square
X→∗↓↓S→fB2U(1){\displaystyle {\begin{matrix}{\mathcal {X}}&\to &*\\\downarrow &&\downarrow \\S&\xrightarrow {f} &B^{2}U(1)\end{matrix}}}
analogous to how a line bundle comes from the homotopy fiber square
L→∗↓↓S→fBU(1){\displaystyle {\begin{matrix}L&\to &*\\\downarrow &&\downarrow \\S&\xrightarrow {f} &BU(1)\end{matrix}}}
whereBU(1)≃K(Z,2){\displaystyle BU(1)\simeq K(\mathbb {Z} ,2)}, givingH2(S,Z){\displaystyle H^{2}(S,\mathbb {Z} )}as the group of isomorphism classes of line bundles onS{\displaystyle S}.
There are natural examples of Gerbes that arise from studying the algebra of compactly supported complex valued functions on a paracompact spaceX{\displaystyle X}[7]pg 3. Given a coverU={Ui}{\displaystyle {\mathcal {U}}=\{U_{i}\}}ofX{\displaystyle X}there is the Cech groupoid defined as
G={∐i,jUij⇉∐Ui}{\displaystyle {\mathcal {G}}=\left\{\coprod _{i,j}U_{ij}\rightrightarrows \coprod U_{i}\right\}}
with source and target maps given by the inclusions
s:Uij↪Ujt:Uij↪Ui{\displaystyle {\begin{aligned}s:U_{ij}\hookrightarrow U_{j}\\t:U_{ij}\hookrightarrow U_{i}\end{aligned}}}
and the space of composable arrows is just
∐i,j,kUijk{\displaystyle \coprod _{i,j,k}U_{ijk}}
Then a degree 2 cohomology classσ∈H2(X;U(1)){\displaystyle \sigma \in H^{2}(X;U(1))}is just a map
σ:∐Uijk→U(1){\displaystyle \sigma :\coprod U_{ijk}\to U(1)}
We can then form a non-commutativeC*-algebraCc(G(σ)){\displaystyle C_{c}({\mathcal {G}}(\sigma ))}, which is associated to the set of compact supported complex valued functions of the space
G1=∐i,jUij{\displaystyle {\mathcal {G}}_{1}=\coprod _{i,j}U_{ij}}
It has a non-commutative product given by
a∗b(x,i,k):=∑ja(x,i,j)b(x,j,k)σ(x,i,j,k){\displaystyle a*b(x,i,k):=\sum _{j}a(x,i,j)b(x,j,k)\sigma (x,i,j,k)}
where the cohomology classσ{\displaystyle \sigma }twists the multiplication of the standardC∗{\displaystyle C^{*}}-algebra product.
LetM{\displaystyle M}be avarietyover analgebraically closed fieldk{\displaystyle k},G{\displaystyle G}analgebraic group, for exampleGm{\displaystyle \mathbb {G} _{m}}. Recall that aG-torsoroverM{\displaystyle M}is analgebraic spaceP{\displaystyle P}with an action ofG{\displaystyle G}and a mapπ:P→M{\displaystyle \pi :P\to M}, such that locally onM{\displaystyle M}(inétale topologyorfppf topology)π{\displaystyle \pi }is a direct productπ|U:G×U→U{\displaystyle \pi |_{U}:G\times U\to U}. AG-gerbe overMmay be defined in a similar way. It is anArtin stackM{\displaystyle {\mathcal {M}}}with a mapπ:M→M{\displaystyle \pi \colon {\mathcal {M}}\to M}, such that locally onM(in étale or fppf topology)π{\displaystyle \pi }is a direct productπ|U:BG×U→U{\displaystyle \pi |_{U}\colon \mathrm {B} G\times U\to U}.[8]HereBG{\displaystyle BG}denotes theclassifying stackofG{\displaystyle G}, i.e. a quotient[∗/G]{\displaystyle [*/G]}of a point by a trivialG{\displaystyle G}-action. There is no need to impose the compatibility with the group structure in that case since it is covered by the definition of a stack. The underlyingtopological spacesofM{\displaystyle {\mathcal {M}}}andM{\displaystyle M}are the same, but inM{\displaystyle {\mathcal {M}}}each point is equipped with a stabilizer group isomorphic toG{\displaystyle G}.
Every two-term complex of coherent sheaves
E∙=[E−1→dE0]{\displaystyle {\mathcal {E}}^{\bullet }=[{\mathcal {E}}^{-1}\xrightarrow {d} {\mathcal {E}}^{0}]}
on a schemeX∈Sch{\displaystyle X\in {\text{Sch}}}has a canonical sheaf of groupoids associated to it, where on an open subsetU⊆X{\displaystyle U\subseteq X}there is a two-term complex ofX(U){\displaystyle X(U)}-modules
E−1(U)→dE0(U){\displaystyle {\mathcal {E}}^{-1}(U)\xrightarrow {d} {\mathcal {E}}^{0}(U)}
giving a groupoid. It has objects given by elementsx∈E0(U){\displaystyle x\in {\mathcal {E}}^{0}(U)}and a morphismx→x′{\displaystyle x\to x'}is given by an elementy∈E−1(U){\displaystyle y\in {\mathcal {E}}^{-1}(U)}such that
dy+x=x′{\displaystyle dy+x=x'}
In order for this stack to be a gerbe, the cohomology sheafH0(E){\displaystyle {\mathcal {H}}^{0}({\mathcal {E}})}must always have a section. This hypothesis implies the category constructed above always has objects. Note this can be applied to the situation ofcomodules over Hopf-algebroidsto construct algebraic models of gerbes over affine or projective stacks (projectivity if a gradedHopf-algebroidis used). In addition, two-term spectra from the stabilization of thederived categoryof comodules of Hopf-algebroids(A,Γ){\displaystyle (A,\Gamma )}withΓ{\displaystyle \Gamma }flat overA{\displaystyle A}give additional models of gerbes that arenon-strict.
Consider a smoothprojectivecurveC{\displaystyle C}overk{\displaystyle k}of genusg>1{\displaystyle g>1}. LetMr,ds{\displaystyle {\mathcal {M}}_{r,d}^{s}}be themoduli stackofstable vector bundlesonC{\displaystyle C}of rankr{\displaystyle r}and degreed{\displaystyle d}. It has acoarse moduli spaceMr,ds{\displaystyle M_{r,d}^{s}}, which is aquasiprojective variety. These two moduli problems parametrize the same objects, but the stacky version remembersautomorphismsof vector bundles. For any stable vector bundleE{\displaystyle E}the automorphism groupAut(E){\displaystyle Aut(E)}consists only of scalar multiplications, so each point in a moduli stack has a stabilizer isomorphic toGm{\displaystyle \mathbb {G} _{m}}. It turns out that the mapMr,ds→Mr,ds{\displaystyle {\mathcal {M}}_{r,d}^{s}\to M_{r,d}^{s}}is indeed aGm{\displaystyle \mathbb {G} _{m}}-gerbe in the sense above.[9]It is a trivial gerbe if and only ifr{\displaystyle r}andd{\displaystyle d}arecoprime.
Another class of gerbes can be found using the construction of root stacks. Informally, ther{\displaystyle r}-th root stack of a line bundleL→S{\displaystyle L\to S}over aschemeis a space representing ther{\displaystyle r}-th root ofL{\displaystyle L}and is denoted
L/Sr.{\displaystyle {\sqrt[{r}]{L/S}}.\,}[10]pg 52
Ther{\displaystyle r}-th root stack ofL{\displaystyle L}has the property
⨂rL/Sr≅L{\displaystyle \bigotimes ^{r}{\sqrt[{r}]{L/S}}\cong L}
as gerbes. It is constructed as the stack
L/Sr:(Sch/S)op→Grpd{\displaystyle {\sqrt[{r}]{L/S}}:(\operatorname {Sch} /S)^{op}\to \operatorname {Grpd} }
sending anS{\displaystyle S}-schemeT→S{\displaystyle T\to S}to the category whose objects are line bundles of the form
{(M→T,αM):αM:M⊗r→∼L×ST}{\displaystyle \left\{(M\to T,\alpha _{M}):\alpha _{M}:M^{\otimes r}\xrightarrow {\sim } L\times _{S}T\right\}}
and morphisms are commutative diagrams compatible with the isomorphismsαM{\displaystyle \alpha _{M}}. This gerbe is banded by thealgebraic groupof roots of unityμr{\displaystyle \mu _{r}}, where on a coverT→S{\displaystyle T\to S}it acts on a point(M→T,αM){\displaystyle (M\to T,\alpha _{M})}by cyclically permuting the factors ofM{\displaystyle M}inM⊗r{\displaystyle M^{\otimes r}}. Geometrically, these stacks are formed as the fiber product of stacks
X×BGmBGm→BGm↓↓X→BGm{\displaystyle {\begin{matrix}X\times _{B\mathbb {G} _{m}}B\mathbb {G} _{m}&\to &B\mathbb {G} _{m}\\\downarrow &&\downarrow \\X&\to &B\mathbb {G} _{m}\end{matrix}}}
where the vertical map ofBGm→BGm{\displaystyle B\mathbb {G} _{m}\to B\mathbb {G} _{m}}comes from theKummer sequence
1→μr→Gm→(⋅)rGm→1{\displaystyle 1\xrightarrow {} \mu _{r}\xrightarrow {} \mathbb {G} _{m}\xrightarrow {(\cdot )^{r}} \mathbb {G} _{m}\xrightarrow {} 1}
This is becauseBGm{\displaystyle B\mathbb {G} _{m}}is the moduli space of line bundles, so the line bundleL→S{\displaystyle L\to S}corresponds to an object of the categoryBGm(S){\displaystyle B\mathbb {G} _{m}(S)}(considered as a point of the moduli space).
There is another related construction of root stacks with sections. Given the data above, lets:S→L{\displaystyle s:S\to L}be a section. Then ther{\displaystyle r}-th root stack of the pair(L→S,s){\displaystyle (L\to S,s)}is defined as the lax 2-functor[10][11]
(L,s)/Sr:(Sch/S)op→Grpd{\displaystyle {\sqrt[{r}]{(L,s)/S}}:(\operatorname {Sch} /S)^{op}\to \operatorname {Grpd} }
sending anS{\displaystyle S}-schemeT→S{\displaystyle T\to S}to the category whose objects are line bundles of the form
{(M→T,αM,t):αM:M⊗r→∼L×STt∈Γ(T,M)αM(t⊗r)=s}{\displaystyle \left\{(M\to T,\alpha _{M},t):{\begin{aligned}&\alpha _{M}:M^{\otimes r}\xrightarrow {\sim } L\times _{S}T\\&t\in \Gamma (T,M)\\&\alpha _{M}(t^{\otimes r})=s\end{aligned}}\right\}}
and morphisms are given similarly. These stacks can be constructed very explicitly, and are well understood for affine schemes. In fact, these form the affine models for root stacks with sections.[11]: 4Locally, we may assumeS=Spec(A){\displaystyle S={\text{Spec}}(A)}and the line bundleL{\displaystyle L}is trivial, hence any sections{\displaystyle s}is equivalent to taking an elements∈A{\displaystyle s\in A}. Then, the stack is given by the stack quotient
(L,s)/Sr=[Spec(B)/μr]{\displaystyle {\sqrt[{r}]{(L,s)/S}}=[{\text{Spec}}(B)/\mu _{r}]}[11]: 9
with
B=A[x]xr−s{\displaystyle B={\frac {A[x]}{x^{r}-s}}}
Ifs=0{\displaystyle s=0}then this gives an infinitesimal extension of[Spec(A)/μr]{\displaystyle [{\text{Spec}}(A)/\mu _{r}]}.
These and more general kinds of gerbes arise in several contexts as both geometric spaces and as formal bookkeeping tools:
Gerbes first appeared in the context ofalgebraic geometry. They were subsequently developed in a more traditional geometric framework by Brylinski (Brylinski 1993). One can think of gerbes as being a natural step in a hierarchy of mathematical objects providing geometric realizations of integralcohomologyclasses.
A more specialised notion of gerbe was introduced byMurrayand calledbundle gerbes. Essentially they are asmoothversion of abelian gerbes belonging more to the hierarchy starting withprincipal bundlesthan sheaves. Bundle gerbes have been used ingauge theoryand alsostring theory. Current work by others is developing a theory ofnon-abelian bundle gerbes.
|
https://en.wikipedia.org/wiki/Gerbe
|
Inmathematics,sineandcosinearetrigonometric functionsof anangle. The sine and cosine of an acuteangleare defined in the context of aright triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of thetriangle(thehypotenuse), and the cosine is theratioof the length of the adjacent leg to that of thehypotenuse. For an angleθ{\displaystyle \theta }, the sine and cosine functions are denoted assin(θ){\displaystyle \sin(\theta )}andcos(θ){\displaystyle \cos(\theta )}.
The definitions of sine and cosine have been extended to anyrealvalue in terms of the lengths of certain line segments in aunit circle. More modern definitions express the sine and cosine asinfinite series, or as the solutions of certaindifferential equations, allowing their extension to arbitrary positive and negative values and even tocomplex numbers.
The sine and cosine functions are commonly used to modelperiodicphenomena such assoundandlight waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to thejyāandkoṭi-jyāfunctions used inIndian astronomyduring theGupta period.
To define the sine and cosine of an acute angleα{\displaystyle \alpha }, start with aright trianglethat contains an angle of measureα{\displaystyle \alpha }; in the accompanying figure, angleα{\displaystyle \alpha }in a right triangleABC{\displaystyle ABC}is the angle of interest. The three sides of the triangle are named as follows:[1]
Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse:[1]sin(α)=oppositehypotenuse,cos(α)=adjacenthypotenuse.{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.}
The other trigonometric functions of the angle can be defined similarly; for example, thetangentis the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. Thereciprocalof sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as:[1]tan(θ)=sin(θ)cos(θ)=oppositeadjacent,cot(θ)=1tan(θ)=adjacentopposite,csc(θ)=1sin(θ)=hypotenuseopposite,sec(θ)=1cos(θ)=hypotenuseadjacent.{\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}}
As stated, the valuessin(α){\displaystyle \sin(\alpha )}andcos(α){\displaystyle \cos(\alpha )}appear to depend on the choice of a right triangle containing an angle of measureα{\displaystyle \alpha }. However, this is not the case as all such triangles aresimilar, and so the ratios are the same for each of them. For example, eachlegof the 45-45-90 right triangle is 1 unit, and its hypotenuse is2{\displaystyle {\sqrt {2}}}; therefore,sin45∘=cos45∘=22{\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}}.[2]The following table shows the special value of each input for both sine and cosine with the domain between0<α<π2{\textstyle 0<\alpha <{\frac {\pi }{2}}}. The input in this table provides various unit systems such as degree, radian, and so on. The angles other than those five can be obtained by using a calculator.[3][4]
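The standard special values for the five commonly tabulated angles (restated here for reference, with the endpoints included by convention) are:

\[
\begin{array}{c|ccccc}
\theta & 0 & \tfrac{\pi}{6} & \tfrac{\pi}{4} & \tfrac{\pi}{3} & \tfrac{\pi}{2} \\
\hline
\sin\theta & 0 & \tfrac{1}{2} & \tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{3}}{2} & 1 \\
\cos\theta & 1 & \tfrac{\sqrt{3}}{2} & \tfrac{\sqrt{2}}{2} & \tfrac{1}{2} & 0
\end{array}
\]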
Thelaw of sinesis useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known.[5]Given a triangleABC{\displaystyle ABC}with sidesa{\displaystyle a},b{\displaystyle b}, andc{\displaystyle c}, and angles opposite those sidesα{\displaystyle \alpha },β{\displaystyle \beta }, andγ{\displaystyle \gamma }, the law states,sinαa=sinβb=sinγc.{\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.}This is equivalent to the equality of the first three expressions below:asinα=bsinβ=csinγ=2R,{\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,}whereR{\displaystyle R}is the triangle'scircumradius.
Thelaw of cosinesis useful for computing the length of an unknown side if two other sides and an angle are known.[5]The law states,a2+b2−2abcos(γ)=c2{\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}}In the case whereγ=π/2{\displaystyle \gamma =\pi /2}from whichcos(γ)=0{\displaystyle \cos(\gamma )=0}, the resulting equation becomes thePythagorean theorem.[6]
Thecross productanddot productare operations on twovectorsinEuclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. Ifa{\displaystyle \mathbb {a} }andb{\displaystyle \mathbb {b} }are vectors, andθ{\displaystyle \theta }is the angle betweena{\displaystyle \mathbb {a} }andb{\displaystyle \mathbb {b} }, then sine and cosine can be defined as:sin(θ)=|a×b||a||b|,cos(θ)=a⋅b|a||b|.{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}}
The sine and cosine functions may also be defined in a more general way by usingunit circle, a circle of radius one centered at the origin(0,0){\displaystyle (0,0)}, formulated as the equation ofx2+y2=1{\displaystyle x^{2}+y^{2}=1}in theCartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle ofθ{\displaystyle \theta }with the positive half of thex{\displaystyle x}-axis. Thex{\displaystyle x}-andy{\displaystyle y}-coordinates of this point of intersection are equal tocos(θ){\displaystyle \cos(\theta )}andsin(θ){\displaystyle \sin(\theta )}, respectively; that is,[7]sin(θ)=y,cos(θ)=x.{\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.}
This definition is consistent with the right-angled triangle definition of sine and cosine when0<θ<π2{\textstyle 0<\theta <{\frac {\pi }{2}}}because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply they{\displaystyle y}-coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle equals the adjacent side, which is thex{\displaystyle x}-coordinate, when0<θ<π2{\textstyle 0<\theta <{\frac {\pi }{2}}}, even under the new definition using the unit circle.[8][9]
Using the unit circle definition has the advantage of making it easy to draw the graphs of the sine and cosine functions. This can be done by rotating a point counterclockwise along the circumference of the circle, depending on the inputθ>0{\displaystyle \theta >0}. For the sine function, if the input isθ=π2{\textstyle \theta ={\frac {\pi }{2}}}, the point has been rotated counterclockwise to lie exactly on they{\displaystyle y}-axis. Ifθ=π{\displaystyle \theta =\pi }, the point is halfway around the circle. Ifθ=2π{\displaystyle \theta =2\pi }, the point has returned to its starting position. It follows that both the sine and cosine functions have therange−1≤y≤1{\displaystyle -1\leq y\leq 1}.[10]
Extending the angle to any real value, the point keeps rotating counterclockwise. This can be done similarly for the cosine function, except that its value is read from thex{\displaystyle x}-coordinate of the point rather than they{\displaystyle y}-coordinate. In other words, both sine and cosine functions areperiodic: adding a full turn of the circle to the angle leaves their values unchanged. Mathematically,[11]sin(θ+2π)=sin(θ),cos(θ+2π)=cos(θ).{\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).}
A functionf{\displaystyle f}is said to beoddiff(−x)=−f(x){\displaystyle f(-x)=-f(x)}, and is said to beeveniff(−x)=f(x){\displaystyle f(-x)=f(x)}. The sine function is odd, whereas the cosine function is even.[12]Both sine and cosine functions are similar, with their difference beingshiftedbyπ2{\textstyle {\frac {\pi }{2}}}. This means,[13]sin(θ)=cos(π2−θ),cos(θ)=sin(π2−θ).{\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}}
Zero is the only realfixed pointof the sine function; in other words the only intersection of the sine function and theidentity functionissin(0)=0{\displaystyle \sin(0)=0}. The only real fixed point of the cosine function is called theDottie number. The Dottie number is the unique real root of the equationcos(x)=x{\displaystyle \cos(x)=x}. The decimal expansion of the Dottie number is approximately 0.739085.[14]
The sine and cosine functions are infinitely differentiable.[15]The derivative of sine is cosine, and the derivative of cosine is negative sine:[16]ddxsin(x)=cos(x),ddxcos(x)=−sin(x).{\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).}Continuing the process in higher-order derivative results in the repeated same functions; the fourth derivative of a sine is the sine itself.[15]These derivatives can be applied to thefirst derivative test, according to which themonotonicityof a function can be defined as the inequality of function's first derivative greater or less than equal to zero.[17]It can also be applied tosecond derivative test, according to which theconcavityof a function can be defined by applying the inequality of the function's second derivative greater or less than equal to zero.[18]The following table shows that both sine and cosine functions have concavity and monotonicity—the positive sign (+{\displaystyle +}) denotes a graph is increasing (going upward) and the negative sign (−{\displaystyle -}) is decreasing (going downward)—in certain intervals.[19]This information can be represented as a Cartesian coordinates system divided into four quadrants.
Both sine and cosine functions can be defined by using differential equations. The pair(cosθ,sinθ){\displaystyle (\cos \theta ,\sin \theta )}is the solution(x(θ),y(θ)){\displaystyle (x(\theta ),y(\theta ))}to the two-dimensional system ofdifferential equationsy′(θ)=x(θ){\displaystyle y'(\theta )=x(\theta )}andx′(θ)=−y(θ){\displaystyle x'(\theta )=-y(\theta )}with theinitial conditionsy(0)=0{\displaystyle y(0)=0}andx(0)=1{\displaystyle x(0)=1}. One could interpret the unit circle in the above definitions as defining thephase space trajectoryof this system of differential equations with the given initial conditions.[citation needed]
Their area under a curve can be obtained by using theintegralwith a certain bounded interval. Their antiderivatives are:∫sin(x)dx=−cos(x)+C∫cos(x)dx=sin(x)+C,{\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,}whereC{\displaystyle C}denotes theconstant of integration.[20]These antiderivatives may be applied to compute the mensuration properties of both sine and cosine functions' curves with a given interval. For example, thearc lengthof the sine curve between0{\displaystyle 0}andt{\displaystyle t}is∫0t1+cos2(x)dx=2E(t,12),{\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),}whereE(φ,k){\displaystyle \operatorname {E} (\varphi ,k)}is theincomplete elliptic integral of the second kindwith modulusk{\displaystyle k}. It cannot be expressed usingelementary functions.[21]In the case of a full period, its arc length isL=42π3Γ(1/4)2+Γ(1/4)22π=2πϖ+2ϖ≈7.6404…{\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots }whereΓ{\displaystyle \Gamma }is thegamma functionandϖ{\displaystyle \varpi }is thelemniscate constant.[22]
Theinverse functionof sine is arcsine or inverse sine, denoted as "arcsin", "asin", orsin−1{\displaystyle \sin ^{-1}}.[23]The inverse function of cosine is arccosine, denoted as "arccos", "acos", orcos−1{\displaystyle \cos ^{-1}}.[a]As sine and cosine are notinjective, their inverses are not exact inverse functions, but partial inverse functions. For example,sin(0)=0{\displaystyle \sin(0)=0}, but alsosin(π)=0{\displaystyle \sin(\pi )=0},sin(2π)=0{\displaystyle \sin(2\pi )=0}, and so on. It follows that the arcsine function is multivalued:arcsin(0)=0{\displaystyle \arcsin(0)=0}, but alsoarcsin(0)=π{\displaystyle \arcsin(0)=\pi },arcsin(0)=2π{\displaystyle \arcsin(0)=2\pi }, and so on. When only one value is desired, the function may be restricted to itsprincipal branch. With this restriction, for eachx{\displaystyle x}in the domain, the expressionarcsin(x){\displaystyle \arcsin(x)}will evaluate only to a single value, called itsprincipal value. The standard range of principal values for arcsin is from−π2{\textstyle -{\frac {\pi }{2}}}toπ2{\textstyle {\frac {\pi }{2}}}, and the standard range for arccos is from0{\displaystyle 0}toπ{\displaystyle \pi }.[24]
The inverse function of both sine and cosine are defined as:[citation needed]θ=arcsin(oppositehypotenuse)=arccos(adjacenthypotenuse),{\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),}where for some integerk{\displaystyle k},sin(y)=x⟺y=arcsin(x)+2πk,ory=π−arcsin(x)+2πkcos(y)=x⟺y=arccos(x)+2πk,ory=−arccos(x)+2πk{\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}}By definition, both functions satisfy the equations:[citation needed]sin(arcsin(x))=xcos(arccos(x))=x{\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x}andarcsin(sin(θ))=θfor−π2≤θ≤π2arccos(cos(θ))=θfor0≤θ≤π{\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}}
According to thePythagorean theorem, the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse yields thePythagorean trigonometric identity: the sum of a squared sine and a squared cosine equals 1:[25][b]sin2(θ)+cos2(θ)=1.{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.}
Sine and cosine satisfy the following double-angle formulas:[26]sin(2θ)=2sin(θ)cos(θ),cos(2θ)=cos2(θ)−sin2(θ)=2cos2(θ)−1=1−2sin2(θ){\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}}
The cosine double angle formula implies that sin2and cos2are, themselves, shifted and scaled sine waves. Specifically,[27]sin2(θ)=1−cos(2θ)2cos2(θ)=1+cos(2θ)2{\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}}The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods.[citation needed]
Both sine and cosine functions can be defined by using aTaylor series, apower seriesinvolving the higher-order derivatives. As mentioned in§ Continuity and differentiation, thederivativeof sine is cosine and that the derivative of cosine is the negative of sine. This means the successive derivatives ofsin(x){\displaystyle \sin(x)}arecos(x){\displaystyle \cos(x)},−sin(x){\displaystyle -\sin(x)},−cos(x){\displaystyle -\cos(x)},sin(x){\displaystyle \sin(x)}, continuing to repeat those four functions. The(4n+k){\displaystyle (4n+k)}-th derivative, evaluated at the point 0:sin(4n+k)(0)={0whenk=01whenk=10whenk=2−1whenk=3{\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}}where the superscript represents repeated differentiation. This implies the following Taylor series expansion atx=0{\displaystyle x=0}. One can then use the theory ofTaylor seriesto show that the following identities hold for allreal numbersx{\displaystyle x}—wherex{\displaystyle x}is the angle in radians.[28]More generally, for allcomplex numbers:[29]sin(x)=x−x33!+x55!−x77!+⋯=∑n=0∞(−1)n(2n+1)!x2n+1{\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}}Taking the derivative of each term gives the Taylor series for cosine:[28][29]cos(x)=1−x22!+x44!−x66!+⋯=∑n=0∞(−1)n(2n)!x2n{\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}}
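A direct way to see the series at work is to sum its first few terms numerically; the following C++ sketch (an illustration, not a recommended way to compute sine) evaluates the truncated series at x = 1 radian and compares it with the library function:

    #include <cmath>
    #include <cstdio>

    // Evaluate the Taylor series of sin(x) about 0, truncated after `terms` terms.
    double sin_taylor(double x, int terms) {
        double term = x;    // first term: x^1 / 1!
        double sum = x;
        for (int n = 1; n < terms; ++n) {
            // Each term is the previous one times -x^2 / ((2n)(2n+1)).
            term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
            sum += term;
        }
        return sum;
    }

    int main() {
        const double x = 1.0;   // radians
        for (int terms = 1; terms <= 6; ++terms)
            std::printf("%d term(s): %.10f\n", terms, sin_taylor(x, terms));
        std::printf("std::sin : %.10f\n", std::sin(x));
    }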
Both sine and cosine functions with multiple angles may appear as theirlinear combination, resulting in a polynomial. Such a polynomial is known as thetrigonometric polynomial. The trigonometric polynomial's ample applications may be acquired inits interpolation, and its extension of a periodic function known as theFourier series. Letan{\displaystyle a_{n}}andbn{\displaystyle b_{n}}be any coefficients, then the trigonometric polynomial of a degreeN{\displaystyle N}—denoted asT(x){\displaystyle T(x)}—is defined as:[30][31]T(x)=a0+∑n=1Nancos(nx)+∑n=1Nbnsin(nx).{\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).}
Thetrigonometric seriescan be defined similarly analogous to the trigonometric polynomial, its infinite inversion. LetAn{\displaystyle A_{n}}andBn{\displaystyle B_{n}}be any coefficients, then the trigonometric series can be defined as:[32]12A0+∑n=1∞Ancos(nx)+Bnsin(nx).{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).}In the case of a Fourier series with a given integrable functionf{\displaystyle f}, the coefficients of a trigonometric series are:[33]An=1π∫02πf(x)cos(nx)dx,Bn=1π∫02πf(x)sin(nx)dx.{\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}}
Both sine and cosine can be extended further viacomplex number, a set of numbers composed of bothrealandimaginary numbers. For real numberθ{\displaystyle \theta }, the definition of both sine and cosine functions can be extended in acomplex planein terms of anexponential functionas follows:[34]sin(θ)=eiθ−e−iθ2i,cos(θ)=eiθ+e−iθ2,{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}}
Alternatively, both functions can be defined in terms of Euler's formula:[34]
e^{i\theta }=\cos(\theta )+i\sin(\theta ),\qquad e^{-i\theta }=\cos(\theta )-i\sin(\theta ).
When plotted on the complex plane, the function e^{ix} for real values of x traces out the unit circle. Sine and cosine can then be recovered as the imaginary and real parts of e^{iθ}:[35]
\sin \theta =\operatorname {Im} (e^{i\theta }),\qquad \cos \theta =\operatorname {Re} (e^{i\theta }).
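This relationship can be checked directly with C99 complex arithmetic; the angle used below is arbitrary.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double theta = 0.7;                 /* arbitrary angle in radians */
    double complex z = cexp(I * theta); /* e^{i*theta}, a point on the unit circle */
    /* sin(theta) and cos(theta) recovered as imaginary and real parts. */
    printf("Im(e^{i t}) = %.15f   sin(t) = %.15f\n", cimag(z), sin(theta));
    printf("Re(e^{i t}) = %.15f   cos(t) = %.15f\n", creal(z), cos(theta));
    return 0;
}
```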
When z = x + iy for real values x and y, where i = \sqrt{-1}, both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as:[citation needed]
\sin z=\sin x\cosh y+i\cos x\sinh y,\qquad \cos z=\cos x\cosh y-i\sin x\sinh y.
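A quick numerical sanity check of this identity, comparing it against the C library's csin for an arbitrarily chosen complex argument:

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.2, y = -0.8;          /* arbitrary real and imaginary parts */
    double complex z = x + y * I;

    /* Identity: sin(x + iy) = sin x cosh y + i cos x sinh y. */
    double complex lhs = csin(z);
    double complex rhs = sin(x) * cosh(y) + I * cos(x) * sinh(y);

    printf("csin(z)  = %.12f %+.12fi\n", creal(lhs), cimag(lhs));
    printf("identity = %.12f %+.12fi\n", creal(rhs), cimag(rhs));
    return 0;
}
```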
Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates (r, θ):
z=r(\cos(\theta )+i\sin(\theta )),
and the real and imaginary parts are
\operatorname {Re} (z)=r\cos(\theta ),\qquad \operatorname {Im} (z)=r\sin(\theta ),
where r and θ represent the magnitude and angle of the complex number z.
For any real number θ and magnitude r, Euler's formula allows this polar form to be written compactly as z = re^{iθ}.
Applying the series definition of the sine and cosine to a complex argument z gives
\sin z=-i\sinh(iz),\qquad \cos z=\cosh(iz),
where sinh and cosh are the hyperbolic sine and cosine. These are entire functions. It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of the argument, as in the identities for sin(x + iy) and cos(x + iy) given above.
Using the partial fraction expansion technique in complex analysis, one can find that the infinite series
\sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}
both converge and are equal to {\frac {\pi }{\sin(\pi z)}}. Similarly, one can show that
{\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.
Using the product expansion technique, one can derive
\sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).
sin(z) is found in the functional equation for the Gamma function,
\Gamma (s)\Gamma (1-s)={\frac {\pi }{\sin(\pi s)}},
which in turn is found in the functional equation for the Riemann zeta-function,
\zeta (s)=2^{s}\pi ^{s-1}\sin \left({\frac {\pi s}{2}}\right)\Gamma (1-s)\zeta (1-s).
As a holomorphic function, sin z is a 2D solution of Laplace's equation:
\Delta u(x,y)=0.
The complex sine function is also related to the level curves ofpendulums.[how?][36][better source needed]
The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string', or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā; sine and chord are closely related in a circle of unit diameter, see Ptolemy's Theorem). This was transliterated in Arabic as jība, which is meaningless in that language and written as jb (جب). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb (جيب), which means 'bosom', 'pocket', or 'fold'.[37][38] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast').[39][40][41] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him, and there is evidence of even earlier usage.[42][43] The English form sine was introduced in Thomas Fale's 1593 Horologiographia.[44]
The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle' as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens.[45]
While the early study of trigonometry can be traced to antiquity, thetrigonometric functionsas they are in use today were developed in the medieval period. Thechordfunction was discovered byHipparchusofNicaea(180–125 BCE) andPtolemyofRoman Egypt(90–165 CE).[46]
The sine and cosine functions are closely related to thejyāandkoṭi-jyāfunctions used inIndian astronomyduring theGupta period(AryabhatiyaandSurya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin.[39][47]
All six trigonometric functions in current use were known inIslamic mathematicsby the 9th century, as was thelaw of sines, used insolving triangles.[48]Al-Khwārizmī(c. 780–850) produced tables of sines, cosines and tangents.[49][50]Muhammad ibn Jābir al-Harrānī al-Battānī(853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°.[50]
In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin, cos, and tan; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first work in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In a paper published in 1682,Leibnizproved that sinxis not analgebraic functionofx.[51]Roger Cotescomputed the derivative of sine in hisHarmonia Mensurarum(1722).[52]Leonhard Euler'sIntroductio in analysin infinitorum(1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviationssin.,cos.,tang.,cot.,sec., andcosec.[39]
There is no standard algorithm for calculating sine and cosine.IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs.[53]
Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²).
A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, orlinearly interpolatebetween the 2 closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage.[citation needed]
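A minimal sketch of the table-plus-interpolation idea in C; the table resolution of one entry per degree and the lack of range reduction are simplifying assumptions for illustration.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_SIZE 360          /* one precomputed value per degree */

static double sine_table[TABLE_SIZE + 1];

/* Fill the table once; index i holds sin(i degrees). */
static void init_table(void) {
    for (int i = 0; i <= TABLE_SIZE; i++)
        sine_table[i] = sin(i * M_PI / 180.0);
}

/* Approximate the sine of an angle given in degrees by linear interpolation
   between the two nearest table entries. Assumes 0 <= deg < 360 for
   simplicity; a real implementation would first reduce the angle. */
static double fast_sin_deg(double deg) {
    int i = (int)deg;            /* lower table index */
    double frac = deg - i;       /* position between the two entries */
    return sine_table[i] + frac * (sine_table[i + 1] - sine_table[i]);
}

int main(void) {
    init_table();
    printf("fast_sin_deg(30.5) = %.6f\n", fast_sin_deg(30.5));
    printf("sin(30.5 deg)      = %.6f\n", sin(30.5 * M_PI / 180.0));
    return 0;
}
```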
TheCORDICalgorithm is commonly used in scientific calculators.
The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated tosinandcos.
Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387.
In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library, and use a double-precision floating-point format.
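A minimal usage example of these math.h functions (on many systems the program must be linked with -lm):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0.5;                       /* angle in radians */
    printf("sin(x)   = %f\n", sin(x));    /* double-precision sine */
    printf("cos(x)   = %f\n", cos(x));    /* double-precision cosine */
    printf("sinf(x)  = %f\n", sinf(0.5f));/* single-precision variant */
    printf("sinh(x)  = %f\n", sinh(x));   /* hyperbolic sine */
    return 0;
}
```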
Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees or π radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases.[54][55] These functions are called sinpi and cospi in MATLAB,[54] OpenCL,[56] R,[55] Julia,[57] CUDA,[58] and ARM.[59] For example, sinpi(x) would evaluate to sin(πx), where x is expressed in half-turns; consequently, the final input to the function, πx, can be interpreted in radians by sin.
The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2π, π, and π/2 in binary floating-point or binary scaled fixed-point always involves a loss of accuracy, since irrational numbers cannot be represented with finitely many binary digits.
Turns also have an accuracy advantage and an efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be losslessly and efficiently computed in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π/2 involves inaccuracies in representing π/2.
For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution.[60] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π/2048 would be incurred.
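A minimal sketch of this idea in C, assuming a 12-bit sensor that counts 0–4095 per revolution as in the example above: wrapping the accumulated count to one revolution is an exact bitwise AND, and the lossy conversion to radians is deferred until (and unless) it is actually needed.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define COUNTS_PER_REV 4096u        /* assumed 12-bit angle sensor, one full turn */

/* Wrap an accumulated count to one revolution. Because COUNTS_PER_REV is a
   power of two, the modulo reduction is an exact bitwise AND. */
static unsigned wrap_count(unsigned count) {
    return count & (COUNTS_PER_REV - 1u);
}

/* Convert a wrapped count to radians only when needed; this step introduces
   the usual rounding from the irrational factor 2*pi / 4096. */
static double count_to_radians(unsigned count) {
    return (double)count * (2.0 * M_PI / COUNTS_PER_REV);
}

int main(void) {
    unsigned raw = 4096u + 1024u;   /* one full turn plus a quarter turn */
    unsigned wrapped = wrap_count(raw);
    printf("wrapped count    = %u (a quarter turn)\n", wrapped);
    printf("angle in radians = %.15f\n", count_to_radians(wrapped));
    return 0;
}
```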
|
https://en.wikipedia.org/wiki/Sine
|
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured.[citation needed] Time can be measured by integers, by real or complex numbers, or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has astaterepresenting a point in an appropriatestate space. This state is often given by atupleofreal numbersor by avectorin a geometrical manifold. Theevolution ruleof the dynamical system is a function that describes what future states follow from the current state. Often the function isdeterministic, that is, for a given time interval only one future state follows from the current state.[1][2]However, some systems arestochastic, in that random events also affect the evolution of the state variables.
The study of dynamical systems is the focus ofdynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics,[3][4]biology,[5]chemistry,engineering,[6]economics,[7]history, andmedicine. Dynamical systems are a fundamental part ofchaos theory,logistic mapdynamics,bifurcation theory, theself-assemblyandself-organizationprocesses, and theedge of chaosconcept.
The concept of a dynamical system has its origins inNewtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either adifferential equation,difference equationor othertime scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to assolving the systemorintegrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as atrajectoryororbit.
Before the advent ofcomputers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
Many people regard the French mathematician Henri Poincaré as the founder of dynamical systems.[8] Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These works included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunovdeveloped many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913,George David Birkhoffproved Poincaré's "Last Geometric Theorem", a special case of thethree-body problem, a result that made him world-famous. In 1927, he published hisDynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called theergodic theorem. Combining insights fromphysicson theergodic hypothesiswithmeasure theory, this theorem solved, at least in principle, a fundamental problem ofstatistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smalemade significant advances as well. His first contribution was theSmale horseshoethat jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych SharkovskydevelopedSharkovsky's theoremon the periods ofdiscrete dynamical systemsin 1964. One of the implications of the theorem is that if a discrete dynamical system on thereal linehas aperiodic pointof period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineerAli H. Nayfehappliednonlinear dynamicsinmechanicalandengineeringsystems.[9]His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance ofmachinesandstructuresthat are common in daily life, such asships,cranes,bridges,buildings,skyscrapers,jet engines,rocket engines,aircraftandspacecraft.[10]
In the most general sense,[11][12] a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set, and Φ is a function
\Phi : U \subseteq (T \times X) \to X
such that, for any x in X:
\Phi (0,x)=x,
\Phi (t_{2},\Phi (t_{1},x))=\Phi (t_{2}+t_{1},x)
for t_{1},\,t_{2}+t_{1}\in I(x) and t_{2}\in I(\Phi (t_{1},x)), where we have defined the set I(x):=\{t\in T:(t,x)\in U\} for any x in X.
In particular, in the case thatU=T×X{\displaystyle U=T\times X}we have for everyxinXthatI(x)=T{\displaystyle I(x)=T}and thus that Φ defines amonoid actionofTonX.
The function Φ(t,x) is called theevolution functionof the dynamical system: it associates to every pointxin the setXa unique image, depending on the variablet, called theevolution parameter.Xis calledphase spaceorstate space, while the variablexrepresents aninitial stateof the system.
We often write \Phi ^{t}(x):=\Phi (t,x) if we take one of the variables as constant. The function
\Phi _{x}:I(x)\to X,\quad t\mapsto \Phi (t,x)
is called the flow through x, and its graph is called the trajectory through x. The set
\gamma _{x}:=\{\Phi (t,x):t\in I(x)\}
is called the orbit through x.
The orbit throughxis theimageof the flow throughx.
A subset S of the state space X is called Φ-invariant if for all x in S and all t in T,
\Phi (t,x)\in S.
Thus, in particular, ifSis Φ-invariant,I(x)=T{\displaystyle I(x)=T}for allxinS. That is, the flow throughxmust be defined for all time for every element ofS.
More commonly there are two classes of definitions for a dynamical system: one is motivated byordinary differential equationsand is geometrical in flavor; and the other is motivated byergodic theoryand ismeasure theoreticalin flavor.
In the geometrical definition, a dynamical system is the tuple ⟨T, M, f⟩. T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to R^n, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
A discrete dynamical system, or discrete-time dynamical system, is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.[13]
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such, cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore calledmultidimensional systems. Such systems are useful for modeling, for example,image processing.
Given a global dynamical system (R,X, Φ) on alocally compactandHausdorfftopological spaceX, it is often useful to study the continuous extension Φ* of Φ to theone-point compactificationX*ofX. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R,X*, Φ*).
In compact dynamical systems thelimit setof any orbit isnon-empty,compactandsimply connected.
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ^{-1}σ ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ^{-1}σ) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φ^{n} = Φ ∘ Φ ∘ ⋯ ∘ Φ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
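As a concrete illustration of iterating a measure-preserving map, the sketch below applies a few iterates of the doubling map Φ(x) = 2x mod 1, a standard example that preserves Lebesgue measure on the unit interval; the point count, iterate count, and histogram check are choices made for this illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Doubling map on the unit interval: Phi(x) = 2x mod 1, a classic example
   of a Lebesgue-measure-preserving transformation. */
static double doubling_map(double x) {
    x *= 2.0;
    return (x >= 1.0) ? x - 1.0 : x;
}

int main(void) {
    enum { POINTS = 100000, ITERATES = 5, BINS = 10 };
    int histogram[BINS] = {0};
    srand(12345);

    for (int i = 0; i < POINTS; i++) {
        double x = (double)rand() / ((double)RAND_MAX + 1.0); /* uniform start */
        for (int n = 0; n < ITERATES; n++)
            x = doubling_map(x);          /* apply the n-th iterate Phi^n */
        histogram[(int)(x * BINS)]++;
    }

    /* If the measure (here: the uniform distribution) is preserved, each bin
       should hold roughly POINTS / BINS points. */
    for (int b = 0; b < BINS; b++)
        printf("bin %d: %d\n", b, histogram[b]);
    return 0;
}
```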
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called theKrylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as theLiouville measureinHamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaoticdissipative systemsthe choice of invariant measure is technically more challenging. The measure needs to be supported on theattractor, but attractors have zeroLebesgue measureand the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, theSinai–Ruelle–Bowen measuresappear to be the natural choice. They are constructed on the geometrical structure ofstable and unstable manifoldsof the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
\dot{x} = v(t, x), \qquad x(0) = x_0,
where \dot{x} denotes the time derivative of the state x, x_0 is the initial condition, and v(t, x) is a vector field on the state space (possibly depending on time).
There is no need for higher order derivatives in the equation, nor for the parametertinv(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called autonomous, when v(t, x) = v(x), or homogeneous, when v(t, 0) = 0 for all t.
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above:
x(t)=\Phi (t,x_{0}).
The dynamical system is then (T,M, Φ).
Some formal manipulation of the system ofdifferential equationsshown above gives a more general form of equations a dynamical system must satisfy
where 𝔊 : (T × M)^{M} → C is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locallyBanach spaces—in which case the differential equations arepartial differential equations.
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is theN-dimensional Euclidean space, so any point in phase space can be represented by a vector withNnumbers. The analysis of linear systems is possible because they satisfy asuperposition principle: ifu(t) andw(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so willu(t) +w(t).
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,
\dot{x}=v(x)=Ax+b,
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
\Phi ^{t}(x_{1})=x_{1}+bt.
Whenbis zero andA≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, ifx0= 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x_{0},
\Phi ^{t}(x_{0})=e^{tA}x_{0}.
Whenb= 0, theeigenvaluesofAdetermine the structure of the phase space. From the eigenvalues and theeigenvectorsofAit is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the caseA≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions forchaotic behavior.
A discrete-time, affine dynamical system has the form of a matrix difference equation:
x_{n+1}=Ax_{n}+b,
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^{−1}b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^{n}x_{0}.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u_{1} is an eigenvector of A with a real eigenvalue smaller than one, then the straight line given by the points along αu_{1}, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
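The following sketch iterates a two-dimensional affine map x_{n+1} = Ax_n + b; the particular diagonal matrix (eigenvalues 0.9 and 0.5, both inside the unit circle) and the starting point are arbitrary choices that make the orbit visibly converge to the fixed point (1 − A)^{−1}b.

```c
#include <stdio.h>

int main(void) {
    /* Affine map x_{n+1} = A x_n + b in two dimensions.  The eigenvalues of
       this A (0.9 and 0.5) lie inside the unit circle, so every orbit
       converges to the unique fixed point (1 - A)^{-1} b = (10, 2). */
    const double A[2][2] = { {0.9, 0.0}, {0.0, 0.5} };
    const double b[2] = { 1.0, 1.0 };

    double x[2] = { -5.0, -4.0 };   /* arbitrary initial condition */

    for (int n = 0; n <= 50; n++) {
        if (n % 10 == 0)
            printf("n=%2d   x = (%9.5f, %9.5f)\n", n, x[0], x[1]);
        double next[2] = {
            A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1],
        };
        x[0] = next[0];
        x[1] = next[1];
    }
    return 0;
}
```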
There are also manyother discrete dynamical systems.
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): asingular pointof the vector field (a point wherev(x) = 0) will remain a singular point under smooth transformations; aperiodic orbitis a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
A flow in most small patches of the phase space can be made very simple. Ifyis a point where the vector fieldv(y) ≠ 0, then there is a change of coordinates for a region aroundywhere the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
Therectification theoremsays that away fromsingular pointsthe dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase spaceMthe dynamical system isintegrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (wherev(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a pointx0in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular tov(x0). These points are aPoincaré sectionS(γ,x0), of the orbit. The flow now defines a map, thePoincaré mapF:S→S, for points starting inSand returning toS. Not all these points will take the same amount of time to come back, but the times will be close to the time it takesx0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x²), so a change of coordinates h can only be expected to simplify F to its linear part:
h^{-1}\circ F\circ h\,(x)=J\cdot x.
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. Ifλ1, ...,λνare the eigenvalues ofJthey will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the formλi– Σ (multiples of other eigenvalues) occurs in the denominator of the terms for the functionh, the non-resonant condition is also known as the small divisor problem.
The results on the existence of a solution to the conjugation equation depend on the eigenvalues ofJand the degree of smoothness required fromh. AsJdoes not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues ofJare not in the unit circle, the dynamics near the fixed pointx0ofFis calledhyperbolicand when the eigenvalues are on the unit circle and complex, the dynamics is calledelliptic.
In the hyperbolic case, theHartman–Grobman theoremgives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear mapJ·x. The hyperbolic case is alsostructurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues ofJin the complex plane, implying that the map is still hyperbolic.
TheKolmogorov–Arnold–Moser (KAM)theorem gives the behavior near an elliptic point.
When the evolution map Φt(or thevector fieldit is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in thephase spaceuntil a special valueμ0is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically afixed point, a periodic orbit, or an invarianttorus) and studies its behavior as a function of the parameterμ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed pointx0of a system familyFμcan be characterized by theeigenvaluesof the first derivative of the systemDFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues ofDFμon the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article onBifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, theRuelle–Takens scenariodescribes how a periodic orbit bifurcates into a torus and the torus into astrange attractor. In another example,Feigenbaum period-doublingdescribes how a stable periodic orbit goes through a series ofperiod-doubling bifurcations.
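The logistic map x ↦ μx(1 − x) is the standard example of the Feigenbaum cascade; the sketch below iterates it at a few parameter values (chosen here for illustration) to show the attractor changing from a fixed point to a period-2 and then a period-4 orbit.

```c
#include <stdio.h>

/* Logistic map x -> mu * x * (1 - x), the standard example of the
   Feigenbaum period-doubling cascade. */
static double logistic(double mu, double x) {
    return mu * x * (1.0 - x);
}

int main(void) {
    const double params[] = { 2.8, 3.2, 3.5 };  /* fixed point, period 2, period 4 */
    for (int p = 0; p < 3; p++) {
        double mu = params[p];
        double x = 0.4;                 /* arbitrary initial condition */
        for (int n = 0; n < 1000; n++)  /* discard the transient */
            x = logistic(mu, x);
        printf("mu = %.2f  attractor samples:", mu);
        for (int n = 0; n < 8; n++) {   /* print a few points on the attractor */
            x = logistic(mu, x);
            printf(" %.5f", x);
        }
        printf("\n");
    }
    return 0;
}
```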
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^{t}(A), and invariance of the volume means that
\mathrm{vol}(A)=\mathrm{vol}(\Phi ^{t}(A)).
In theHamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by theLiouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered therecurrence theorem: Assume the phase space has a finite Liouville volume and letFbe a phase space volume-preserving map andAa subset of the phase space. Then almost every point ofAreturns toAinfinitely often. The Poincaré recurrence theorem was used byZermeloto object toBoltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called theergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a regionAis vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development ofstatistical mechanicsand a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems.Koopmanapproached the study of ergodic systems by the use offunctional analysis. An observableais a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φt. This introduces an operatorUt, thetransfer operator,
By studying the spectral properties of the linear operatorUit becomes possible to classify the ergodic properties of Φt. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φtgets mapped into an infinite-dimensional linear problem involvingU.
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed inequilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with theBoltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems.SRB measuresreplace the Boltzmann factor and they are defined on attractors of chaotic systems.
Simple nonlinear dynamical systems, includingpiecewise linearsystems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been calledchaos.Hyperbolic systemsare precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems thetangent spacesperpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (thestable manifold) and another of the points that diverge from the orbit (theunstable manifold).
This branch ofmathematicsdeals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to asteady statein the long term, and if so, what are the possibleattractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue.Meteorologyhas been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. ThePomeau–Manneville scenarioof thelogistic mapand theFermi–Pasta–Ulam–Tsingou problemarose with just second-degree polynomials; thehorseshoe mapis piecewise linear.
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[14] meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics, so solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation
\dot{y}=-\operatorname{sgn}(y)\,{\sqrt {|y|}},\qquad y(0)=1,
admits the finite-duration solution
y(t)={\tfrac {1}{4}}\left(1-{\tfrac {t}{2}}+\left|1-{\tfrac {t}{2}}\right|\right)^{2},
which is zero for t ≥ 2 and is not Lipschitz continuous at its ending time t = 2.
|
https://en.wikipedia.org/wiki/Discrete-time_dynamical_system
|
Real-time data(RTD) is information that is delivered immediately after collection. There is no delay in the timeliness of the information provided. Real-time data is often used for navigation or tracking.[1]Such data is usuallyprocessedusingreal-time computingalthough it can also be stored for later or off-linedata analysis.
Real-time data is not the same asdynamic data. Real-time data can be dynamic (e.g. a variable indicating current location) or static (e.g. a fresh log entry indicating location at a specific time).
Real-time economic data, and otherofficial statistics, are often based on preliminary estimates, and therefore are frequently adjusted as better estimates become available. These later adjusted data are called "revised data".
The terms real-time economic data and real-time economic analysis were coined[2] by Francis X. Diebold and Glenn D. Rudebusch.[3] Macroeconomist Glenn D. Rudebusch defined real-time analysis as 'the use of sequential information sets that were actually available as history unfolded.'[4] Macroeconomist Athanasios Orphanides has argued that economic policy rules may have very different effects when based on error-prone real-time data (as they inevitably are in reality) than they would if policy makers followed the same rules but had more accurate data available.[5]
In order to better understand the accuracy of economic data and its effects on economic decisions, some economic organizations, such as theFederal Reserve Bank of St. Louis,Federal Reserve Bank of Philadelphiaand the Euro-Area Business Cycle Network (EABCN), have made databases available that contain both real-time data and subsequent revised estimates of the same data.
Real-time bidding is programmatic real-time auctioning that sells digital-ad impressions. Entities on both the buying and selling sides require almost instantaneous access to data in order to make decisions, forcing real-time data to the forefront of their needs.[6] To support these needs, new strategies and technologies, such as Druid, have arisen and are quickly evolving.[7]
|
https://en.wikipedia.org/wiki/Real-time_data
|
Graphical perception is the human capacity for visually interpreting information on graphs and charts. Both quantitative and qualitative information can be said to be encoded into the image, and the human capacity to interpret it is sometimes called decoding.[1] Understanding what humans discern easily versus what our brains have more difficulty decoding is fundamental to good statistical graphics design, in which clarity, transparency, accuracy, and precision in data display and interpretation are essential for understanding what the data in a graph convey about the underlying science.[2][3][4][5][6][7]
Graphical perception is achieved in dimensions or steps of discernment by:
Cleveland and McGill's experiments[1] to elucidate which graphical elements humans detect most accurately are a fundamental component of good statistical graphics design principles.[2][3][5][6][8][9][10][11][12] In practical terms, graphs that display relative position on a common scale are decoded most accurately and are therefore most effective; a graph type that uses this element is the dot plot. Conversely, angles are perceived with less accuracy; an example is the pie chart. Humans do not naturally order color hues, and only a limited number of hues can be discriminated in one graphic.
Graphic designs that exploit visual pre-attentive processing in their assembly are why a picture can be worth a thousand words: they use the brain's ability to perceive patterns. Not all graphs are designed with pre-attentive processing in mind. For example, a design feature such as table look-up requires the brain to work harder and take longer to decode than a design that exploits our ability to discern patterns.[3]
Graphic design that readily answers the scientific questions of interest will include appropriateestimation. Details for choosing the appropriate graph type for continuous andcategorical dataand for grouping have been described.[6][13]Graphics principles for accuracy, clarity and transparency have been detailed[2][3][4][14]and key elements summarized.[15]
|
https://en.wikipedia.org/wiki/Graphical_perception
|
In computersystems programming, aninterrupt handler, also known as aninterrupt service routine(ISR), is a special block of code associated with a specificinterruptcondition. Interrupt handlers are initiated by hardware interrupts, software interrupt instructions, or softwareexceptions, and are used for implementingdevice driversor transitions between protected modes of operation, such assystem calls.
The traditional form of interrupt handler is the hardware interrupt handler. Hardware interrupts arise from electrical conditions or low-level protocols implemented indigital logic, are usually dispatched via a hard-coded table of interrupt vectors, asynchronously to the normal execution stream (as interrupt masking levels permit), often using a separate stack, and automatically entering into a different execution context (privilege level) for the duration of the interrupt handler's execution. In general, hardware interrupts and their handlers are used to handle high-priority conditions that require the interruption of the current code theprocessoris executing.[1][2]
Later it was found convenient for software to be able to trigger the same mechanism by means of a software interrupt (a form of synchronous interrupt). Rather than using a hard-coded interrupt dispatch table at the hardware level, software interrupts are often implemented at theoperating systemlevel as a form ofcallback function.
Interrupt handlers have a multitude of functions, which vary based on what triggered the interrupt and the speed at which the interrupt handler completes its task. For example, pressing a key on acomputer keyboard,[1]or moving themouse, triggers interrupts that call interrupt handlers which read the key, or the mouse's position, and copy the associated information into the computer's memory.[2]
An interrupt handler is a low-level counterpart ofevent handlers. However, interrupt handlers have an unusual execution context, many harsh constraints in time and space, and their intrinsically asynchronous nature makes them notoriously difficult to debug by standard practice (reproducible test cases generally don't exist), thus demanding a specialized skillset—an important subset ofsystem programming—of software engineers who engage at the hardware interrupt layer.
Unlike other event handlers, interrupt handlers are expected to set interrupt flags to appropriate values as part of their core functionality.
Even in a CPU which supports nested interrupts, a handler is often reached with all interrupts globally masked by a CPU hardware operation. In this architecture, an interrupt handler would normally save the smallest amount of context necessary, and then reset the global interrupt disable flag at the first opportunity, to permit higher priority interrupts to interrupt the current handler. It is also important for the interrupt handler to quell the current interrupt source by some method (often toggling a flag bit of some kind in a peripheral register) so that the current interrupt isn't immediately repeated on handler exit, resulting in an infinite loop.
Exiting an interrupt handler with the interrupt system in exactly the right state under every eventuality can sometimes be an arduous and exacting task, and its mishandling is the source of many serious bugs, of the kind that halt the system completely. These bugs are sometimes intermittent, with the mishandled edge case not occurring for weeks or months of continuous operation. Formal validation of interrupt handlers is tremendously difficult, while testing typically identifies only the most frequent failure modes, thus subtle, intermittent bugs in interrupt handlers often ship to end customers.
In a modern operating system, upon entry the execution context of a hardware interrupt handler is subtle.
For reasons of performance, the handler will typically be initiated in the memory and execution context of the running process, to which it has no special connection (the interrupt is essentially usurping the running context—process time accounting will often accrue time spent handling interrupts to the interrupted process). However, unlike the interrupted process, the interrupt is usually elevated by a hard-coded CPU mechanism to a privilege level high enough to access hardware resources directly.
In a low-level microcontroller, the chip might lack protection modes and have nomemory management unit(MMU). In these chips, the execution context of an interrupt handler will be essentially the same as the interrupted program, which typically runs on a small stack of fixed size (memory resources have traditionally been extremely scant at the low end). Nested interrupts are often provided, which exacerbates stack usage. A primary constraint on the interrupt handler in this programming endeavour is to not exceed the available stack in the worst-case condition, requiring the programmer to reason globally about the stack space requirement of every implemented interrupt handler and application task.
When allocated stack space is exceeded (a condition known as astack overflow), this is not normally detected in hardware by chips of this class. If the stack is exceeded into another writable memory area, the handler will typically work as expected, but the application will fail later (sometimes much later) due to the handler's side effect of memory corruption. If the stack is exceeded into a non-writable (or protected) memory area, the failure will usually occur inside the handler itself (generally the easier case to later debug).
In the writable case, one can implement a sentinel stack guard—a fixed value right beyond the end of the legal stack whose valuecanbe overwritten, but never will be if the system operates correctly. It is common to regularly observe corruption of the stack guard with some kind of watch dog mechanism. This will catch the majority of stack overflow conditions at a point in time close to the offending operation.
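A minimal sketch of a sentinel stack guard in C; the guard value, the statically allocated stack region, the downward-growing-stack assumption, and the watchdog hook are all illustrative assumptions, since real placement depends on the toolchain and linker script.

```c
#include <stdint.h>

#define STACK_WORDS       256
#define STACK_GUARD_MAGIC 0xDEADBEEFu

/* A statically allocated stack region; on a real microcontroller its
   placement would come from the linker script.  Assuming the stack grows
   downward toward lower addresses, the first word sits just beyond the legal
   stack and acts as the sentinel: it can be overwritten, but never should be
   if the system operates correctly. */
static uint32_t task_stack[STACK_WORDS];
#define STACK_GUARD (task_stack[0])

void stack_guard_init(void) {
    STACK_GUARD = STACK_GUARD_MAGIC;
}

/* Called periodically, e.g. from a watchdog or idle task (a hypothetical
   hook).  A corrupted sentinel means some code overflowed the stack, even if
   the system still appears to run normally. */
int stack_guard_intact(void) {
    return STACK_GUARD == STACK_GUARD_MAGIC;
}
```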
In a multitasking system, each thread of execution will typically have its own stack. If no special system stack is provided for interrupts, interrupts will consume stack space from whatever thread of execution is interrupted. These designs usually contain an MMU, and the user stacks are usually configured such that stack overflow is trapped by the MMU, either as a system error (for debugging) or to remap memory to extend the space available. Memory resources at this level of microcontroller are typically far less constrained, so that stacks can be allocated with a generous safety margin.
In systems supporting high thread counts, it is better if the hardware interrupt mechanism switches the stack to a special system stack, so that none of the thread stacks need account for worst-case nested interrupt usage. Tiny CPUs as far back as the 8-bitMotorola 6809from 1978 have provided separate system and user stack pointers.
For many reasons, it is highly desired that the interrupt handler execute as briefly as possible, and it is highly discouraged (or forbidden) for a hardware interrupt to invoke potentially blocking system calls. In a system with multiple execution cores, considerations ofreentrancyare also paramount. If the system provides for hardwareDMA,concurrencyissues can arise even with only a single CPU core. (It is not uncommon for a mid-tier microcontroller to lack protection levels and an MMU, but still provide a DMA engine with many channels; in this scenario, many interrupts are typicallytriggeredby the DMA engine itself, and the associated interrupt handler is expected to tread carefully.)
A modern practice has evolved to divide hardware interrupt handlers into front-half and back-half elements. The front-half (or first level) receives the initial interrupt in the context of the running process, does the minimal work to restore the hardware to a less urgent condition (such as emptying a full receive buffer) and then marks the back-half (or second level) for execution in the near future at the appropriate scheduling priority; once invoked, the back-half operates in its own process context with fewer restrictions and completes the handler's logical operation (such as conveying the newly received data to an operating system data queue).
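A schematic sketch of this split for a hypothetical UART receive interrupt; every function named below (uart_rx_ready, uart_read_byte, ring_buffer_put, schedule_bottom_half, deliver_data_to_os_queue) is a placeholder standing in for platform-specific operations, not any particular kernel's API.

```c
#include <stdint.h>

/* Hypothetical platform hooks; declared here so the sketch is self-contained. */
extern int     uart_rx_ready(void);            /* device has a byte waiting      */
extern uint8_t uart_read_byte(void);           /* read a byte, quelling the IRQ  */
extern void    ring_buffer_put(uint8_t b);     /* small buffer shared with SLIH  */
extern void    schedule_bottom_half(void (*fn)(void)); /* defer work             */
extern void    deliver_data_to_os_queue(void);

/* Second-level ("back half") handler: runs later, in a normal scheduling
   context, and may take its time or even block. */
static void uart_bottom_half(void) {
    deliver_data_to_os_queue();
}

/* First-level ("front half") handler: runs with interrupts constrained, so it
   only drains the hardware FIFO to restore the device to a less urgent state
   and defers everything else to the back half. */
void uart_interrupt_handler(void) {
    while (uart_rx_ready())
        ring_buffer_put(uart_read_byte());
    schedule_bottom_half(uart_bottom_half);
}
```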
In several operating systems—Linux,Unix,[citation needed]macOS,Microsoft Windows,z/OS,DESQviewand some other operating systems used in the past—interrupt handlers are divided into two parts: theFirst-Level Interrupt Handler(FLIH) and theSecond-Level Interrupt Handlers(SLIH). FLIHs are also known ashard interrupt handlersorfast interrupt handlers, and SLIHs are also known asslow/soft interrupt handlers, orDeferred Procedure Callsin Windows.
A FLIH implements at minimum platform-specific interrupt handling similar tointerrupt routines. In response to an interrupt, there is acontext switch, and the code for the interrupt is loaded and executed. The job of a FLIH is to quickly service the interrupt, or to record platform-specific critical information which is only available at the time of the interrupt, andschedulethe execution of a SLIH for further long-lived interrupt handling.[2]
FLIHs causejitterin process execution. FLIHs also mask interrupts. Reducing the jitter is most important forreal-time operating systems, since they must maintain a guarantee that execution of specific code will complete within an agreed amount of time. To reduce jitter and to reduce the potential for losing data from masked interrupts, programmers attempt to minimize the execution time of a FLIH, moving as much as possible to the SLIH. With the speed of modern computers, FLIHs may implement all device and platform-dependent handling, and use a SLIH for further platform-independent long-lived handling.
FLIHs which service hardware typically mask their associated interrupt (or keep it masked as the case may be) until they complete their execution. An (unusual) FLIH which unmasks its associated interrupt before it completes is called areentrant interrupt handler. Reentrant interrupt handlers might cause astack overflowfrom multiplepreemptionsby the sameinterrupt vector, and so they are usually avoided. In apriority interruptsystem, the FLIH also (briefly) masks other interrupts of equal or lesser priority.
A SLIH completes long interrupt processing tasks similarly to a process. SLIHs either have a dedicatedkernelthread for each handler, or are executed by a pool of kernel worker threads. These threads sit on arun queuein the operating system until processor time is available for them to perform processing for the interrupt. SLIHs may have a long-lived execution time, and thus are typically scheduled similarly to threads and processes.
In Linux, FLIHs are calledupper half, and SLIHs are calledlower halforbottom half.[1][2]This is different from naming used in other Unix-like systems, where both are a part ofbottom half.[clarification needed]
|
https://en.wikipedia.org/wiki/Interrupt_handler
|
In anoptimization problem, aslack variableis a variable that is added to aninequality constraintto transform it into an equality constraint. A non-negativity constraint on the slack variable is also added.[1]: 131
Slack variables are used in particular inlinear programming. As with the other variables in the augmented constraints, the slack variable cannot take on negative values, as thesimplex algorithmrequires them to be positive or zero.[2]
Slack variables are also used in theBig M method.
By introducing the slack variable s ≥ 0, the inequality Ax ≤ b can be converted to the equation Ax + s = b.
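For instance, with arbitrary numbers chosen only for illustration, the single inequality constraint
x_{1} + 2x_{2} \leq 10
becomes the equality constraint
x_{1} + 2x_{2} + s = 10, \qquad s \geq 0,
where the value of s measures how much "slack" remains in the original inequality at a given point.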
Slack variables give an embedding of a polytope P ↪ (R_{≥0})^{f} into the standard f-orthant, where f is the number of constraints (facets of the polytope). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized), and is expressed in terms of the constraints (linear functionals, covectors).
Slack variables aredualtogeneralized barycentric coordinates, and, dually to generalized barycentric coordinates (which are not unique but can all be realized), are uniquely determined, but cannot all be realized.
Dually, generalized barycentric coordinates express a polytope with n vertices (dual to facets), regardless of dimension, as the image of the standard (n − 1)-simplex, which has n vertices – the map is onto: Δ^{n−1} ↠ P, and expresses points in terms of the vertices (points, vectors). The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having unique generalized barycentric coordinates.
|
https://en.wikipedia.org/wiki/Slack_variable
|
Virtual worldsare playing an increasingly important role in education, especially inlanguage learning. By March 2007 it was estimated that over 200 universities or academic institutions were involved inSecond Life(Cooke-Plagwitz, p. 548).[1]Joe Miller, Linden Lab Vice President of Platform and Technology Development, claimed in 2009 that "Language learning is the most common education-based activity in Second Life".[2]Many mainstream language institutes and private language schools are now using 3Dvirtual environmentsto support language learning.
Virtual worldsdate back to the adventure games and simulations of the 1970s, for exampleColossal Cave Adventure, a text-only simulation in which the user communicated with the computer by typing commands at the keyboard. These early adventure games and simulations led toMUDs(Multi-user domains) andMOOs(Multi-user domains object-oriented), which language teachers were able to exploit for teaching foreign languages and intercultural understanding (Shield 2003).[3]
Three-dimensional virtual worlds such asTravelerandActive Worlds, both of which appeared in the 1990s, were the next important development.Travelerincluded the possibility of audio communication (but not text chat) between avatars represented as disembodied heads in a three-dimensional abstract landscape. Svensson (2003) describes the Virtual Wedding Project, in which advanced students of English made use ofActive Worldsas an arena for constructivist learning.[4]TheAdobe Atmospheresoftware platform was also used to promote language learning in the Babel-M project (Williams & Weetman 2003).[5]
The 3D world ofSecond Lifewas launched in 2003. Initially perceived as anotherrole-playing game(RPG), it began to attract the attention of language teachers. 2005 saw the first large-scale language school,Languagelab.com, open its doors in Second Life. By 2007, Languagelab.com's customVoIP(audio communication) solution was integrated with Second Life. Prior to that, teachers and students used separate applications for voice chat.[6]
Many universities, such as Monash University,[7]and language institutes, such asThe British Council,Confucius Institute,Instituto Cervantesand the Goethe-Institut,[8]have islands in Second Life specifically for language learning. Many professional and research organisations support virtual world language learning through their activities in Second Life.EUROCALLandCALICO, two leading professional associations that promote language learning with the aid of new technologies, maintain a joint Virtual Worlds Special Interest Group (VW SIG) and a headquarters in Second Life.[9]
Recent examples of creating sims in virtual worlds specifically for language education include VIRTLANTIS, which has been a free resource for language learners and teachers and an active community of practice since 2006,[10]the EU-funded NIFLAR project,[11]the EU-funded AVALON project,[12]and the EduNation Islands, which have been set up as a community of educators aiming to provide information about and facilities for language learning and teaching.[13]NIFLAR is implemented both in Second Life and inOpenSim.[14]Numerous other examples are described by Molka-Danielsen & Deutschmann (2009),[15]and Walker, Davies & Hewer (2012).[16]
Since 2007 a series of conferences known as SLanguages have taken place, bringing together practitioners and researchers in the field of language education in Second Life for a 24-hour event to celebrate languages and cultures within the 3D virtual world.[17]
With the decline of second life due to increasing support for open source platforms[18]many independent language learning grids such as English Grid[19]and Chatterdale[20]have emerged.
Almost all virtual world educational projects envisage a blended learning approach whereby the language learners are exposed to a 3D virtual environment for a specific activity or time period. Such approaches may combine the use of virtual worlds with other online and offline tools, such as 2D virtual learning environments (e.g. Moodle) or physical classrooms. SLOODLE, for example, is an open-source project which integrates the multi-user virtual environments of Second Life and/or OpenSim with the Moodle learning-management system.[21] Some language schools offer a complete language learning environment through a virtual world, e.g. Languagelab.com and Avatar Languages.
Virtual worlds such as Second Life are used for theimmersive,[22]collaborative[23]and task-based, game-like[24]opportunities they offer language learners. As such, virtual world language learning can be considered to offer distinct (although combinable) learning experiences.
The "Six learnings framework" is a pedagogical outline developed for virtual world education in general. It sets out six possible ways to view an educational activity.[28]
3D virtual worlds are often used forconstructivistlearning because of the opportunities for learners to explore, collaborate and be immersed within an environment of their choice. Some virtual worlds allow users to build objects and to change the appearance of their avatar and of their surroundings.[31]Constructivist approaches such astask-based language learningandDogmeare applied to virtual world language learning because of the scope for learners to socially co-construct knowledge, in spheres of particular relevance to the learner.
Task-based language learning(TBLL) has been commonly applied to virtual world language education. Task-based language learning focuses on the use of authentic language and encourages students to do real life tasks using the language being learned.[32]Tasks can be highly transactional, where the student is carrying out everyday tasks such as visiting the doctor at the Chinese Island of Monash University in Second Life. Incidental knowledge about the medical system in China and cultural information can also be gained at the same time.[33]
Other tasks may focus on more interactional language, such as those that involve more social activities or interviews within a virtual world.
Dogme language teachingis an approach that is essentially communicative, focusing mainly on conversation between learners and teacher rather than conventional textbooks. Although Dogme is perceived by some teachers as being anti-technology, it nevertheless appears to be particularly relevant to virtual world language learning because of the social, immersive and creative experiences offered by virtual worlds and the opportunities they offer for authentic communication and a learner-centred approach.[34]
Virtual world WebQuests (also referred to as SurReal Quests[35]) combine the concept of 2D WebQuests with the immersive and social experiences of 3D virtual worlds. Learners develop texts, audios or podcasts based on their research, part of which is within a virtual world.
The concept of real-lifelanguage villageshas been replicated within virtual worlds to create a language immersion environment for language learners in their own country.[36]The Dutch Digitale School has built two virtual language villages, Chatterdale (English) and Parolay (French), for secondary education students on the OpenSim grid.[37]
Hundsberger (2009, p. 18)[38]defines a virtual classroom thus:
"A virtual classroom in SL sets itself apart from other virtual classrooms in that an ordinary classroom is the place to learn a language whereas the SL virtual classroom is the place to practise a language. The connection to the outside world from a language lab is a 2D connection, but increasingly people enjoy rich and dynamic 3D environments such as SL as can be concluded from the high number of UK universities active in SL."
To what extent a virtual classroom should offer only language practice rather than teaching a language as in a real-life classroom is a matter for debate. Hundsberger's view (p. 18) is that "[...] SL classrooms are not viewed as a replacement for real life classrooms. SL classrooms are an additional tool to be used by the teacher/learner."
Language learning can take place in public spaces within virtual worlds. This offers greater flexibility with locations and students can choose the locations themselves, which enables a more constructivist approach.
The wide variety of replica places in Second Life, e.g. Barcelona, Berlin, London and Paris, offers opportunities for language learning throughvirtual tourism. Students can engage in conversation with native speakers who people these places, take part in conducted tours in different languages and even learn how to use Second Life in a language other than English.
The Hypergrid Adventurers Club is an open group of explorers who discuss and visit many different OpenSim virtual worlds. By usinghypergridconnectivity, avatars can jump between completely different OpenSim grids while maintaining a singular identity and inventory.[39]
The TAFE NSW-Western Institute Virtual Tourism Project commenced in 2010 and was funded by the Australian Flexible Learning Framework's eLearning Innovations Project. It is focused on developing virtual worlds learning experiences for TVET Tourism students and located on the joycadiaGrid.[40]
Virtual worlds offer exceptional opportunities forautonomous learning. The videoLanguage learning in Second Life: an Introductionby Helen Myers (Karelia Kondor in SL) is a good illustration of an adult learner's experiences of her introduction to SL and in learning Italian.[41]
Tandem learning, or buddy learning, takes autonomous learning one step further. This form of learning involves two people with different native languages working together as a pair in order to help one another to improve their language skills.[42]Each partner helps the other through explanations in the foreign language. As this form of learning is based on communication between members of different language communities and cultures, it also facilitatesintercultural learning. A tandem learning group, Teach You Teach Me (Language Buddies), can be found in Second Life.
The term holodeck derives from the Star Trek TV series and feature films, in which a holodeck is depicted as an enclosed room in which simulations can be created for training or entertainment. Holodecks offer exciting possibilities of calling up a range of instantly available simulations that can be used for entertainment, presentations, conferencing and, of course, teaching and learning. For example, if students of hospitality studies are being introduced to the language used in checking in at a hotel, a simulation of a hotel reception area can be generated instantly by selecting the chosen simulation from a holodeck "rezzer", a device that stores and generates different scenarios. Holodecks can also be used to encourage students to describe a scene or even to build one.[43] Holodecks are commonly used for a range of role-plays.[44]
Acave automatic virtual environment(CAVE) is an immersive virtual reality (VR) environment where projectors are directed to three, four, five or six of the walls of a room-sized cube. The CAVE is a large theatre that sits in a larger room. The walls of the CAVE are made up of rear-projection screens, and the floor is made of a down-projection screen. High-resolution projectors display images on each of the screens by projecting the images onto mirrors which reflect the images onto the projection screens. The user will go inside the CAVE wearing special glasses to allow the 3D graphics that are generated by the CAVE to be seen. With these glasses, people using the CAVE can actually see objects floating in the air, and can walk around them, getting a realistic view of what the object would look like when they walk around it.
O'Brien, Levy & Orich (2009) describe the viability of CAVE and PC technology as environments for assisting students to learn a foreign language and to experience the target culture in ways that are impossible through the use of other technologies.[45]
Immersion brought by virtual worlds is augmented withartificial intelligencecapabilities for language learning. Learners can interact with the agents in the scene using speech and gestures. Dialogue interactions with automatic interlocutors provide a language learner with access to authentic and immersive conversations to role-play and learn viatask-based language learningin a new immersive classroom that uses AI and VR.[46][47]
Earlier virtual worlds, with the exception ofTraveler(1996), offered only text chat. Voice chat was a later addition.[48]Second Life did not introduce voice capabilities until 2007. Prior to this, independentVoIPsystems, e.g.Ventrilo, were used. Second Life's current internal voice system has the added ability to reproduce the effect of distance on voice loudness, so that there is an auditory sense of space amongst users.[6]
Other virtual worlds, such as Twinity, also offer internal voice systems. Browser-based 3D virtual environments tend to offer only text-chat communication, although voice chat seems likely to become more widespread.[49] Vivox[50] is one of the leading integrated voice platforms for the social web, providing a Voice Toolbar for developers of virtual worlds and multiplayer games. Vivox has also spread into OpenSim; for example, Avination offers in-world Vivox voice at no charge to its residents and region renters, as well as to customers who host private grids with the company.[51] English Grid began offering language learning and voice chat for language learners using Vivox in May 2012.[52]
The advent of voice chat in Second Life in 2007 was a major breakthrough. Communicating with one's voice is thesine qua nonof language learning and teaching, but voice chat is not without its problems. Many Second Life users report on difficulties with voice chat, e.g. the sound being too soft, too loud or non-existent – or continually breaking up. This may be due to glitches in the Second Life software itself, but it is often due to individual users' poor understanding of how to set up audio on their computers and/or of inadequate bandwidth. A separate voice chat channel outside Second Life, e.g.Skype, may in such cases offer a solution.
Owning or renting land in a virtual world is necessary for educators who wish to create learning environments for their students. Educators can then use the land to create permanent structures or temporary structures embedded withinholodecks, for example the EduNation Islands in Second Life.[13]The land can also be used for students undertaking building activities. Students may also use public sandboxes, but they may prefer to exhibit their creations more permanently on owned or rented land.
The Immersive Education Initiative revealed (October 2010) that it would provide free permanent virtual world land in OpenSim for one year to every school and non-profit organization that has at least one teacher, administrator, or student in attendance of any Immersive Education Initiative Summit.[53]
Many islands in Second Life have language- or culture-specific communities that offer language learners easy ways to practise a foreign language.[54]Second Life is the widest-used 3D world among members of the language teaching community, but there are many alternatives. General-purpose virtual environments such as Hangout and browser-based 3D environments such as ExitReality and 3DXplorer offer 3D spaces for social learning, which may also include language learning.Google Street ViewandGoogle Earth[55]also have a role to play in language learning and teaching.
Twinityreplicates the real life cities of Berlin, Singapore, London and Miami, and offers language learners virtual locations with specific languages being spoken. Zon has been created specifically for learners of Chinese.[56]English Grid[57]has been developed by education and training professionals as a research platform for delivering English language instruction using opensim.
OpenSim is employed as free open source standalone software, thus enabling a decentralized configuration of all educators, trainers, and users. Scott Provost, Director at the Free Open University, Washington DC, writes: "The advantage of Standalone is that Asset server and Inventory server are local on the same server and well connected to your sim. With Grids that is never the case. With Grids/Clouds that is never the case. On OSGrid with 5,000 regions and hundreds of users scalability problems are unavoidable. We plan on proposing 130,000 Standalone mega regions (in US schools) with Extended UPnP Hypergrid services. The extended services would include a suitcase or limited assets that would be live on the client".[58] Such a standalone sim offers 180,000 prims for building, and can be distributed pre-configured together with a virtual world viewer using a USB storage stick or SD card. Pre-configured female and male avatars can also be stored on the stick, or even full-sim builds can be downloaded for targeted audiences without virtual world experience. This is favorable for introductory users who want a sandbox on demand and are unsure how to get started.
There is no shortage of choices of virtual world platforms. The following lists describe a variety of different virtual world platforms, their features and their target audiences:
Virtual World Language Learning is a rapidly expanding field and it converges with other closely related areas, such as the use of MMOGs, SIEs and Augmented Reality Language Learning (ARLL).
MMOGs (massively multiplayer online games) are also used to support language learning, for example the World of Warcraft in School project.[68]
SIEs are engineered 3D virtual spaces that integrate online gaming aspects. They are specifically designed for educational purposes and offer learners a collaborative and constructionist environment. They also allow the creators/designers to focus on specific skills and pedagogical objectives.[69]
Augmented reality(AR) is the combination of real-world and computer-generated data so that computer generated objects are blended into real time projection of real life activities. Mobile AR applications enable immersive and information-rich experiences in the real world and are therefore blurring the differences between real life and virtual worlds. This has important implications for m-Learning (Mobile Assisted Language Learning), but hard evidence on how AR is used in language learning and teaching is difficult to come by.[70]
The main aim is to promote social integration among users located in the same physical space, so that multiple users may access a shared space populated by virtual objects while remaining grounded in the real world.
|
https://en.wikipedia.org/wiki/Virtual_world_language_learning
|
In natural language processing, a sentence embedding is a representation of a sentence as a vector of numbers which encodes meaningful semantic information.[1][2][3][4][5][6][7]
State of the art embeddings are based on the learned hidden layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated [CLS] token prepended to the beginning of each sentence input into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance[8] by fine-tuning BERT's [CLS] token embeddings through the use of a siamese neural network architecture on the SNLI dataset.
Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder-decoder structure for the task of predicting neighboring sentences; this has been shown to achieve worse performance than approaches such as InferSent or SBERT.
An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW).[9] However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE),[10] which demonstrated performance improvements in downstream text classification tasks.
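In symbols, this averaging scheme amounts to the following (a sketch of the idea, not a formula quoted from the cited works): for a sentence whose words $w_1, \dots, w_n$ have word vectors $v_{w_1}, \dots, v_{w_n}$, the sentence embedding is
$$s = \frac{1}{n} \sum_{i=1}^{n} v_{w_i},$$
so every word contributes equally and word order is ignored.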
In recent years, sentence embedding has seen a growing level of interest due to its applications in natural-language-queryable knowledge bases through the use of vector indexing for semantic search. LangChain, for instance, utilizes sentence transformers for the purpose of indexing documents. In particular, an index is generated by computing embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then, given a query in natural language, the embedding for the query can be generated. A top-k similarity search algorithm is then used between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information for question answering tasks. This approach is also known formally as retrieval-augmented generation.[11]
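The retrieval step can be illustrated with a small, self-contained C sketch of brute-force semantic search. The embeddings below are tiny made-up vectors; a real system would obtain them from a sentence-embedding model and would typically use an approximate nearest-neighbour index rather than a linear scan.

    #include <math.h>
    #include <stdio.h>

    #define DIM 4      /* toy embedding dimension; real models use hundreds */
    #define CHUNKS 3   /* number of stored document chunks                  */

    /* Cosine similarity between two embedding vectors. */
    static double cosine(const double a[DIM], const double b[DIM]) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < DIM; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (sqrt(na) * sqrt(nb));
    }

    int main(void) {
        /* Made-up embeddings of three document chunks (illustrative only). */
        const double chunk[CHUNKS][DIM] = {
            {0.9, 0.1, 0.0, 0.2},
            {0.1, 0.8, 0.3, 0.0},
            {0.0, 0.2, 0.9, 0.4},
        };
        const double query[DIM] = {0.85, 0.15, 0.05, 0.1};  /* made-up query embedding */

        int best = 0;
        double best_sim = -1.0;
        for (int i = 0; i < CHUNKS; i++) {   /* linear scan over all stored chunks */
            double sim = cosine(query, chunk[i]);
            if (sim > best_sim) { best_sim = sim; best = i; }
        }
        printf("most relevant chunk: %d (cosine similarity %.3f)\n", best, best_sim);
        return 0;
    }

In a retrieval-augmented generation pipeline, the text of the best-matching chunks would then be passed to the language model as context for answering the query.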
Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation; for example, optimizing a large language model's generation parameters is often performed by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automate hyperparameter optimization.
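The evaluation function here is the standard cosine similarity between the candidate embedding $c$ and the reference embedding $r$ (the general definition, not a formula specific to any cited work):
$$\operatorname{sim}(c, r) = \frac{c \cdot r}{\lVert c \rVert \, \lVert r \rVert},$$
which ranges from $-1$ to $1$ and is largest when the two embeddings point in the same direction.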
A way of testing sentence encodings is to apply them to the Sentences Involving Compositional Knowledge (SICK) corpus[12] for both entailment (SICK-E) and relatedness (SICK-R).
In [13] the best results are obtained using a BiLSTM network trained on the Stanford Natural Language Inference (SNLI) Corpus. The Pearson correlation coefficient for SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over previous scores is presented in [14]: SICK-R: 0.888 and SICK-E: 87.8, using a concatenation of bidirectional gated recurrent units.
|
https://en.wikipedia.org/wiki/Sentence_embedding
|
Meiko Scientific Ltd. was a British supercomputer company based in Bristol, founded by members of the design team working on the Inmos transputer microprocessor.
In 1985, when Inmos management suggested the release of the transputer be delayed, Miles Chesney, David Alden, Eric Barton, Roy Bottomley, James Cownie, and Gerry Talbot resigned and formed Meiko (Japanese for "well-engineered") to start work on massively parallel machines based on the processor. Nine weeks later in July 1985, they demonstrated a transputer system based on experimental 16-bit transputers at SIGGRAPH in San Francisco.
In 1986, a system based on 32-bit T414 transputers was launched as the Meiko Computing Surface. By 1990, Meiko had sold more than 300 systems and grown to 125 employees. In 1993, Meiko launched the second-generation Meiko CS-2 system, but the company ran into financial difficulties in the mid-1990s. The technical team and technology were transferred to a joint venture company named Quadrics Supercomputers World Ltd. (QSW), formed by Alenia Spazio of Italy in mid-1996. At Quadrics, the CS-2 interconnect technology was developed into QsNet.
As of 2021[update], a vestigial Meiko website still exists.[1]
The Meiko Computing Surface (sometimes retrospectively referred to as the CS-1) was a massively parallel supercomputer. The system was based on the Inmos transputer microprocessor, later also using SPARC and Intel i860 processors.[2][3]
The Computing Surface architecture comprised multiple boards containing transputers connected together by their communications links via Meiko-designed link switch chips. A variety of different boards were produced with different transputer variants,random-access memory(RAM) capacities and peripherals.
The initial software environment provided for the Computing Surface was the Occam Programming System (OPS), Meiko's version of Inmos's D700 Transputer Development System. This was soon superseded by a multi-user version, MultiOPS. Later, Meiko introduced Meiko Multiple Virtual Computing Surfaces (M²VCS), a multi-user resource management system that let the processors of a Computing Surface be partitioned into several domains of different sizes. These domains were allocated by M²VCS to individual users, thus allowing several simultaneous users access to their own virtual Computing Surfaces. M²VCS was used in conjunction with either OPS or MeikOS, a Unix-like single-processor operating system.
In 1988, Meiko launched the In-Sun Computing Surface, which repackaged the Computing Surface intoVMEbusboards (designated the MK200 series) suitable for installation in largerSun-3orSun-4systems. The Sun acted asfront-endhost system for managing the transputers, running development tools and providing mass storage. A version of M²VCS running as aSunOSdaemonnamedSun Virtual Computing Surfaces(SVCS) provided access between the transputer network and the Sun host.
As the performance of the transputer became less competitive toward the end of the 1980s (the follow-on T9000 transputer being beset with delays), Meiko added the ability to supplement the transputers with Intel i860 processors. Each i860 board (MK086 or MK096) contained two i860s with up to 32 MB of RAM each, and two T800s providing inter-processor communication. Sometimes known as the Concerto or simply the i860 Computing Surface, these systems had limited success.
Meiko also produced a SPARC processor board, the MK083, which allowed the integration of theSunOSoperating system into the Computing Surface architecture, similarly to the In-Sun Computing Surface. These were usually used as front-end host processors for transputer or i860 Computing Surfaces. SVCS, or an improved version, called simplyVCSwas used to manage the transputer resources. Computing Surface configurations with multiple MK083 boards were also possible.
A major drawback of the Computing Surface architecture was poorI/Obandwidthfor general data shuffling. Although aggregate bandwidth for special case data shuffling could be very high, the general case has very poor performance relative to the compute bandwidth. This made the Meiko Computing Surface uneconomic for many applications.
MeikOS (also written asMeikosorMEiKOS) is aUnix-liketransputeroperating systemdeveloped for the Computing Surface during the late 1980s.
MeikOS was derived from an early version ofMinix, extensively modified for the Computing Surface architecture. UnlikeHeliOS, another Unix-like transputer operating system, MeikOS is essentially a single-processor operating system with a distributedfile system. MeikOS was intended for use with theMeiko Multiple Virtual Computing Surfaces(M²VCS) resource management software, which partitions the processors of a Computing Surface intodomains, manages user access to these domains, and provides inter-domain communication.
MeikOS hasdisklessandfileservervariants, the former running on the seat processor of an M²VCS domain, providing acommand lineuser interface for a given user; the latter running on processors with attachedSCSIhard disks, providing a remote file service (namedSurface File System(SFS)) to instances of diskless MeikOS. The two can communicate via M²VCS.
MeikOS was made obsolete by the introduction of the In-Sun Computing Surface and the Meiko MK083 SPARC processor board, which allowed SunOS and Sun Virtual Computing Surfaces (SVCS), later developed as VCS, to take over the roles of MeikOS and M²VCS respectively. The last MeikOS release was MeikOS 3.06, in early 1991.
This was based on the transputer link protocol. Meiko developed its own switch silicon on a European Silicon Systems (ES2) gate array. This application-specific integrated circuit (ASIC) provided static connectivity and limited dynamic connectivity, and was designed by Moray McLaren.
The CS-2[4][5][6]was launched in 1993 and was Meiko's second-generation system architecture, superseding the earlier Computing Surface.
The CS-2 was an all-new modular architecture based aroundSuperSPARCorhyperSPARCprocessors[7]and, optionally,FujitsuμVPvector processors.[8]These implemented an instruction set similar to theFujitsu VP2000vector supercomputer and had a nominal performance of 200megaflopsondouble precisionarithmetic and double that onsingle precision. The SuperSPARC processors ran at 40 MHz initially, later increased to 50 MHz. Subsequently, hyperSPARC processors were introduced at 66, 90 or 100 MHz. The CS-2 was intended to scale up to 1024 processors. The largest CS-2 system built was a 224-processor system[9]installed atLawrence Livermore National Laboratory.
The CS-2 ran a customized version of Sun's operating systemSolaris, initially Solaris 2.1, later 2.3 and 2.5.1.
The processors in a CS-2 were connected by a Meiko-designed multi-stage packet-switchedfat treenetwork implemented in custom silicon.[10][11][12]
This project, codenamed Elan-Elite, was started in 1990, as a speculative project to compete with the T9000TransputerfromInmos, which Meiko intended to use as an interconnect technology. TheT9000began to suffer massive delays, such that the internal project became the only viable interconnect choice for the CS-2.
This interconnect comprised two devices, code-namedElan(adapter) andElite(switch). Each processing element included an Elan chip, a communications co-processor based on theSPARCarchitecture, accessed via aSun MBuscache coherentinterface and providing two 50 MB/s bi-directional links. The Elite chip was an 8-way linkcrossbar switch, used to form thepacket-switched network. The switch had limited adaption based on load and priority.[13]
Both ASICs were fabbed in complementary metal–oxide–semiconductor (CMOS) gate arrays byGEC Plesseyin theirRoborough,Plymouthsemi-conductor fab in 1993.
After the Meiko technology was acquired byQuadrics, the Elan/Elite interconnect technology was developed intoQsNet.
Meiko had hired Fred (Mark) Homewood and Moray McLaren both of whom had been instrumental in the design of theT800. Together, they designed and developed an improved, higher performanceFPUcore, owned by Meiko. This was initially targeted at theIntel80387instruction set. An ongoing legal battle between Intel,AMDand others over the 80387 made it clear this project was a commercial non-starter. A chance discussion between McLaren andAndy Bechtolsheimwhile visitingSun Microsystemsto discuss licensingSolariscaused Meiko to re-target the design forSPARC. Meiko was able to turn around the coreFPUdesign in a short time andLSI Logicfabbed a device for theSPARCstation 1.
A major difference over the T800 FPU was that it fully implemented the IEEE 754 standard for computer arithmetic. This included all rounding modes, denormalised numbers and square root in hardware, without taking any hardware exceptions to complete computation.
ASPARCstation 2design was also developed together with a combined part targeting the SPARCstation 2 ASIC pinout. LSI fabbed and manufactured the separate FPU L64814, as part of their SparKIT chipset.[14]
The Meiko design was eventually fully licensed to Sun which went on to use it in theMicroSPARCfamily of ASICs for several generations[15]in return for a one-off payment and full Solaris source license.
|
https://en.wikipedia.org/wiki/Meiko_Computing_Surface
|
XLDB (eXtremely Large DataBases) was a yearly conference about databases, data management and analytics held from 2007 to 2019. The definition of extremely large refers to data sets that are too big in terms of volume (too much), and/or velocity (too fast), and/or variety (too many places, too many formats) to be handled using conventional solutions. This conference dealt with the high end of very large databases (VLDB). It was conceived and chaired by Jacek Becla.
In October 2007, data experts gathered at SLAC National Accelerator Lab for the First Workshop on Extremely Large Databases. As a result, the XLDB research community was formed to meet the rapidly growing demands of the largest data systems. In addition to the original invitational workshop, an open conference, tutorials, and annual satellite events on different continents were added. The main event, held annually at Stanford University, gathered over 300 attendees. XLDB was one of the data systems events catering to both academic and industry communities. For 2009, the workshop was co-located with VLDB 2009 in France to reach out to non-US research communities.[1] XLDB 2019 followed Stanford's Conference on Systems and Machine Learning (SysML).[2]
The main goals of this community include:[3]
As of 2013, the community consisted of over one thousand members including:
The community met annually atStanford Universitythrough 2019. Occasional satellite events were held inAsiaandEurope.
A detailed report or videos were produced after each workshop.
XLDB events led to initiating an effort to build a new open source, science database calledSciDB.[4]
The XLDB organizers started defining ascience benchmarkfor scientific data management systems called SS-DB.
At XLDB 2012 the XLDB organizers announced that two major databases that support arrays as first-class objects (MonetDB SciQL and SciDB) have formed a working group in conjunction with XLDB. This working group is proposing a common syntax (provisionally named "ArrayQL") for manipulating arrays, including array creation and query.
|
https://en.wikipedia.org/wiki/XLDB
|
Sphere packing in a cylinder is a three-dimensional packing problem with the objective of packing a given number of identical spheres inside a cylinder of specified diameter and length. For cylinders with diameters on the same order of magnitude as the spheres, such packings result in what are called columnar structures.
These problems are studied extensively in the context of biology, nanoscience, materials science, and so forth due to the analogous assembly of small particles (like cells and atoms) into cylindrical crystalline structures.
The book "Columnar Structures of Spheres: Fundamentals and Applications"[1]serves as a notable contributions to this field of study. Authored by Winkelmann and Chan, the book reviews theoretical foundations and practical applications of densely packed spheres within cylindrical confinements.
Columnar structures appear in various research fields on a broad range of length scales from metres down to the nanoscale. On the largest scale, such structures can be found inbotanywhere seeds of a plant assemble around the stem. On a smaller scale bubbles of equal size crystallise to columnarfoamstructures when confined in a glass tube. Innanosciencesuch structures can be found in man-made objects which are on length scales from a micron to the nanoscale.
Columnar structures were first studied in botany due to their diverse appearances in plants.[2]D'Arcy Thompsonanalysed such arrangement of plant parts around the stem in his book "On Growth and Form" (1917). But they are also of interest in other biological areas, including bacteria,[3]viruses,[4]microtubules,[5]and thenotochordof thezebra fish.[6]
One of the largest flowers where the berries arrange in a regular cylindrical form is thetitan arum. This flower can be up to 3m in height and is natively solely found in western Sumatra and western Java.
On smaller length scales, the berries of the Arum maculatum form a columnar structure in autumn. Its berries are similar to those of the corpse flower, since the titan arum is its larger relative. However, the cuckoo-pint is much smaller in height (height ≈ 20 cm). The berry arrangement varies with the ratio of stem to berry size.
Another plant that can be found in many gardens of residential areas is theAustralian bottlebrush. It assembles its seed capsules around a branch of the plant. The structure depends on the seed capsule size to branch size.
A further occurrence of ordered columnar arrangement on the macroscale arefoamstructures confined inside a glass tube. They can be realised experimentally with equal-sized soap bubbles inside a glass tube, produced by blowing air of constant gas flow through a needle dipped in a surfactant solution.[7]By putting the resulting foam column under forced drainage (feeding it with surfactant solution from the top), the foam can be adjusted to either a dry (bubbles shaped aspolyhedrons) or wet (spherical bubbles) structure.[8]
Due to this simple experimental set-up, many columnar structures have been discovered and investigated in the context of foams with experiments as well as simulation. Many simulations have been carried out using theSurface Evolverto investigate dry structure or thehard sphere modelfor the wet limit where the bubbles are spherical.
In the zigzag structure the bubbles are stacked on top of each other in a continuous w-shape. For this particular structure a moving interface with increasing liquid fraction was reported by Hutzleret al.in 1997.[9]This included an unexpected 180° twist interface, whose explanation is still lacking.
The first experimental observation of aline-slip structurewas discovered by Winkelmannet al.in a system of bubbles.[10]
Further discovered structures include complex structures with internal spheres/foam cells. Some dry foam structures with interior cells were found to consist of a chain of pentagonaldodecahedraorKelvin cellsin the centre of the tube.[11]For many more arrangements of this type, it was observed that the outside bubble layer is ordered, with each internal layer resembling a different, simpler columnar structure by usingX-ray tomography.[7]
Columnar structures have also been studied intensively in the context ofnanotubes. Their physical or chemical properties can be altered by trapping identical particles inside them.[12][13][14]These are usually done by self-assembling fullerenes such asC60, C70, or C78 into carbon nanotubes,[12]but also boron nitride nanotubes[15]
Such structures also assemble when particles are coated on the surface of a spherocylinder as in the context of pharmaceutical research. Lazároet al.examined the morphologies of virus capsid proteins self-assembled around metal nanorods.[16]Drug particles were coated as densely as possible on a spherocylinder to provide the best medical treatment.
Wuet al.built rods of the size of several microns. These microrods are created by densely packing silica colloidal particles inside cylindrical pores. By solidifying the assembled structures the microrods were imaged and examined using scanning electron microscopy (SEM).[17]
Columnar arrangements are also investigated as a possible candidate for optical metamaterials (i.e. materials with a negative refractive index), which find applications in super lenses[18] or optical cloaking.[19] Tanjeem et al. are constructing such a resonator by self-assembling nanospheres on the surface of a cylinder.[20][21] The nanospheres are suspended in an SDS solution together with a cylinder of diameter $D$, much larger than the diameter of the nanospheres $d$ ($D/d \approx 3$ to $5$). The nanospheres then stick to the surface of the cylinder by a depletion force.
The most common way of classifying ordered columnar structures uses the phyllotactic notation, adopted from botany. It is used to describe arrangements of leaves of a plant, pine cones, or pineapples, but also planar patterns of florets in a sunflower head. While the arrangement in the former is cylindrical, the spirals in the latter are arranged on a disk. For columnar structures, the cylindrical form of phyllotaxis is adopted.
The phyllotactic notation describes such structures by a triplet of positive integers $(l = m + n, m, n)$ with $l \geq m \geq n$. Each number $l$, $m$, and $n$ describes a family of spirals in the 3-dimensional packing. They count the number of spirals in each direction until the spiral repeats. This notation, however, only applies to triangular lattices and is therefore restricted to the ordered structures without internal spheres.
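As an illustrative reading of the notation (the particular triplet is chosen here as an example, not quoted from the sources), a triplet such as $(l, m, n) = (5, 3, 2)$ satisfies $l = m + n$ and denotes a packing with five spirals in one direction of the triangular lattice, three in a second direction, and two in the third.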
Ordered columnar structures without internal spheres are categorised into two separate classes: uniform and line-slip structures. For each structure that can be identified with the triplet $(l, m, n)$, there exists a uniform structure and at least one line slip.
A uniform structure is identified by each sphere having the same number of contacting neighbours.[22][1]This gives each sphere an identical neighbourhood. In the example image on the side each sphere has six neighbouring contacts.
The number of contacts is best visualised in the rolled-out contact network. It is created by rolling out the contact network into a plane of height $z$ and azimuthal angle $\theta$ of each sphere. For a uniform structure such as the one in the example image, this leads to a regular hexagonal lattice. Each dot in this pattern represents a sphere of the packing and each line a contact between adjacent spheres.
For all uniform structures above a diameter ratio of $D/d > 2.0$, the regular hexagonal lattice is the characterising feature, since this lattice type has the maximum number of contacts.[22][1] For different uniform structures $(l, m, n)$ the rolled-out contact pattern only varies by a rotation in the $z$-$\theta$ plane. Each uniform structure is thus distinguished by its periodicity vector $V$, which is defined by the phyllotactic triplet $(l, m, n)$.
For each uniform structure, there also exists a related but different structure, called a line-slip arrangement.[22][1]
The differences between uniform and line-slip structures are marginal and difficult to spot from images of the sphere packings. However, by comparing their rolled-out contact networks, one can spot that certain lines (which represent contacts) are missing.
All spheres in a uniform structure have the same number of contacts, but the number of contacts for spheres in a line slip may differ from sphere to sphere. For the example line slip in the image on the right side, some spheres count five and others six contacts. Thus a line slip structure is characterised by these gaps or loss of contacts.
Such a structure is termed line slip because the losses of contacts occur along a line in the rolled-out contact network. It was first identified by Picketet al., but not termed line slip.[23]
The direction in which the loss of contacts occurs can be denoted in the phyllotactic notation $(l, m, n)$, since each number represents one of the lattice vectors in the hexagonal lattice.[22][1] This is usually indicated by a bold number.
By shearing the row of spheres below the loss of contact against a row above the loss of contact, one can regenerate two uniform structures related to this line slip. Thus, each line slip is related to two adjacent uniform structures, one at a higher and one at a lower diameter ratio $D/d$.[22][1][24]
Winkelmannet al.were the first to experimentally realise such a structure using soap bubbles in a system of deformable spheres.[10]
Columnar structures arise naturally in the context of dense hard sphere packings inside a cylinder. Mughal et al. studied such packings using simulated annealing up to the diameter ratio of $D/d = 2.873$ for cylinder diameter $D$ to sphere diameter $d$.[24] This includes some structures with internal spheres that are not in contact with the cylinder wall.
They calculated the packing fraction for all these structures as a function of the diameter ratio. At the peaks of this curve lie the uniform structures. In between these discrete diameter ratios are the line slips at a lower packing density. Their packing fraction is significantly smaller than that of an unconfined lattice packing such as fcc, bcc, or hcp due to the free volume left by the cylindrical confinement.
The rich variety of such ordered structures can also be obtained by sequentially depositing spheres into the cylinder.[25] Chan reproduced all dense sphere packings up to $D/d < 2.7013$ using an algorithm in which the spheres are sequentially dropped into the cylinder.
Mughalet al.also discovered that such structures can be related to disk packings on a surface of a cylinder.[24]The contact network of both packings are identical. For both packing types, it was found that different uniform structures are connected with each other by line slips.[24]
Fu et al. extended this work to higher diameter ratios $D/d < 4.0$ using linear programming and discovered 17 new dense structures with internal spheres that are not in contact with the cylinder wall.[26]
A similar variety of dense crystalline structures have also been discovered for columnar packings ofspheroidsthroughMonte Carlo simulations.[27]Such packings include achiral structures with specific spheroid orientations and chiral helical structures with rotating spheroid orientations.
A further dynamic method to assemble such structures was introduced by Leeet al.[28]Here, polymeric beads are placed together with a fluid of higher density inside a rotatinglathe.
When the lathe is static, the beads float on top of the liquid. With increasing rotational speed, the centripetal force then pushes the fluid outwards and the beads toward the central axis. Hence, the beads are essentially confined by a potential given by the rotational energy
$$E_{\text{rot}} = \frac{1}{2} m R^{2} \omega^{2},$$
where $m$ is the mass of the beads, $R$ the distance from the central axis, and $\omega$ the rotational speed. Due to the $R^{2}$ proportionality, the confining potential resembles that of a cylindrical harmonic oscillator.
Depending on number of spheres and rotational speed, a variety of ordered structures that are comparable to the dense sphere packings were discovered.
A comprehensive theory to this experiment was developed by Winkelmannet al.[29]It is based on analytic energy calculations using a generic sphere model and predictsperitectoidstructure transitions.
|
https://en.wikipedia.org/wiki/Cylinder_sphere_packing
|
In most computer programming languages, a while loop is a control flow statement that allows code to be executed repeatedly based on a given Boolean condition. The while loop can be thought of as a repeating if statement.
The while construct consists of a block of code and a condition/expression.[1] The condition/expression is evaluated, and if the condition/expression is true,[1] the code within the block is executed. This repeats until the condition/expression becomes false. Because the while loop checks the condition/expression before the block is executed, the control structure is often also known as a pre-test loop. Compare this with the do while loop, which tests the condition/expression after the loop has executed.
For example, in the languages C, Java, C#,[2] Objective-C, and C++ (which use the same syntax in this case), the code fragment
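(The listing itself is not preserved in this text; the following minimal C fragment, which assumes stdio.h has been included, is consistent with the description that follows.)

    int x = 0;

    while (x < 5) {
        printf("x = %d\n", x);   /* runs once per iteration while x < 5 */
        x++;                     /* increment x by 1                    */
    }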
first checks whether x is less than 5, which it is, so then the {loop body} is entered, where the printf function is run and x is incremented by 1. After completing all the statements in the loop body, the condition, (x < 5), is checked again, and the loop is executed again, this process repeating until the variable x has the value 5.
It is possible, and in some cases desirable, for the condition to always evaluate to true, creating an infinite loop. When such a loop is created intentionally, there is usually another control structure (such as a break statement) that controls termination of the loop.
For example:
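(The original example is likewise missing; the C sketch below, which assumes stdio.h and simply echoes input until end-of-file, shows an intentionally infinite loop terminated by a break statement.)

    while (1) {               /* condition is always true */
        int c = getchar();
        if (c == EOF) {
            break;            /* break, not the loop condition, ends the loop */
        }
        putchar(c);
    }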
These while loops will calculate the factorial of the number 5:
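(The original listings are not preserved; a C version consistent with the surrounding text might read as follows.)

    #include <stdio.h>

    int main(void) {
        int counter = 5;
        long factorial = 1;
        while (counter > 1) {
            factorial *= counter;    /* accumulate the product 5 * 4 * 3 * 2 */
            counter--;
        }
        printf("%ld\n", factorial);  /* prints 120, i.e. 5! */
        return 0;
    }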
or simply
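(One plausible condensed form, folding the decrement into the multiplication, is shown below; it is an assumption about the missing original, not a reconstruction of it.)

    #include <stdio.h>

    int main(void) {
        int counter = 5;
        long factorial = 1;
        while (counter > 0)
            factorial *= counter--;  /* multiply, then decrement */
        printf("%ld\n", factorial);  /* prints 120 */
        return 0;
    }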
Go has no while statement; however, a for statement with some of its elements omitted serves the same purpose.
The code for the loop is the same for Java, C# and D:
Non-terminating while loop:
Pascal has two forms of the while loop, while and repeat. While repeats one statement (unless enclosed in a begin-end block) as long as the condition is true. The repeat statement repetitively executes a block of one or more statements until the condition given in its until clause becomes true, i.e. it continues repeating as long as the condition is false. The main difference between the two is that the while loop may execute zero times if the condition is initially false, whereas the repeat-until loop always executes at least once.
While loops are frequently used for reading data line by line (as defined by the $/ line separator) from open filehandles:
Non-terminating while loop:
In Racket, as in otherSchemeimplementations, anamed-letis a popular way to implement loops:
Using a macro system, implementing awhileloop is a trivial exercise (commonly used to introduce macros):
However, an imperative programming style is often discouraged in Scheme and Racket.
Contrary to other languages, in Smalltalk a while loop is not a language construct but defined in the class BlockClosure as a method with one parameter, the body as a closure, using self as the condition.
Smalltalk also has a corresponding whileFalse: method.
While[3] is a simple programming language constructed from assignments, sequential composition, conditionals, and while statements, used in the theoretical analysis of imperative programming language semantics.[4][5]
|
https://en.wikipedia.org/wiki/While_loop
|
The term "computer", in use from the early 17th century (the first known written reference dates from 1613),[1]meant "one who computes": a person performing mathematicalcalculations, beforeelectronic calculatorsbecame available.Alan Turingdescribed the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."[2]Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results.
Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers ofmental arithmetic, also known asmental calculators.
AstronomersinRenaissancetimes used that term about as often as they called themselves "mathematicians" for their principal work of calculating thepositions of planets. They often hired a "computer" to assist them. For some people, such asJohannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements. Before he died in 1617,John Napiersuggested ways by which "the learned, who perchance may have plenty of pupils and computers" might construct an improvedlogarithm table.[3]: p.46
Computing became more organized when the FrenchmanAlexis Claude Clairaut(1713–1765) divided the computation to determine the time of the return ofHalley's Cometwith two colleagues,Joseph LalandeandNicole-Reine Lepaute.[4]Human computers continued plotting the future movements of astronomical objects to create celestial tables foralmanacsin the late 1760s.[5]
The computers working on theNautical Almanacfor the British Admiralty includedWilliam Wales,Israel LyonsandRichard Dunthorne.[6]The project was overseen byNevil Maskelyne.[7]Maskelyne would borrow tables from other sources as often as he could in order to reduce the number of calculations his team of computers had to make.[8]Women were generally excluded, with some exceptions such asMary Edwardswho worked from the 1780s to 1815 as one of thirty-five computers for the BritishNautical Almanacused for navigation at sea. The United States also worked on their own version of a nautical almanac in the 1840s, withMaria Mitchellbeing one of the best-known computers on the staff.[9]
Other innovations in human computing included the work done by a group of boys who worked in the Octagon Room of theRoyal Greenwich Observatoryfor Astronomer RoyalGeorge Airy.[10]Airy's computers, hired after 1835, could be as young as fifteen, and they were working on a backlog of astronomical data.[11]The way that Airy organized the Octagon Room with a manager, pre-printed computing forms, and standardized methods of calculating and checking results (similar to the way theNautical Almanaccomputers operated) would remain a standard for computing operations for the next 80 years.[12]
Women were increasingly involved in computing after 1865.[13]Private companies hired them for computing and to manage office staff.[13]
In the 1870s, the United StatesSignal Corpscreated a new way of organizing human computing to track weather patterns.[14]This built on previous work from theUS Navyand theSmithsonian meteorological project.[15]The Signal Corps used a small computing staff that processed data that had to be collected quickly and finished in "intensive two-hour shifts".[16]Each individual human computer was responsible for only part of the data.[14]
In the late nineteenth centuryEdward Charles Pickeringorganized the "Harvard Computers".[17]The first woman to approach them,Anna Winlock, askedHarvard Observatoryfor a computing job in 1875.[18]By 1880, all of the computers working at the Harvard Observatory were women.[18]The standard computer pay started at twenty-five cents an hour.[19]There would be such a huge demand to work there, that some women offered to work for the Harvard Computers for free.[20]Many of the women astronomers from this era were computers with possibly the best-known beingFlorence Cushman,Henrietta Swan Leavitt, andAnnie Jump Cannon, who worked with Pickering from 1888, 1893, and 1896 respectively. Cannon could classify stars at a rate of three per minute.[21]Mina Fleming, one of the Harvard Computers, publishedThe Draper Catalogue of Stellar Spectrain 1890.[22]The catalogue organized stars byspectral lines.[22]The catalogue continued to be expanded by the Harvard Computers and added new stars in successive volumes.[23]Elizabeth Williamswas involved in calculations in the search for a new planet,Pluto, at theLowell Observatory.
In 1893,Francis Galtoncreated the Committee for Conducting Statistical Inquiries into the Measurable Characteristics of Plants and Animals which reported to theRoyal Society.[24]The committee used advanced techniques for scientific research and supported the work of several scientists.[24]W.F. Raphael Weldon, the first scientist supported by the committee worked with his wife, Florence Tebb Weldon, who was his computer.[24]Weldon used logarithms and mathematical tables created byAugust Leopold Crelleand had no calculating machine.[25]Karl Pearson, who had a lab at theUniversity of London, felt that the work Weldon did was "hampered by the committee".[26]However, Pearson did create a mathematical formula that the committee was able to use for data correlation.[27]Pearson brought his correlation formula to his own Biometrics Laboratory.[27]Pearson had volunteer and salaried computers who were both men and women.[28]Alice Leewas one of his salaried computers who worked withhistogramsand thechi-squaredstatistics.[29]Pearson also worked withBeatriceandFrances Cave-Brown-Cave.[29]Pearson's lab, by 1906, had mastered the art ofmathematical tablemaking.[29]
Human computers were used to compile 18th and 19th century Western Europeanmathematical tables, for example those fortrigonometryandlogarithms. Although these tables were most often known by the names of the principalmathematicianinvolved in the project, such tables were often in fact the work of an army of unknown and unsung computers. Ever more accurate tables to a high degree of precision were needed for navigation and engineering. Approaches differed, but one was to break up the project into a form ofpiece workcompleted at home. The computers, often educated middle-class women whom society deemed it unseemly to engage in the professions or go out to work, would receive and send back packets of calculations by post.[30]The Royal Astronomical Society eventually gave space to a new committee, the Mathematical Tables Committee, which was the only professional organization for human computers in 1925.[31]
Human computers were used to predict the effects of building theAfsluitdijkbetween 1927 and 1932 in theZuiderzeein theNetherlands. The computer simulation was set up byHendrik Lorentz.[32]
A visionary application to meteorology can be found in the scientific work ofLewis Fry Richardsonwho, in 1922, estimated that 64,000 humans could forecast the weather for the whole globe by solving the attending differentialprimitive equationsnumerically.[33]Around 1910 he had already used human computers to calculate the stresses inside a masonry dam.[34]
It was not untilWorld War Ithat computing became a profession. "The First World War required large numbers of human computers. Computers on both sides of the war produced map grids, surveying aids, navigation tables and artillery tables. With the men at war, most of these new computers were women and many were college educated."[35]This would happen again duringWorld War II, as more men joined the fight, college educated women were left to fill their positions. One of the first female computers, Elizabeth Webb Wilson, was hired by the Army in 1918 and was a graduate ofGeorge Washington University. Wilson "patiently sought a war job that would make use of her mathematical skill. In later years, she would claim that the war spared her from the 'Washington social whirl', the rounds of society events that should have procured for her a husband"[35]and instead she was able to have a career. After the war, Wilson continued with a career in mathematics and became anactuaryand turned her focus tolife tables.
Human computers played integral roles in the World War II war effort in the United States, and because of the depletion of the male labor force due to thedraft, many computers during World War II were women, frequently with degrees in mathematics. In the 1940s, women were hired to examine nuclear and particle tracks left on photographic emulsions.[36]In theManhattan Project, human computers working with a variety of mechanical aids assisted numerical studies of the complex formulas related tonuclear fission.[37]
Human computers were involved in calculating ballistics tables during World War I.[38]Between the two world wars, computers were used in the Department of Agriculture in the United States and also atIowa State College.[39]The human computers in these places also used calculating machines and early electrical computers to aid in their work.[40]In the 1930s, The Columbia University Statistical Bureau was created byBenjamin Wood.[41]Organized computing was also established atIndiana University, theCowles Commissionand theNational Research Council.[42]
Following World War II, theNational Advisory Committee for Aeronautics(NACA) used human computers in flight research to transcribe raw data from celluloid film andoscillographpaper and then, usingslide rulesand electriccalculators, reduced the data to standard engineering units.Margot Lee Shetterly's biographical book,Hidden Figures(made into amovie of the same namein 2016), depicts African-American women who served as human computers atNASAin support of theFriendship 7, the first American crewed mission into Earth orbit.[43]NACA had begun hiring black women as computers from 1940.[44]One such computer wasDorothy Vaughanwho began her work in 1943 with theLangley Research Centeras a special hire to aid the war effort,[45]and who came to supervise theWest Area Computers, a group of African-American women who worked as computers at Langley. Human computing was, at the time, considered menial work. On November 8, 2019, theCongressional Gold Medalwas awarded "In recognition of all the women who served as computers, mathematicians, and engineers at the National Advisory Committee for Aeronautics and the National Aeronautics and Space Administration (NASA) between the 1930s and the 1970s."[46]
As electrical computers became more available, human computers, especially women, were drafted as some of the firstcomputer programmers.[47]Because the six people responsible for setting up problems on theENIAC(the first general-purpose electronic digital computer built at theUniversity of Pennsylvaniaduring World War II) were drafted from a corps of human computers, the world's first professional computer programmers were women, namely:Kay McNulty,Betty Snyder,Marlyn Wescoff,Ruth Lichterman,Betty Jean Jennings, andFran Bilas.[48]
The term "human computer" has been recently used by a group of researchers who refer to their work as "human computation".[49]In this usage, "human computer" refers to activities of humans in the context ofhuman-based computation(HBC).
This use of "human computer" is debatable for the following reason: HBC is a computational technique where a machine outsources certain parts of a task to humans to perform, which are not necessarily algorithmic. In fact, in the context of HBC most of the time humans are not provided with a sequence of exact steps to be executed to yield the desired result; HBC is agnostic about how humans solve the problem. This is why "outsourcing" is the term used in the definition above. The use of humans in the historical role of "human computers" forHBCis very rare.
|
https://en.wikipedia.org/wiki/Computer_(occupation)
|
Set partitioning in hierarchical trees (SPIHT)[1] is an image compression algorithm that exploits the inherent similarities across the subbands in a wavelet decomposition of an image. The algorithm was developed by Brazilian engineer Amir Said with William A. Pearlman in 1996.[1]
The algorithmcodesthe most importantwavelet transformcoefficientsfirst, and transmits the bits so that an increasingly refined copy of the original image can be obtained progressively.
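SPIHT's set-partitioning data structures are beyond a short example, but the progressive, most-significant-bit-first transmission it relies on can be sketched. The following Python snippet (an illustration of generic bit-plane refinement, not of SPIHT itself; the coefficient values are made up) shows how revealing one bit-plane at a time yields an increasingly refined copy of the coefficients:

import numpy as np

def bitplane_refinement(coeffs, num_planes=8):
    """Progressively refine integer coefficients, most significant bit-plane first."""
    coeffs = np.asarray(coeffs)
    signs = np.sign(coeffs)
    mags = np.abs(coeffs).astype(np.uint32)
    approx = np.zeros_like(mags)
    top = int(mags.max()).bit_length() - 1              # highest occupied bit-plane
    for plane in range(top, max(top - num_planes, -1), -1):
        approx |= mags & (1 << plane)                   # reveal one more bit of every coefficient
        yield signs * approx                            # refined reconstruction after this plane

coeffs = [97, -3, 40, 2, -25, 0, 7, 1]
for step, reconstruction in enumerate(bitplane_refinement(coeffs)):
    print(f"after plane {step}: {reconstruction.tolist()}")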
|
https://en.wikipedia.org/wiki/Set_partitioning_in_hierarchical_trees
|
Ibis redibis nunquam per bella peribis(alternativelyIbis redibis nunquam in bello morieris) is aLatinphrase, often used to illustrate the meaning ofsyntactic ambiguityto students of either Latin orlinguistics. Traditionally, it is attributed to theoraclesofDodona. The phrase is thought to have been uttered by a general consulting the oracle about his fate in an upcoming battle. The sentence is crafted in a way that, withoutpunctuation, it can be interpreted in two significantly different ways.[1][2]
Meaning "you will go, you will return, never in war will you perish". The other possibility is the exact opposite in meaning:
That is: "you will go, you will return never, in war you will perish".
A Greek parallel expression with the same meaning is also current: ἤξεις ἀφήξεις, οὐ θνήξεις ἐν πολέμῳ("Íxeis afíxeies, ou thníxeis en polémo"). While Greek authorities have in the past assumed this was the originalDodonaoracle (e.g. first edition of Babiniotis dictionary), no ancient instance of the expression is attested, and a future form corresponding to the rhyming θνήξεις,thníxeis(instead of the classical θανεῖ,thaneí), is first attested from the reign of Nero (Greek Anthology9.354). The Greek expression is therefore probably a modern back-translation from the Latin.[3]
To say that something is anibis redibis, usually in the context of legal documents, is to say that its wording is (either deliberately or accidentally) confusing or ambiguous.
|
https://en.wikipedia.org/wiki/Ibis_redibis_nunquam_per_bella_peribis
|
In statistics and machine learning, when one wants to infer a random variable from a set of other variables, a subset of that set is usually sufficient, and the remaining variables carry no additional useful information. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that it cannot drop any variable without losing information, it is called a Markov boundary. Identifying a Markov blanket or a Markov boundary helps to extract useful features. The terms Markov blanket and Markov boundary were coined by Judea Pearl in 1988.[1]A Markov blanket can be constituted by a set of Markov chains.
A Markov blanket of a random variable $Y$ in a set of random variables $\mathcal{S}=\{X_{1},\ldots ,X_{n}\}$ is any subset $\mathcal{S}_{1}$ of $\mathcal{S}$, conditioned on which the other variables are independent of $Y$:

$$Y \perp\!\!\!\perp \mathcal{S}\setminus \mathcal{S}_{1} \mid \mathcal{S}_{1}.$$

It means that $\mathcal{S}_{1}$ contains at least all the information one needs to infer $Y$; the variables in $\mathcal{S}\setminus \mathcal{S}_{1}$ are redundant.

In general, a given Markov blanket is not unique. Any set in $\mathcal{S}$ that contains a Markov blanket is also a Markov blanket itself. Specifically, $\mathcal{S}$ is a Markov blanket of $Y$ in $\mathcal{S}$.

A Markov boundary of $Y$ in $\mathcal{S}$ is a subset $\mathcal{S}_{2}$ of $\mathcal{S}$ such that $\mathcal{S}_{2}$ itself is a Markov blanket of $Y$, but no proper subset of $\mathcal{S}_{2}$ is a Markov blanket of $Y$. In other words, a Markov boundary is a minimal Markov blanket.

The Markov boundary of a node $A$ in a Bayesian network is the set of nodes composed of $A$'s parents, $A$'s children, and $A$'s children's other parents. In a Markov random field, the Markov boundary for a node is the set of its neighboring nodes. In a dependency network, the Markov boundary for a node is the set of its parents.
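For a Bayesian network specified by its parent sets, the Markov boundary just described (parents, children, and the children's other parents) can be collected mechanically. A minimal Python sketch, with an invented toy network for illustration:

def markov_boundary(node, parents):
    """parents: dict mapping each node to the set of its parents in a
    Bayesian network. Returns the Markov boundary of `node`: its parents,
    its children, and its children's other parents."""
    children = {n for n, ps in parents.items() if node in ps}
    spouses = {p for child in children for p in parents[child]}
    return (set(parents[node]) | children | spouses) - {node}

# Toy network:  A -> Y <- B,  Y -> C <- D
parents = {"A": set(), "B": set(), "D": set(),
           "Y": {"A", "B"}, "C": {"Y", "D"}}
print(markov_boundary("Y", parents))   # {'A', 'B', 'C', 'D'}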
The Markov boundary always exists. Under some mild conditions, the Markov boundary is unique. However, for most practical and theoretical scenarios multiple Markov boundaries may provide alternative solutions.[2]When there are multiple Markov boundaries, quantities measuring causal effect could fail.[3]
|
https://en.wikipedia.org/wiki/Markov_blanket
|
The Logistics Vehicle System (LVS), nicknamed the "Dragon Wagon" by U.S. Marines in reference to the famous M25 tank transporter that carried the same nickname, is a modular assortment of eight-wheel drive all-terrain vehicle unit combinations used by the United States Marine Corps.
The LVS was fielded in 1985 as the Marine Corps heavy tactical vehicle system.[1]It was designed and manufactured by the Oshkosh Corporation. The United States Army does not use the LVS; it uses the Heavy Expanded Mobility Tactical Truck (HEMTT). The key difference between the two is the LVS's ability to interchange Front Power Units with Rear Body Units. The LVS also steers through both standard wheel pivoting (as on a typical automobile) and hydraulic yaw steering (by articulating the Front Power Unit against the Rear Body Unit). This enables the LVS to meet the turning radius requirements of the U.S. Marines. The LVS is rated to haul up to 22.5 tonnes (50,000 lb) on highways.[1]
TheOshkosh Logistic Vehicle System Replacement (LVSR)is the replacement for the LVS and was first fielded in 2009.[1]
The LVS is composed of a Front Power Unit (FPU) coupled to a Rear Body Unit (RBU). The FPU can be driven on its own. A truck is described by the combination of both units; for example, an MK48 FPU attached to an MK18 RBU is called a "48/18". For MK16s, which tow M870 semi-trailers, the type of trailer is added as well, e.g. "48/16/870A2".
|
https://en.wikipedia.org/wiki/Logistics_Vehicle_System
|
Thediffusion equationis aparabolic partial differential equation. In physics, it describes the macroscopic behavior of many micro-particles inBrownian motion, resulting from the random movements and collisions of the particles (seeFick's laws of diffusion). In mathematics, it is related toMarkov processes, such asrandom walks, and applied in many other fields, such asmaterials science,information theory, andbiophysics. The diffusion equation is a special case of theconvection–diffusion equationwhen bulk velocity is zero. It is equivalent to theheat equationunder some circumstances.
The equation is usually written as:

$$\frac{\partial \phi (\mathbf {r} ,t)}{\partial t}=\nabla \cdot {\big [}D(\phi ,\mathbf {r} )\,\nabla \phi (\mathbf {r} ,t){\big ]},$$

where $\phi (\mathbf {r} ,t)$ is the density of the diffusing material at location $\mathbf {r}$ and time $t$, and $D(\phi ,\mathbf {r} )$ is the collective diffusion coefficient for density $\phi$ at location $\mathbf {r}$; $\nabla$ represents the vector differential operator del. If the diffusion coefficient depends on the density then the equation is nonlinear, otherwise it is linear.
The equation above applies when the diffusion coefficient is isotropic; in the case of anisotropic diffusion, $D$ is a symmetric positive definite matrix, and the equation is written (for three-dimensional diffusion) as:

$$\frac{\partial \phi (\mathbf {r} ,t)}{\partial t}=\sum _{i=1}^{3}\sum _{j=1}^{3}\frac{\partial }{\partial x_{i}}\left[D_{ij}(\phi ,\mathbf {r} )\frac{\partial \phi (\mathbf {r} ,t)}{\partial x_{j}}\right]$$

The diffusion equation has numerous analytic solutions.[1]
If $D$ is constant, then the equation reduces to the following linear differential equation:

$$\frac{\partial \phi (\mathbf {r} ,t)}{\partial t}=D\,\nabla ^{2}\phi (\mathbf {r} ,t),$$

which is identical to the heat equation.
Theparticle diffusion equationwas originally derived byAdolf Fickin 1855.[2]
The diffusion equation can be trivially derived from the continuity equation, which states that a change in density in any part of the system is due to inflow and outflow of material into and out of that part of the system. Effectively, no material is created or destroyed:

$$\frac{\partial \phi }{\partial t}+\nabla \cdot \mathbf {j} =0,$$

where $\mathbf {j}$ is the flux of the diffusing material. The diffusion equation can be obtained easily from this when combined with the phenomenological Fick's first law, which states that the flux of the diffusing material in any part of the system is proportional to the local density gradient:

$$\mathbf {j} =-D(\phi ,\mathbf {r} )\,\nabla \phi (\mathbf {r} ,t).$$
If drift must be taken into account, theFokker–Planck equationprovides an appropriate generalization.
The diffusion equation is continuous in both space and time. One may discretize space, time, or both space and time, and all of these cases arise in applications. Discretizing time alone just corresponds to taking time slices of the continuous system, and no new phenomena arise.
In discretizing space alone, theGreen's functionbecomes thediscrete Gaussian kernel, rather than the continuousGaussian kernel. In discretizing both time and space, one obtains therandom walk.
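As an illustration of discretizing both time and space, the explicit forward-time, centred-space (FTCS) update for the one-dimensional constant-$D$ case might look as follows (grid spacing, time step, and the initial condition are arbitrary choices for the sketch):

import numpy as np

def diffuse_ftcs(phi, D, dx, dt, steps):
    """Explicit forward-time, centred-space update for the 1-D diffusion
    equation with constant D; stable when D*dt/dx**2 <= 0.5."""
    phi = phi.astype(float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "time step too large for explicit stability"
    for _ in range(steps):
        lap = np.zeros_like(phi)
        lap[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]   # centred second difference
        phi += r * lap                                     # interior points advance; end values stay fixed
    return phi

phi0 = np.zeros(101)
phi0[50] = 1.0                                             # initial spike of material
print(diffuse_ftcs(phi0, D=1.0, dx=1.0, dt=0.4, steps=200)[45:56].round(4))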
The product rule is used to rewrite the anisotropic tensor diffusion equation in standard discretization schemes, because direct discretization of the diffusion equation with only first-order spatial central differences leads to checkerboard artifacts. The rewritten diffusion equation used in image filtering is:

$$\frac{\partial \phi (\mathbf {r} ,t)}{\partial t}=\nabla \cdot \left[D(\phi ,\mathbf {r} )\right]\nabla \phi (\mathbf {r} ,t)+\operatorname {tr} {\Big [}D(\phi ,\mathbf {r} ){\big (}\nabla \nabla ^{\text{T}}\phi (\mathbf {r} ,t){\big )}{\Big ]}$$

where "tr" denotes the trace of the 2nd rank tensor, superscript "T" denotes transpose, and in image filtering $D(\phi ,\mathbf {r} )$ are symmetric matrices constructed from the eigenvectors of the image structure tensors. The spatial derivatives can then be approximated by two first-order and a second-order central finite differences. The resulting diffusion algorithm can be written as an image convolution with a varying kernel (stencil) of size 3 × 3 in 2D and 3 × 3 × 3 in 3D.
|
https://en.wikipedia.org/wiki/Diffusion_equation
|
Innavigation,dead reckoningis the process of calculating the current position of a moving object by using a previously determined position, orfix, and incorporating estimates of speed, heading (or direction or course), and elapsed time. The corresponding term in biology, to describe the processes by which animals update their estimates of position or heading, ispath integration.
Advances innavigational aidsthat give accurate information on position, in particularsatellite navigationusing theGlobal Positioning System, have made simple dead reckoning by humans obsolete for most purposes. However,inertial navigation systems, which provide very accurate directional information, use dead reckoning and are very widely applied.
Contrary to myth, the term "dead reckoning" was not originally used to abbreviate "deduced reckoning", nor is it a misspelling of the term "ded reckoning". The use of "ded" or "deduced reckoning" is not known to have appeared earlier than 1931, much later in history than "dead reckoning", which appeared as early as 1613 according to theOxford English Dictionary. The original intention of "dead" in the term is generally assumed to mean using a stationary object that is "dead in the water" as a basis for calculations. Additionally, at the time the first appearance of "dead reckoning", "ded" was considered a common spelling of "dead". This potentially led to later confusion of the origin of the term.[1]
By analogy with their navigational use, the wordsdead reckoningare also used to mean the process of estimating the value of any variable quantity by using an earlier value and adding whatever changes have occurred in the meantime. Often, this usage implies that the changes are not known accurately. The earlier value and the changes may be measured or calculated quantities.[citation needed]
While dead reckoning can give the best available information on the present position with little math or analysis, it is subject to significant errors of approximation. For precise positional information, both speed and direction must be accurately known at all times during travel. Most notably, dead reckoning does not account for directional drift during travel through a fluid medium. These errors tend to compound themselves over greater distances, making dead reckoning a difficult method of navigation for longer journeys.
For example, if displacement is measured by the number of rotations of a wheel, any discrepancy between the actual and assumed traveled distance per rotation, due perhaps to slippage or surface irregularities, will be a source of error. As each estimate of position is relative to the previous one, errors arecumulative, or compounding, over time.
The accuracy of dead reckoning can be increased significantly by using other, more reliable methods to get a new fix part way through the journey. For example, if one was navigating on land in poor visibility, then dead reckoning could be used to get close enough to the known position of a landmark to be able to see it, before walking to the landmark itself—giving a precisely known starting point—and then setting off again.
Localizing a static sensor node is not a difficult task, because attaching a Global Positioning System (GPS) device suffices for localization. A mobile sensor node, however, which continuously changes its geographical location with time, is difficult to localize. Mobile sensor nodes are mostly used for data collection within some particular domain, e.g. a sensor node attached to an animal within a grazing field or attached to a soldier on a battlefield. In these scenarios a GPS device for each sensor node cannot be afforded. Some of the reasons for this include cost, size and battery drain of constrained sensor nodes.
To overcome this problem, a limited number of reference nodes (with GPS) within a field is employed. These nodes continuously broadcast their locations, and other nodes in proximity receive these locations and calculate their position using some mathematical technique like trilateration. For localization, at least three known reference locations are necessary. Several localization algorithms based on the Sequential Monte Carlo (SMC) method have been proposed in the literature.[2][3]Sometimes a node in some places receives only two known locations and hence it becomes impossible to localize it. To overcome this problem, the dead reckoning technique is used: a sensor node uses its previously calculated location for localization at later time intervals.[4]For example, if at time instant 1 node A calculates its position as loca_1 with the help of three known reference locations, then at time instant 2 it uses loca_1 along with two other reference locations received from two other reference nodes. This not only localizes a node in less time but also localizes it in positions where it is difficult to get three reference locations.[5]
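A rough sketch of that fallback in Python (the least-squares trilateration and the toy beacon layout are illustrative; practical SMC-based localizers are considerably more involved):

import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position fix from three or more reference locations
    and measured distances to them."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    # Linearize by subtracting the first circle equation from the others.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def localize(references, ranges, previous_fix=None, moved=0.0):
    """Use three references when available; otherwise fall back to dead
    reckoning by treating the previous fix (plus the estimated distance
    moved since then) as the missing third reference."""
    if len(references) >= 3:
        return trilaterate(references, ranges)
    if previous_fix is not None:
        return trilaterate(list(references) + [tuple(previous_fix)],
                           list(ranges) + [moved])
    raise ValueError("not enough reference locations to localize")

# Node at (3, 4): three beacons heard at time 1, only two at time 2.
fix1 = localize([(0, 0), (10, 0), (0, 10)],
                [5.0, np.hypot(7, 4), np.hypot(3, 6)])
fix2 = localize([(10, 0), (0, 10)], [np.hypot(7, 4), np.hypot(3, 6)],
                previous_fix=fix1, moved=0.0)
print(fix1.round(2), fix2.round(2))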
In studies of animal navigation, dead reckoning is more commonly (though not exclusively) known aspath integration. Animals use it to estimate their current location based on their movements from their last known location. Animals such as ants, rodents, and geese have been shown to track their locations continuously relative to a starting point and to return to it, an important skill for foragers with a fixed home.[6][7]
In marine navigation a "dead" reckoning plot generally does not take into account the effect ofcurrentsorwind. Aboard ship a dead reckoning plot is considered important in evaluating position information and planning the movement of the vessel.[8]
Dead reckoning begins with a known position, orfix, which is then advanced, mathematically or directly on the chart, by means of recorded heading, speed, and time. Speed can be determined by many methods. Before modern instrumentation, it was determined aboard ship using achip log. More modern methods includepit logreferencing engine speed (e.g. inrpm) against a table of total displacement (for ships) or referencing one's indicated airspeed fed by the pressure from apitot tube. This measurement is converted to anequivalent airspeedbased upon known atmospheric conditions and measured errors in the indicated airspeed system. A naval vessel uses a device called apit sword(rodmeter), which uses two sensors on a metal rod to measure the electromagnetic variance caused by the ship moving through water. This change is then converted to ship's speed. Distance is determined by multiplying the speed and the time. This initial position can then be adjusted resulting in an estimated position by taking into account the current (known asset and driftin marine navigation). If there is no positional information available, a new dead reckoning plot may start from an estimated position. In this case subsequent dead reckoning positions will have taken into account estimated set and drift.
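As a worked illustration, advancing a fix by recorded course, speed, and time, and then applying an estimated set and drift to obtain an estimated position, might look like this (a flat-earth sketch with positions in nautical miles east/north and courses in degrees true):

import math

def advance_fix(x, y, course_deg, speed_kn, hours, set_deg=None, drift_kn=0.0):
    """Advance an (east, north) position in nautical miles by course, speed,
    and elapsed time; optionally apply an estimated current (set and drift)."""
    rad = math.radians(course_deg)
    x += speed_kn * hours * math.sin(rad)
    y += speed_kn * hours * math.cos(rad)
    if set_deg is not None:                  # estimated position: add the current's effect
        srad = math.radians(set_deg)
        x += drift_kn * hours * math.sin(srad)
        y += drift_kn * hours * math.cos(srad)
    return x, y

dr_position = advance_fix(0.0, 0.0, course_deg=90, speed_kn=10, hours=2)     # pure dead reckoning
est_position = advance_fix(0.0, 0.0, 90, 10, 2, set_deg=180, drift_kn=1.5)   # with a 1.5 kn current setting south
print(dr_position, est_position)   # roughly (20, 0) and (20, -3)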
Dead reckoning positions are calculated at predetermined intervals, and are maintained between fixes. The duration of the interval varies. Factors including one's speed made good and the nature of heading and other course changes, and the navigator's judgment determine when dead reckoning positions are calculated.
Before the 18th-century development of themarine chronometerbyJohn Harrisonand thelunar distance method, dead reckoning was the primary method of determininglongitudeavailable to mariners such asChristopher ColumbusandJohn Caboton their trans-Atlantic voyages. Tools such as thetraverse boardwere developed to enable even illiterate crew members to collect the data needed for dead reckoning.Polynesian navigation, however, uses differentwayfindingtechniques.
On 14 June 1919, John Alcock and Arthur Brown took off from Lester's Field in St. John's, Newfoundland in a Vickers Vimy. They navigated across the Atlantic Ocean by dead reckoning and landed in County Galway, Ireland at 8:40 a.m. on 15 June, completing the first non-stop transatlantic flight.
On 21 May 1927Charles Lindberghlanded inParis, Franceafter a successful non-stop flight from the United States in the single-enginedSpirit of St. Louis. As the aircraft was equipped with very basic instruments, Lindbergh used dead reckoning to navigate.
Dead reckoning in the air is similar to dead reckoning on the sea, but slightly more complicated. The density of the air the aircraft moves through affects its performance as well as winds, weight, and power settings.
The basic formula for DR is Distance = Speed x Time. An aircraft flying at 250 knots airspeed for 2 hours has flown 500 nautical miles through the air. Thewind triangleis used to calculate the effects of wind on heading and airspeed to obtain a magnetic heading to steer and the speed over the ground (groundspeed). Printed tables, formulae, or anE6Bflight computer are used to calculate the effects of air density on aircraft rate of climb, rate of fuel burn, and airspeed.[9]
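A sketch of that arithmetic, including the wind-triangle correction, in Python (angles in degrees true, speeds in knots; the numbers are illustrative):

import math

def wind_triangle(course_deg, tas_kn, wind_from_deg, wind_kn):
    """Return (heading, groundspeed) needed to make good `course_deg`
    at true airspeed `tas_kn` in wind blowing FROM `wind_from_deg`."""
    delta = math.radians(wind_from_deg - course_deg)
    wca = math.asin(wind_kn / tas_kn * math.sin(delta))      # wind correction angle
    heading = (course_deg + math.degrees(wca)) % 360
    groundspeed = tas_kn * math.cos(wca) - wind_kn * math.cos(delta)
    return heading, groundspeed

hdg, gs = wind_triangle(course_deg=0, tas_kn=250, wind_from_deg=90, wind_kn=30)
print(round(hdg, 1), round(gs, 1))          # crab right about 6.9 degrees, about 248 kn over the ground
print(round(gs * 2, 1), "nautical miles")   # distance made good in 2 hours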
A course line is drawn on the aeronautical chart along with estimated positions at fixed intervals (say every half hour). Visual observations of ground features are used to obtain fixes. By comparing the fix and the estimated position, corrections are made to the aircraft's heading and groundspeed.
Dead reckoning is on the curriculum for VFR (visual flight rules – or basic level) pilots worldwide.[10]It is taught regardless of whether the aircraft has navigation aids such as GPS,ADFandVORand is anICAORequirement. Many flying training schools will prevent a student from using electronic aids until they have mastered dead reckoning.
Inertial navigation systems(INSes), which are nearly universal on more advanced aircraft, use dead reckoning internally. The INS provides reliable navigation capability under virtually any conditions, without the need for external navigation references, although it is still prone to slight errors.
Dead reckoning is today implemented in some high-end automotive navigation systems in order to overcome the limitations of GPS/GNSS technology alone. Satellite microwave signals are unavailable in parking garages and tunnels, and often severely degraded in urban canyons and near trees due to blocked lines of sight to the satellites or multipath propagation. In a dead-reckoning navigation system, the car is equipped with sensors that know the wheel circumference and record wheel rotations and steering direction. These sensors are often already present in cars for other purposes (anti-lock braking system, electronic stability control) and can be read by the navigation system from the controller-area network bus. The navigation system then uses a Kalman filter to integrate the always-available sensor data with the accurate but occasionally unavailable position information from the satellite data into a combined position fix.
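A heavily simplified, one-dimensional sketch of that fusion idea (wheel odometry advances the dead-reckoned position every interval, and a scalar Kalman update is applied only when a satellite fix is available; real systems estimate a full 2-D or 3-D state with many more error sources):

def fuse(odometry, gps_fixes, odo_var=0.5, gps_var=4.0):
    """Blend dead-reckoned wheel odometry with intermittent GPS fixes
    using a scalar Kalman filter.
    odometry:  distance travelled in each interval (metres).
    gps_fixes: dict {interval index: measured position} where GPS was available."""
    x, p = 0.0, 1.0                      # position estimate and its variance
    track = []
    for k, d in enumerate(odometry):
        x += d                           # predict: advance the estimate by odometry
        p += odo_var                     # dead-reckoning uncertainty grows each step
        if k in gps_fixes:               # update: blend in the satellite fix when present
            gain = p / (p + gps_var)
            x += gain * (gps_fixes[k] - x)
            p *= 1.0 - gain
        track.append(round(x, 2))
    return track

# GPS seen only at intervals 0 and 6 (say, before entering and after leaving a tunnel).
print(fuse([2.0] * 8, {0: 2.3, 6: 13.5}))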
Dead reckoning is utilized in some robotic applications.[11]It is usually used to reduce the need for sensing technology, such asultrasonic sensors, GPS, or placement of somelinearandrotary encoders, in anautonomous robot, thus greatly reducing cost and complexity at the expense of performance and repeatability. The proper utilization of dead reckoning in this sense would be to supply a known percentage of electrical power orhydraulicpressure to the robot's drive motors over a given amount of time from a general starting point. Dead reckoning is not totally accurate, which can lead to errors in distance estimates ranging from a few millimeters (inCNC machining) to kilometers (inUAVs), based upon the duration of the run, the speed of the robot, the length of the run, and several other factors.[citation needed]
With the increased sensor offering insmartphones, built-in accelerometers can be used as apedometerand built-inmagnetometeras a compass heading provider.Pedestrian dead reckoning(PDR) can be used to supplement other navigation methods in a similar way to automotive navigation, or to extend navigation into areas where other navigation systems are unavailable.[12]
In a simple implementation, the user holds their phone in front of them and each step causes position to move forward a fixed distance in the direction measured by the compass. Accuracy is limited by the sensor precision, magnetic disturbances inside structures, and unknown variables such as carrying position and stride length. Another challenge is differentiating walking from running, and recognizing movements like bicycling, climbing stairs, or riding an elevator.
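In code, that simple implementation amounts to adding one stride-length vector per detected step in the measured compass heading (the stride length and heading samples below are invented):

import math

def pedestrian_dead_reckoning(start, step_headings_deg, stride_m=0.7):
    """Advance an (east, north) position by one fixed-length stride per
    detected step, in the compass heading measured for that step."""
    x, y = start
    for heading in step_headings_deg:
        x += stride_m * math.sin(math.radians(heading))
        y += stride_m * math.cos(math.radians(heading))
    return x, y

# Ten steps roughly north followed by ten steps roughly east.
headings = [2, -1, 0, 3, 1, 0, -2, 1, 0, 2] + [88, 92, 90, 91, 89, 90, 93, 88, 90, 90]
print(pedestrian_dead_reckoning((0.0, 0.0), headings))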
Before phone-based systems existed, many custom PDR systems existed. While apedometercan only be used to measure linear distance traveled, PDR systems have an embedded magnetometer for heading measurement. Custom PDR systems can take many forms including special boots, belts, and watches, where the variability of carrying position has been minimized to better utilize magnetometer heading. True dead reckoning is fairly complicated, as it is not only important to minimize basic drift, but also to handle different carrying scenarios and movements, as well as hardware differences across phone models.[13]
The south-pointing chariot was an ancient Chinese device consisting of a two-wheeledhorse-drawn vehiclewhich carried a pointer that was intended always to aim to the south, no matter how the chariot turned. The chariot pre-dated the navigational use of themagnetic compass, and could notdetectthe direction that was south. Instead it used a kind ofdirectional dead reckoning: at the start of a journey, the pointer was aimed southward by hand, using local knowledge or astronomical observations e.g. of thePole Star. Then, as it traveled, a mechanism possibly containingdifferentialgears used the different rotational speeds of the two wheels to turn the pointer relative to the body of the chariot by the angle of turns made (subject to available mechanical accuracy), keeping the pointer aiming in its original direction, to the south. Errors, as always with dead reckoning, would accumulate as distance traveled increased.
Networked games and simulation tools routinely use dead reckoning to predict where an actor should be right now, using its last known kinematic state (position, velocity, acceleration, orientation, and angular velocity).[14]This is primarily needed because it is impractical to send network updates at the rate that most games run, 60 Hz. The basic solution starts by projecting into the future using linear physics:[15]
$$P_{t}=P_{0}+V_{0}T+{\frac {1}{2}}A_{0}T^{2}$$
This formula is used to move the object until a new update is received over the network. At that point, the problem is that there are now two kinematic states: the currently estimated position and the just received, actual position. Resolving these two states in a believable way can be quite complex. One approach is to create a curve (e.g. cubicBézier splines,centripetal Catmull–Rom splines, andHermite curves)[16]between the two states while still projecting into the future. Another technique is to use projective velocity blending, which is the blending of two projections (last known and current) where the current projection uses a blending between the last known and current velocity over a set time.[14]
The first equation calculates a blended velocity $V_{b}$ given the client-side velocity at the time of the last server update $V_{0}$ and the last known server-side velocity $\acute{V}_{0}$. This essentially blends from the client-side velocity towards the server-side velocity for a smooth transition. Note that $\hat{T}$ should go from zero (at the time of the server update) to one (at the time at which the next update should be arriving). A late server update is unproblematic as long as $\hat{T}$ remains at one.

Next, two positions are calculated: firstly, the blended velocity $V_{b}$ and the last known server-side acceleration $\acute{A}_{0}$ are used to calculate $P_{t}$. This is a position which is projected from the client-side start position $P_{0}$ based on $T_{t}$, the time which has passed since the last server update. Secondly, the same equation is used with the last known server-side parameters to calculate the position projected from the last known server-side position $\acute{P}_{0}$ and velocity $\acute{V}_{0}$, resulting in $\acute{P}_{t}$.

Finally, the new position to display on the client $Pos$ is the result of interpolating from the projected position based on client information $P_{t}$ towards the projected position based on the last known server information $\acute{P}_{t}$. The resulting movement smoothly resolves the discrepancy between client-side and server-side information, even if this server-side information arrives infrequently or inconsistently. It is also free of oscillations which spline-based interpolation may suffer from.
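One reading of that procedure, written out in scalar form for brevity (vector positions work the same way; the variable names mirror the quantities above, and the expected update interval is an assumed parameter):

def projective_velocity_blend(p0, v0, p0_srv, v0_srv, a_srv, t, t_hat):
    """Dead-reckon a networked object between server updates.
    p0, v0                -- client-side position/velocity at the last server update
    p0_srv, v0_srv, a_srv -- last known server-side position/velocity/acceleration
    t                     -- seconds since the last server update
    t_hat                 -- t divided by the expected update interval, clamped to [0, 1]"""
    t_hat = min(max(t_hat, 0.0), 1.0)
    v_b = v0 + (v0_srv - v0) * t_hat                        # blended velocity
    p_client = p0 + v_b * t + 0.5 * a_srv * t * t           # projection from client state
    p_server = p0_srv + v0_srv * t + 0.5 * a_srv * t * t    # projection from server state
    return p_client + (p_server - p_client) * t_hat         # blend the two projections

# Client thought it was at 10.0 moving at 5 m/s; the update says 9.5 at 6 m/s.
for step in range(1, 5):
    t = 0.025 * step                                        # update expected every 0.1 s
    print(round(projective_velocity_blend(10.0, 5.0, 9.5, 6.0, 0.0, t, t / 0.1), 3))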
In computer science, dead-reckoning refers to navigating anarray data structureusing indexes. Since every array element has the same size, it is possible todirectly accessone array element by knowing any position in the array.[17]
Given an array of equally sized elements stored contiguously in memory, and knowing the memory address where the array starts, it is easy to compute the memory address of any element, say element D:
$$\text{address}_{\text{D}}=\text{address}_{\text{start of array}}+({\text{size}}_{\text{array element}}\times {\text{arrayIndex}}_{\text{D}})$$
Likewise, knowing D's memory address, it is easy to compute the memory address of B:
$$\text{address}_{\text{B}}=\text{address}_{\text{D}}-({\text{size}}_{\text{array element}}\times ({\text{arrayIndex}}_{\text{D}}-{\text{arrayIndex}}_{\text{B}}))$$
This property is particularly important forperformancewhen used in conjunction with arrays ofstructuresbecause data can be directly accessed, without going through apointer dereference.
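The same index arithmetic can be checked with NumPy, which exposes an array's base address and element size (the array contents here are arbitrary):

import numpy as np

arr = np.arange(6, dtype=np.int32)            # six 4-byte elements, indexes 0..5
base = arr.__array_interface__["data"][0]     # memory address of the first element
size = arr.itemsize                           # bytes per element

addr_d = base + size * 3                      # jump straight to the element at index 3 ("D")
addr_b = addr_d - size * (3 - 1)              # and back from index 3 to index 1 ("B")

assert addr_b == base + size * 1
print(hex(base), hex(addr_d), hex(addr_b))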
|
https://en.wikipedia.org/wiki/Dead_reckoning
|
COBIT(Control Objectives for Information and Related Technologies) is a framework created byISACAforinformation technology (IT) managementandIT governance.[1]
The framework is business focused and defines a set of generic processes for the management of IT, with each process defined together with process inputs and outputs, key process-activities, process objectives, performance measures and an elementarymaturity model.[1]
Business and IT goals are linked and measured to define the responsibilities of business and IT teams.
Five process domains are identified: Evaluate, Direct and Monitor (EDM); Align, Plan and Organize (APO); Build, Acquire and Implement (BAI); Deliver, Service and Support (DSS); and Monitor, Evaluate and Assess (MEA).[2]
The COBIT framework ties in withCOSO,ITIL,[3]BiSL,ISO 27000,CMMI,TOGAFandPMBOK.[1]
The framework helps companies comply with legal requirements, become more agile, and increase earnings.[4]
The COBIT components include the framework itself, process descriptions, control objectives, management guidelines, and maturity models.
The standard is designed to meet the needs of practice while remaining independent of specific manufacturers, technologies and platforms. It can be used both for auditing a company's IT system and for designing an IT system: in the first case, COBIT makes it possible to determine the degree of conformity of the system under study to best practice, and in the second, to design a system that is nearly ideal in its characteristics.
COBIT was initially "Control Objectives for Information and Related Technologies," though before the release of the framework people talked of "CobiT" as "Control Objectives for IT"[5]or "Control Objectives for Information and Related Technology."[6]
ISACA first released COBIT in 1996, originally as a set of control objectives[clarification needed]to help the financial audit community better maneuver in IT-related environments.[1][7]Seeing value in expanding the framework beyond just the auditing realm, ISACA released a broader version 2 in 1998 and expanded it even further by adding management guidelines in 2000's version 3. The development of both theAS 8015:Australian Standard for Corporate Governance of Information and Communication Technologyin January 2005[8]and the more international draft standard ISO/IEC DIS 29382 (which soon after becameISO/IEC 38500) in January 2007[9]increased awareness of the need for more information and communication technology (ICT) governance components. ISACA inevitably added related components/frameworks with versions 4 and 4.1 in 2005 and 2007 respectively, "addressing the IT-related business processes and responsibilities in value creation (Val IT) andrisk management(Risk IT)."[1][7]
COBIT 5 (2012) is based on COBIT 4.1, Val IT 2.0 and Risk IT frameworks, and draws on ISACA'sIT Assurance Framework(ITAF) and theBusiness Model for Information Security(BMIS).[10][11]
ISACA currently offers certification tracks on both COBIT 2019 (COBIT Foundations, COBIT Design & Implementation, and Implementing the NIST Cybersecurity Framework Using COBIT 2019)[12]as well as certification in the previous version (COBIT 5).[13][14]
|
https://en.wikipedia.org/wiki/COBIT
|
Inabstract algebra, anelementaof aringRis called aleft zero divisorif there exists a nonzeroxinRsuch thatax= 0,[1]or equivalently if themapfromRtoRthat sendsxtoaxis notinjective.[a]Similarly, an elementaof a ring is called aright zero divisorif there exists a nonzeroyinRsuch thatya= 0. This is a partial case ofdivisibility in rings. An element that is a left or a right zero divisor is simply called azero divisor.[2]An elementathat is both a left and a right zero divisor is called atwo-sided zero divisor(the nonzeroxsuch thatax= 0may be different from the nonzeroysuch thatya= 0). If the ring iscommutative, then the left and right zero divisors are the same.
An element of a ring that is not a left zero divisor (respectively, not a right zero divisor) is calledleft regularorleft cancellable(respectively,right regularorright cancellable).
An element of a ring that is left and right cancellable, and is hence not a zero divisor, is calledregularorcancellable,[3]or anon-zero-divisor. A zero divisor that is nonzero is called anonzero zero divisoror anontrivial zero divisor. A non-zeroring with no nontrivial zero divisors is called adomain.
For example, in the ring of 2 × 2 matrices with integer entries, the following products exhibit nonzero matrices that are zero divisors:

$$\begin{pmatrix}1&1\\2&2\end{pmatrix}\begin{pmatrix}1&1\\-1&-1\end{pmatrix}=\begin{pmatrix}-2&1\\-2&1\end{pmatrix}\begin{pmatrix}1&1\\2&2\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix},\qquad \begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}0&0\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}.$$
There is no need for a separate convention for the case a = 0, because the definition applies also in this case: if R is a ring other than the zero ring, then 0 is a (two-sided) zero divisor, since any nonzero element x satisfies 0x = 0 = x0; if R is the zero ring, in which 0 = 1, then 0 is not a zero divisor, because there is no nonzero element that yields 0 when multiplied by 0.
Some references include or exclude 0 as a zero divisor in all rings by convention, but they then suffer from having to introduce exceptions in general statements about zero divisors.
Let $R$ be a commutative ring, let $M$ be an $R$-module, and let $a$ be an element of $R$. One says that $a$ is $M$-regular if the "multiplication by $a$" map $M\,{\xrightarrow {a}}\,M$ is injective, and that $a$ is a zero divisor on $M$ otherwise.[4] The set of $M$-regular elements is a multiplicative set in $R$.[4]
Specializing the definitions of "M-regular" and "zero divisor onM" to the caseM=Rrecovers the definitions of "regular" and "zero divisor" given earlier in this article.
|
https://en.wikipedia.org/wiki/Zero_divisor
|
Tensor Processing Unit(TPU) is anAI acceleratorapplication-specific integrated circuit(ASIC) developed byGoogleforneural networkmachine learning, using Google's ownTensorFlowsoftware.[2]Google began using TPUs internally in 2015, and in 2018 made them available forthird-partyuse, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
Compared to agraphics processing unit, TPUs are designed for a high volume of lowprecisioncomputation (e.g. as little as8-bitprecision)[3]with more input/output operations perjoule, without hardware for rasterisation/texture mapping.[4]The TPUASICsare mounted in a heatsink assembly, which can fit in a hard drive slot within a data centerrack, according toNorman Jouppi.[5]
Different types of processors are suited for different types of machine learning models. TPUs are well suited forCNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages forRNNs.[6]
According to Jonathan Ross, one of the original TPU engineers,[1]and later the founder ofGroq, three separate groups at Google were developing AI accelerators, with the TPU being the design that was ultimately selected. He was not aware ofsystolic arraysat the time and upon learning the term thought "Oh, that's called a systolic array? It just seemed to make sense."[7]
The tensor processing unit was announced in May 2016 atGoogle I/O, when the company said that the TPU had already been used insidetheir data centersfor over a year.[5][4]Google's 2017 paper describing its creation cites previous systolic matrix multipliers of similar architecture built in the 1990s.[8]The chip has been specifically designed for Google'sTensorFlowframework, a symbolic math library which is used formachine learningapplications such asneural networks.[9]However, as of 2017 Google still usedCPUsandGPUsfor other types ofmachine learning.[5]OtherAI acceleratordesigns are appearing from other vendors also and are aimed atembeddedandroboticsmarkets.
Google's TPUs are proprietary. Some models are commercially available, and on February 12, 2018,The New York Timesreported that Google "would allow other companies to buy access to those chips through its cloud-computing service."[10]Google has said that they were used in theAlphaGo versus Lee Sedolseries of human-versus-machineGogames,[4]as well as in theAlphaZerosystem, which producedChess,Shogiand Go playing programs from the game rules alone and went on to beat the leading programs in those games.[11]Google has also used TPUs forGoogle Street Viewtext processing and was able to find all the text in the Street View database in less than five days. InGoogle Photos, an individual TPU can process over 100 million photos a day.[5]It is also used inRankBrainwhich Google uses to provide search results.[12]
Google provides third parties access to TPUs through itsCloud TPUservice as part of theGoogle Cloud Platform[13]and through itsnotebook-basedservicesKaggleandColaboratory.[14][15]
The first-generation TPU is an 8-bit matrix multiplication engine, driven with CISC instructions by the host processor across a PCIe 3.0 bus. It is manufactured on a 28 nm process with a die size ≤ 331 mm². The clock speed is 700 MHz and it has a thermal design power of 28–40 W. It has 28 MiB of on-chip memory, and 4 MiB of 32-bit accumulators taking the results of a 256×256 systolic array of 8-bit multipliers.[8] Within the TPU package is 8 GiB of dual-channel 2133 MHz DDR3 SDRAM offering 34 GB/s of bandwidth.[18] Instructions transfer data to or from the host, perform matrix multiplications or convolutions, and apply activation functions.[8]
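Numerically, the work done by that array is an 8-bit integer matrix multiplication whose partial products are summed into 32-bit accumulators. The NumPy sketch below reproduces only that numerical behaviour, not the systolic data flow or the TPU instruction set:

import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)   # activations
w = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)   # weights

# Widen to int32 before multiplying so every 8-bit partial product is
# accumulated exactly, mirroring the role of the 32-bit accumulators.
acc = a.astype(np.int32) @ w.astype(np.int32)

print(acc.dtype, int(acc.min()), int(acc.max()))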
The second-generation TPU was announced in May 2017.[27]Google stated the first-generation TPU design was limited bymemory bandwidthand using 16GBofHigh Bandwidth Memoryin the second-generation design increased bandwidth to 600 GB/s and performance to 45 teraFLOPS.[18]The TPUs are then arranged into four-chip modules with a performance of 180 teraFLOPS.[27]Then 64 of these modules are assembled into 256-chip pods with 11.5 petaFLOPS of performance.[27]Notably, while the first-generation TPUs were limited to integers, the second-generation TPUs can also calculate infloating point, introducing thebfloat16format invented byGoogle Brain. This makes the second-generation TPUs useful for both training and inference of machine learning models. Google has stated these second-generation TPUs will be available on theGoogle Compute Enginefor use in TensorFlow applications.[28]
The third-generation TPU was announced on May 8, 2018.[29]Google announced that the processors themselves are twice as powerful as the second-generation TPUs, and would be deployed in pods with four times as many chips as the preceding generation.[30][31]This results in an 8-fold increase in performance per pod (with up to 1,024 chips per pod) compared to the second-generation TPU deployment.
On May 18, 2021, Google CEO Sundar Pichai spoke about TPU v4 Tensor Processing Units during his keynote at the Google I/O virtual conference. TPU v4 improved performance by more than 2x over TPU v3 chips. Pichai said "A single v4 pod contains 4,096 v4 chips, and each pod has 10x the interconnect bandwidth per chip at scale, compared to any other networking technology.”[32]An April 2023 paper by Google claims TPU v4 is 5-87% faster than an NvidiaA100at machine learningbenchmarks.[33]
There is also an "inference" version, called v4i,[34]that does not requireliquid cooling.[35]
In 2021, Google revealed the physical layout of TPU v5 is being designed with the assistance of a novel application ofdeep reinforcement learning.[36]Google claims TPU v5 is nearly twice as fast as TPU v4,[37]and based on that and the relative performance of TPU v4 over A100, some speculate TPU v5 as being as fast as or faster than anH100.[38]
Similar to the v4i being a lighter-weight version of the v4, the fifth generation has a "cost-efficient"[39]version called v5e.[21]In December 2023, Google announced TPU v5p which is claimed to be competitive with the H100.[40]
In May 2024, at theGoogle I/Oconference, Google announced TPU v6, which became available in preview in October 2024.[41]Google claimed a 4.7 times performance increase relative to TPU v5e,[42]via larger matrix multiplication units and an increased clock speed. High bandwidth memory (HBM) capacity and bandwidth have also doubled. A pod can contain up to 256 Trillium units.[43]
In April 2025, at Google Cloud Next conference, Google unveiled TPU v7. This new chip, called Ironwood, will come in two configurations: a 256-chip cluster and a 9,216-chip cluster. Ironwood will have a peak computational performance rate of 4,614 TFLOP/s.[44]
In July 2018, Google announced the Edge TPU. The Edge TPU is Google's purpose-builtASICchip designed to run machine learning (ML) models foredge computing, meaning it is much smaller and consumes far less power compared to the TPUs hosted in Google datacenters (also known as Cloud TPUs[45]). In January 2019, Google made the Edge TPU available to developers with a line of products under theCoralbrand. The Edge TPU is capable of 4 trillion operations per second with 2 W of electrical power.[46]
The product offerings include asingle-board computer(SBC), asystem on module(SoM), aUSBaccessory, a miniPCI-ecard, and anM.2card. TheSBCCoral Dev Board and Coral SoM both run Mendel Linux OS – a derivative ofDebian.[47][48]The USB, PCI-e, and M.2 products function as add-ons to existing computer systems, and support Debian-based Linux systems on x86-64 and ARM64 hosts (includingRaspberry Pi).
The machine learning runtime used to execute models on the Edge TPU is based onTensorFlow Lite.[49]The Edge TPU is only capable of accelerating forward-pass operations, which means it's primarily useful for performing inferences (although it is possible to perform lightweight transfer learning on the Edge TPU[50]). The Edge TPU also only supports 8-bit math, meaning that for a network to be compatible with the Edge TPU, it needs to either be trained using the TensorFlow quantization-aware training technique, or since late 2019 it's also possible to use post-training quantization.
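The post-training route is the standard TensorFlow Lite full-integer quantization workflow. A sketch of what such a conversion typically looks like, where the saved-model path, input shape, and calibration generator are placeholders:

import numpy as np
import tensorflow as tf

def representative_data_gen():
    # A few hundred typical inputs calibrate the quantization ranges;
    # random data stands in for a real calibration set here.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict the model to 8-bit integer ops so it can be compiled for the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())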
On November 12, 2019,Asusannounced a pair ofsingle-board computer (SBCs)featuring the Edge TPU. TheAsus Tinker Edge T and Tinker Edge R Boarddesigned forIoTandedgeAI. The SBCs officially supportAndroidandDebianoperating systems.[51][52]ASUS has also demonstrated a mini PC called Asus PN60T featuring the Edge TPU.[53]
On January 2, 2020, Google announced the Coral Accelerator Module and Coral Dev Board Mini, to be demonstrated atCES 2020later the same month. The Coral Accelerator Module is amulti-chip modulefeaturing the Edge TPU, PCIe and USB interfaces for easier integration. The Coral Dev Board Mini is a smallerSBCfeaturing the Coral Accelerator Module andMediaTek 8167s SoC.[54][55]
On October 15, 2019, Google announced thePixel 4smartphone, which contains an Edge TPU called thePixel Neural Core. Google describe it as "customized to meet the requirements of key camera features in Pixel 4", using a neural network search that sacrifices some accuracy in favor of minimizing latency and power use.[56]
Google followed the Pixel Neural Core by integrating an Edge TPU into a customsystem-on-chipnamedGoogle Tensor, which was released in 2021 with thePixel 6line of smartphones.[57]The Google Tensor SoC demonstrated "extremely large performance advantages over the competition" in machine learning-focused benchmarks; although instantaneous power consumption also was relatively high, the improved performance meant less energy was consumed due to shorter periods requiring peak performance.[58]
In 2019, Singular Computing, founded in 2009 by Joseph Bates, avisiting professoratMIT,[59]filed suit against Google allegingpatent infringementin TPU chips.[60]By 2020, Google had successfully lowered the number of claims the court would consider to just two: claim 53 ofUS 8407273filed in 2012 and claim 7 ofUS 9218156filed in 2013, both of which claim adynamic rangeof 10−6to 106for floating point numbers, which the standardfloat16cannot do (without resorting tosubnormal numbers) as it only has five bits for the exponent. In a 2023 court filing, Singular Computing specifically called out Google's use ofbfloat16, as that exceeds the dynamic range offloat16.[61]Singular claims non-standard floating point formats werenon-obviousin 2009, but Google retorts that the VFLOAT[62]format, with configurable number of exponent bits, existed asprior artin 2002.[63]By January 2024, subsequent lawsuits by Singular had brought the number of patents being litigated up to eight. Towards the end of the trial later that month, Google agreed to a settlement with undisclosed terms.[64][65]
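The dynamic-range point is easy to check numerically: float16's five exponent bits top out near 6.5 × 10⁴ and reach 10⁻⁶ only through subnormals, whereas bfloat16's eight exponent bits cover 10⁻⁶ to 10⁶ comfortably. A small demonstration (the bfloat16 dtype is taken from the ml_dtypes package, an assumption about the environment):

import numpy as np
from ml_dtypes import bfloat16   # assumed to be installed (pip install ml_dtypes)

print(np.finfo(np.float16).max, np.finfo(np.float16).tiny)  # 65504.0 and ~6.1e-05
print(np.float16(1e6))                                      # inf: 1e6 exceeds float16's range
print(np.float16(1e-6))                                     # representable only as a subnormal
print(float(bfloat16(1e6)), float(bfloat16(1e-6)))          # both fall within bfloat16's exponent range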
|
https://en.wikipedia.org/wiki/Tensor_Processing_Unit
|
On anEthernetconnection, aduplex mismatchis a condition where two connected devices operate in differentduplex modes, that is, one operates in half duplex while the other one operates in full duplex. The effect of a duplex mismatch is a link that operates inefficiently. Duplex mismatch may be caused by manually setting two connected network interfaces at different duplex modes or by connecting a device that performsautonegotiationto one that is manually set to a full duplex mode.[1]
When a device set to autonegotiation is connected to a device that is not using autonegotiation, the autonegotiation process fails. The autonegotiating end of the connection is still able to correctly detect the speed of the other end, but cannot correctly detect the duplex mode. For backward compatibility withEthernet hubs, the standard requires the autonegotiating device to use half duplex in these conditions. Therefore, the autonegotiating end of the connection uses half duplex while the non-negotiating peer is locked at full duplex, and this is a duplex mismatch.
The Ethernet standards and major Ethernet equipment manufacturers recommend enabling autonegotiation.[2][3][4]Nevertheless, network equipment allows autonegotiation to be disabled and on some networks, autonegotiation is disabled on all ports and a fixed modality of 100 Mbit/s and full duplex is used. That was often done by network administrators intentionally upon the introduction of autonegotiation, because ofinteroperability issueswith the initial autonegotiation specification. The fixed mode of operation works well if both ends of a connection are locked to the same settings. However, maintaining such a network and guaranteeing consistency is difficult. Since autonegotiation is generally the manufacturer’s default setting it is almost certain that, in an environment where the policy is to have fixed port settings, someone will sooner or later leave a port set to use autonegotiation by mistake.[5]
Communicationispossible over a connection in spite of a duplex mismatch. Single packets are sent and acknowledged without problems. As a result, a simplepingcommand fails to detect a duplex mismatch because single packets and their resulting acknowledgments at 1-second intervals do not cause any problem on the network. A terminal session which sends data slowly (in very short bursts) can also communicate successfully. However, as soon as either end of the connection attempts to send any significant amount of data, the network suddenly slows to very low speed. Since the network is otherwise working, the cause is not so readily apparent.
A duplex mismatch causes problems when both ends of the connection attempt to transfer data at the same time. This happens even if the channel is used (from a high-level or user's perspective) in one direction only, in case of large data transfers. Indeed, when a large data transfer is sent over aTCP, data is sent in multiple packets, some of which will trigger an acknowledgment packet back to the sender. This results in packets being sent in both directions at the same time.
In such conditions, the full-duplex end of the connection sends its packets while receiving other packets; this is exactly the point of a full-duplex connection. Meanwhile, the half-duplex end cannot accept the incoming data while it is sending – it will sense it as acollision. The half-duplex device ceases its current data transmission, sends a jam signal instead and then retries later as perCSMA/CD. This results in the full-duplex side receiving an incomplete frame with CRC error or arunt frame. It does not detect any collision since CSMA/CD is disabled on the full-duplex side. As a result, when both devices are attempting to transmit at (nearly) the same time, the packet sent by the full-duplex end will be discarded and lost due to an assumed collision and the packet sent by the half duplex device will be delayed or lost due to a CRC error in the frame.[6]
The lost packets force the TCP protocol to perform error recovery, but the initial (streamlined) recovery attempts fail because the retransmitted packets are lost in exactly the same way as the original packets. Eventually, the TCP transmission window becomes full and the TCP protocol refuses to transmit any further data until the previously-transmitted data is acknowledged. This, in turn, will quiesce the new traffic over the connection, leaving only the retransmissions and acknowledgments. Since the retransmission timer grows progressively longer between attempts, eventually a retransmission will occur when there is no reverse traffic on the connection, and the acknowledgments are finally received. This will restart the TCP traffic, which in turn immediately causes lost packets as streaming resumes.
The end result is a connection that is working but performsextremelypoorly because of the duplex mismatch. Symptoms of a duplex mismatch are connections that seem to work fine with apingcommand, but "lock up" easily with very low throughput on data transfers; the effective data transfer rate is likely to be asymmetrical, performing much worse in the half-duplex to full-duplex direction than the other. In normal half-duplex operationslate collisionsdo not occur. However, in a duplex mismatch the collisions seen on the half-duplex side of the link are often late collisions. The full-duplex side usually will registerframe check sequenceerrors, orrunt frames.[7][8]Viewing these standard Ethernet statistics can help diagnose the problem.
Contrary to what one might reasonably expect, both sides of a connection need to be identically configured for proper operation. In other words, setting one side to automatic (either speed or duplex or both) and setting the other to be fixed (either speed or duplex or both) will likely result in either a speed mismatch, a duplex mismatch or both. A duplex mismatch can be fixed by either enabling autonegotiation (if available and working) on both ends or by forcing the same settings on both ends (availability of a configuration interface permitting). If there is no option but to have a locked setting on one end and autonegotiation the other (for example, an old device with broken autonegotiation connected to an unmanaged switch) half duplex must be used. All modern LAN equipment comes with autonegotiation enabled and the various compatibility issues have been resolved. The best way to avoid duplex mismatches is to use autonegotiation and to replace any legacy equipment that does not use autonegotiation or does not autonegotiate correctly.
|
https://en.wikipedia.org/wiki/Duplex_mismatch
|
Incryptography, anSP-network, orsubstitution–permutation network(SPN), is a series of linked mathematical operations used inblock cipheralgorithms such asAES (Rijndael),3-Way,Kalyna,Kuznyechik,PRESENT,SAFER,SHARK, andSquare.
Such a network takes a block of theplaintextand thekeyas inputs, and applies several alternatingroundsorlayersofsubstitution boxes(S-boxes) andpermutation boxes(P-boxes) to produce theciphertextblock. The S-boxes and P-boxes transform(sub-)blocksof inputbitsinto output bits. It is common for these transformations to be operations that are efficient to perform in hardware, such asexclusive or(XOR) andbitwise rotation. The key is introduced in each round, usually in the form of "round keys" derived from it. (In some designs, theS-boxesthemselves depend on the key.)
Decryptionis done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
AnS-boxsubstitutes a small block of bits (the input of the S-box) by another block of bits (the output of the S-box). This substitution should beone-to-one, to ensure invertibility (hence decryption). In particular, the length of the output should be the same as the length of the input (the picture on the right has S-boxes with 4 input and 4 output bits), which is different from S-boxes in general that could also change the length, as inData Encryption Standard(DES), for example. An S-box is usually not simply apermutationof the bits. Rather, in a good S-box each output bit will be affected by every input bit. More precisely, in a good S-box each output bit will be changed with 50% probability by every input bit. Since each output bit changes with the 50% probability, about half of the output bits will actually change with an input bit change (cf.Strict avalanche criterion).[1]
A P-box is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined with the current block using some group operation, typically XOR.
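The following toy Python sketch illustrates the round structure just described (key mixing, an S-box layer, and a P-box layer, with decryption reversing the steps) on a 16-bit block. The S-box, permutation, and round-key values are arbitrary example values chosen only for demonstration; this is not a secure cipher.

# Toy 16-bit SPN for illustration only -- not a secure cipher.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]   # invertible 4-bit substitution
INV_SBOX = [SBOX.index(i) for i in range(16)]

# P-box: output bit i takes input bit PBOX[i] (a permutation of 16 bit positions).
PBOX = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
INV_PBOX = [PBOX.index(i) for i in range(16)]

def substitute(block, sbox):
    out = 0
    for nibble in range(4):                      # four 4-bit sub-blocks
        val = (block >> (4 * nibble)) & 0xF
        out |= sbox[val] << (4 * nibble)
    return out

def permute(block, pbox):
    out = 0
    for i in range(16):
        out |= ((block >> pbox[i]) & 1) << i
    return out

def encrypt(block, round_keys):
    for k in round_keys:
        block ^= k                               # key mixing (XOR)
        block = substitute(block, SBOX)          # S-box layer
        block = permute(block, PBOX)             # P-box layer
    return block

def decrypt(block, round_keys):
    for k in reversed(round_keys):               # reverse the rounds
        block = permute(block, INV_PBOX)
        block = substitute(block, INV_SBOX)
        block ^= k
    return block

keys = [0x1A2B, 0x3C4D, 0x5E6F]                  # arbitrary example round keys
ciphertext = encrypt(0xBEEF, keys)
assert decrypt(ciphertext, keys) == 0xBEEF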
A single typical S-box or a single P-box alone does not have much cryptographic strength: an S-box could be thought of as a substitution cipher, while a P-box could be thought of as a transposition cipher. However, a well-designed SP network with several alternating rounds of S- and P-boxes already satisfies Shannon's confusion and diffusion properties: the S-boxes provide confusion by obscuring the relationship between the key and the ciphertext, while the P-boxes provide diffusion by spreading the influence of each input bit across many output bits.
Although a Feistel network that uses S-boxes (such as DES) is quite similar to an SP network, there are some differences that make one or the other more applicable in certain situations. For a given amount of confusion and diffusion, an SP network has more "inherent parallelism"[2] and so, given a CPU with many execution units, can be computed faster than a Feistel network.[3] CPUs with few execution units, such as most smart cards, cannot take advantage of this inherent parallelism. SP ciphers also require the S-boxes to be invertible (to perform decryption); Feistel inner functions have no such restriction and can be constructed as one-way functions.
|
https://en.wikipedia.org/wiki/Substitution%E2%80%93permutation_network
|
Absurdism is the philosophical theory that the universe is irrational and meaningless. It states that trying to find meaning leads people into conflict with a seemingly meaningless world. This conflict can be between rational man and an irrational universe, between intention and outcome, or between subjective assessment and objective worth, but the precise definition of the term is disputed. Absurdism claims that, due to one or more of these conflicts, existence as a whole is absurd. It differs in this regard from the less global thesis that some particular situations, persons, or phases in life are absurd.
Various components of the absurd are discussed in the academic literature, and different theorists frequently concentrate their definition and research on different components. On the practical level, the conflict underlying the absurd is characterized by the individual's struggle to find meaning in a meaningless world. The theoretical component, on the other hand, emphasizes more theepistemicinability of reason to penetrate and understandreality. Traditionally, the conflict is characterized as a collision between an internal component of human nature, and an external component of the universe. However, some later theorists have suggested that both components may be internal: the capacity to see through the arbitrariness of any ultimate purpose, on the one hand, and the incapacity to stop caring about such purposes, on the other hand. Certain accounts also involve ametacognitivecomponent by holding that anawarenessof the conflict is necessary for the absurd to arise.
Some arguments in favor of absurdism focus on the human insignificance in the universe, on the role ofdeath, or on the implausibility or irrationality of positing an ultimate purpose. Objections to absurdism often contend that life is in fact meaningful or point out certain problematic consequences or inconsistencies of absurdism. Defenders of absurdism often complain that it does not receive the attention of professional philosophers it merits in virtue of the topic's importance and its potential psychological impact on the affected individuals in the form ofexistential crises. Various possible responses to deal with absurdism and its impact have been suggested. The three responses discussed in the traditional absurdist literature aresuicide,religious beliefin a higher purpose, and rebellion against the absurd. Of these, rebellion is usually presented as the recommended response since, unlike the other two responses, it does not escape the absurd and instead recognizes it for what it is. Later theorists have suggested additional responses, like usingironyto take life less seriously or remaining ignorant of the responsible conflict. Some absurdists argue that whether and how one responds is insignificant. This is based on the idea that if nothing really matters then the human response toward this fact does not matter either.
The term "absurdism" is most closely associated with the philosophy ofAlbert Camus. However, important precursors and discussions of the absurd are also found in the works ofSøren Kierkegaard. Absurdism is intimately related to various other concepts and theories. Its basic outlook is inspired byexistentialistphilosophy. However, existentialism includes additional theoretical commitments and often takes a more optimistic attitude toward the possibility of finding or creating meaning in one's life. Absurdism andnihilismshare the belief that life is meaningless, but absurdists do not treat this as an isolated fact and are instead interested in the conflict between the humandesirefor meaning and the world's lack thereof. Being confronted with this conflict may trigger an existential crisis, in which unpleasantexperienceslikeanxietyordepressionmay push the affected to find a response for dealing with the conflict. Recognizing the absence of objective meaning, however, does not preclude the conscious thinker from finding subjective meaning.
Absurdism is thephilosophicalthesis that life, or the world in general, is absurd. There is wide agreement that the term "absurd" implies a lack ofmeaningor purpose but there is also significant dispute concerning its exact definition and various versions have been suggested.[1][2][3][4][5]The choice of one's definition has important implications for whether the thesis of absurdism is correct and for the arguments cited for and against it: it may be true on one definition and false on another.[6]
In a general sense, the absurd is that which lacks a sense, often because it involves some form ofcontradiction. The absurd is paradoxical in the sense that it cannot be grasped byreason.[7][8][9]But in the context of absurdism, the term is usually used in a more specific sense. According to most definitions, it involves a conflict, discrepancy, or collision between two things. Opinions differ on what these two things are.[1][2][3][4]For example, it is traditionally identified as the confrontation ofrationalman with an irrational world or as the attempt to grasp something based on reasons even though it is beyond the limits of rationality.[10][11]Similar definitions see the discrepancy betweenintentionand outcome, between aspiration andreality, or between subjective assessment and objective worth as the source of absurdity.[1][3]Other definitions locate both conflicting sides within man: the ability to apprehend the arbitrariness of final ends and the inability to let go of commitments to them.[4]In regard to the conflict, absurdism differs fromnihilismsince it is not just the thesis that nothing matters. Instead, it includes the component that things seem to matter to us nonetheless and that this impression cannot be shaken off. This difference is expressed in the relational aspect of the absurd in that it constitutes a conflict between two sides.[4][1][2]
Various components of the absurd have been suggested and different researchers often focus their definition and inquiry on one of these components. Some accounts emphasize the practical components concerned with the individual seeking meaning while others stress the theoretical components about being unable toknowthe world or to rationally grasp it. A different disagreement concerns whether the conflict exists only internal to the individual or is between the individual's expectations and theexternal world. Some theorists also include themetacognitivecomponent that the absurd entails that the individual is aware of this conflict.[2][3][12][4]
An important aspect of absurdism is that the absurd is not limited to particular situations but encompasses life as a whole.[2][1][13]There is a general agreement that people are often confronted with absurd situations in everyday life.[7]They often arise when there is a serious mismatch between one's intentions and reality.[2]For example, a person struggling to break down a heavy front door is absurd if the house they are trying to break into lacks a back wall and could easily be entered on this route.[1]But the philosophical thesis of absurdism is much more wide-reaching since it is not restricted to individual situations, persons, or phases in life. Instead, it asserts that life, or the world as a whole, is absurd. The claim that the absurd has such a global extension is controversial, in contrast to the weaker claim that some situations are absurd.[2][1][13]
The perspective of absurdism usually comes into view when the agent takes a step back from their individual everyday engagements with the world to assess their importance from a bigger context.[4][2][14]Such an assessment can result in the insight that the day-to-day engagements matter a lot to us despite the fact that they lack real meaning when evaluated from a wider perspective. This assessment reveals the conflict between the significance seen from the internal perspective and the arbitrariness revealed through the external perspective.[4]The absurd becomes a problem since there is a strongdesirefor meaning and purpose even though they seem to be absent.[7]In this sense, the conflict responsible for the absurd often either constitutes or is accompanied by anexistential crisis.[15][14]
An important component of the absurd on the practical level concerns the seriousness people bring toward life. This seriousness is reflected in many different attitudes and areas, for example, concerning fame,pleasure,justice, knowledge, or survival, both in regard to ourselves as well as in regard to others.[2][8][14]But there seems to be a discrepancy between how seriously we take our lives and the lives of others on the one hand, and how arbitrary they and the world at large seem to be on the other hand. This can be understood in terms ofimportanceand caring: it is absurd that people continue to care about these matters even though they seem to lack importance on an objective level.[16][17]The collision between these two sides can be defined as the absurd. This is perhaps best exemplified when the agent is seriously engaged in choosing between arbitrary options, none of which truly matters.[2][3]
Some theorists characterize theethicalsides of absurdism and nihilism in the same way as the view that it does not matter how we act or that "everything is permitted."[8]On this view, an important aspect of the absurd is that whatever higher end or purpose we choose to pursue, it can also be put into doubt since, in the last step, it always lacks a higher-order justification.[2][1]But usually, a distinction between absurdism and nihilism is made since absurdism involves the additional component that there is a conflict between man's desire for meaning and the absence of meaning.[18][14]
On a more theoretical view, absurdism is thebeliefthat the world is, at its core, indifferent and impenetrable toward human attempts to uncover its deeper reason or that it cannot be known.[12][10]According to this theoretical component, it involves theepistemologicalproblem of the human limitations of knowing the world.[12]This includes the thesis that the world is in critical ways ungraspable to humans, both in relation to what to believe and how to act.[12][10]This is reflected in the chaos and irrationality of the universe, which acts according to its own laws in a manner indifferent to human concerns and aspirations. It is closely related to the idea that the world remains silent when we ask why things are the way they are. This silence arises from the impression that, on the most fundamental level, all things exist without a reason: they are simply there.[12][19][20]An important aspect of these limitations to knowing the world is that they are essential tohuman cognition, i.e. they are not due to following false principles or accidental weaknesses but are inherent in the human cognitive faculties themselves.[12]
Some theorists also link this problem to thecircularity of human reason, which is very skilled at producing chains of justification linking one thing to another while trying and failing to do the same for the chain of justification as a whole when taking a reflective step backward.[2][14]This implies that human reason is not just too limited to grasp life as a whole but that, if one seriously tried to do so anyway, its ungrounded circularity might collapse and lead to madness.[2]
An important disagreement within the academic literature about the nature of absurdism and the absurd focuses specifically on whether the components responsible for the conflict are internal or external.[1][2][3][4]According to the traditional position, the absurd has both internal and external components: it is due to the discrepancy between man's internal desire to lead ameaningful lifeand the external meaninglessness of the world. In this view, humans have, among their desires, some transcendent aspirations that seek a higher form of meaning in life. The absurd arises since these aspirations are ignored by the world, which is indifferent to our "need for validation of the importance of our concerns."[1][3]This implies that the absurd "is not in man ... nor in the world, but in their presence together. " This position has been rejected by some later theorists, who hold that the absurd is purely internal because it "derives not from a collision between our expectations and the world, but from a collision within ourselves".[1][2][4][6]
The distinction is important since, on the latter view, the absurd is built into human nature and would prevail no matter what the world was like. So, it is not just that absurdism is true in the actual world. Instead, anypossible world, even one that was designed by a divine god and guided by them according to their higher purpose, would still be equally absurd to man. In this sense, absurdity is the product of the power of ourconsciousnessto take a step back from whatever it is considering and reflect on the reason of its object. When this process is applied to the world as a whole including God, it is bound to fail its search for a reason or an explanation, no matter what the world is like.[1][2][14]In this sense, absurdity arises from the conflict between features of ourselves: "our capacity to recognize the arbitrariness of our ultimate concerns and our simultaneous incapacity to relinquish our commitment to them".[4]This view has the side-effect that the absurd depends on the fact that the affected person recognizes it. For example, people who fail to apprehend the arbitrariness or the conflict would not be affected.[1][2][14]
According to some researchers, a central aspect of the absurd is that the agent isawareof the existence of the corresponding conflict. This means that the person is conscious both of the seriousness they invest and of how it seems misplaced in an arbitrary world.[2][14]It also implies that other entities that lack this form of consciousness, like non-organic matter or lower life forms, are not absurd and are not faced with this particular problem.[2]Some theorists also emphasize that the conflict remains despite the individual's awareness of it, i.e. that the individual continues to care about their everyday concerns despite their impression that, on the large scale, these concerns are meaningless.[4]Defenders of themetacognitivecomponent have argued that it manages to explain why absurdity is primarily ascribed to human aspirations but not to lower animals: because they lack this metacognitive awareness. However, other researchers reject the metacognitive requirement based on the fact that it would severely limit the scope of the absurd to only those possibly few individuals who clearly recognize the contradiction while sparing the rest. Thus, opponents have argued that not recognizing the conflict is just as absurd as consciously living through it.[1][2][14]
Various popular arguments are often cited in favor of absurdism. Some focus on the future by pointing out that nothing we do today will matter in a million years.[2][14]A similar line of argument points to the fact that our lives are insignificant because of how small they are in relation to the universe as a whole, both concerning their spatial and their temporal dimensions. The thesis of absurdism is also sometimes based on the problem ofdeath, i.e. that there is no final end for us to pursue since we are all going to die.[2][20]In this sense, death is said to destroy all our hard-earned achievements like career, wealth, or knowledge. This argument is mitigated to some extent by the fact that we may have positive or negative effects on the lives of other people as well. But this does not fully solve the issue since the same problem, i.e. the lack of an ultimate end, applies to their lives as well.[2]Thomas Nagelhas objected to these lines of argument based on the claim that they are circular: they assume rather than establish that life is absurd. For example, the claim that our actions today will not matter in a million years does not directly imply that they do not matter today. And similarly, the fact that a process does not reach a meaningful ultimate goal does not entail that the process as a whole is worthless since some parts of the process may contain their justification without depending on a justification external to them.[2][14]
Another argument proceeds indirectly by pointing out how various great thinkers have obvious irrational elements in their systems of thought. These purported mistakes of reason are then taken as signs of absurdism that were meant to hide or avoid it.[12][21]From this perspective, the tendency to posit the existence of a benevolent God may be seen as a form ofdefense mechanismorwishful thinkingto avoid an unsettling and inconvenient truth.[12]This is closely related to the idea that humans have an inborn desire for meaning and purpose, which is dwarfed by a meaningless and indifferent universe.[22][23][24]For example,René Descartesaims to build a philosophical system based on the absolute certainty of the "I think, therefore I am" just to introduce without a proper justification the existence of a benevolent and non-deceiving God in a later step in order to ensure that we can know about the external world.[12][25]A similar problematic step is taken byJohn Locke, who accepts the existence of a God beyondsensory experience, despite his strictempiricism, which demands that all knowledge be based on sensory experience.[12][26]
Other theorists argue in favor of absurdism based on the claim that meaning isrelational. In this sense, for something to be meaningful, it has to stand in relation to something else that is meaningful.[4][21]For example, a word is meaningful because of its relation to a language or someone's life could be meaningful because this person dedicates their efforts to a higher meaningful project, like serving God or fighting poverty. An important consequence of this characterization of meaning is that it threatens to lead to aninfinite regress:[4][21]at each step, something is meaningful because something else is meaningful, which in its turn has meaning only because it is related to yet another meaningful thing, and so on.[27][28]This infinite chain and the corresponding absurdity could be avoided if some things had intrinsic or ultimate meaning, i.e. if their meaning did not depend on the meaning of something else.[4][21]For example, if things on the large scale, like God or fighting poverty, had meaning, then our everyday engagements could be meaningful by standing in the right relation to them. However, if these wider contexts themselves lack meaning then they are unable to act as sources of meaning for other things. This would lead to the absurd when understood as the conflict between the impression that our everyday engagements are meaningful even though they lack meaning because they do not stand in a relation to something else that is meaningful.[4]
Another argument for absurdism is based on the attempt of assessing standards of what matters and why it matters. It has been argued that the only way to answer such a question is in reference to these standards themselves. This means that, in the end, it depends only on us, that "what seems to us important or serious or valuable would not seem so if we were differently constituted". The circularity and groundlessness of these standards themselves are then used to argue for absurdism.[2][14]
The most common criticism of absurdism is to argue that life in fact has meaning.Supernaturalistarguments to this effect are based on the claim that God exists and acts as the source of meaning. Naturalist arguments, on the other hand, contend that various sources of meaning can be found in the natural world without recourse to a supernatural realm. Some of them hold that meaning is subjective. On this view, whether a given thing is meaningful varies from person to person based on their subjective attitude toward this thing. Others find meaning in external values, for example, inmorality, knowledge, orbeauty. All these different positions have in common that they affirm the existence of meaning, in contrast to absurdism.[29][30][21]
Another criticism of absurdism focuses on its negative attitude toward moral values. In the absurdist literature, the moral dimension is sometimes outright denied, for example, by holding that value judgments are to be discarded or that the rejection of God implies the rejection of moral values.[3]On this view, absurdism brings with it a highly controversial form ofmoral nihilism. This means that there is a lack, not just of a higher purpose in life, but also of moral values. These two sides can be linked by the idea that without a higher purpose, nothing is worth pursuing that could give one's life meaning. This worthlessness seems to apply to morally relevant actions equally as to other issues.[3][8]In this sense, "[b]elief in the meaning of life always implies a scale of values" while "[b]elief in the absurd ... teaches the contrary".[31]Various objections to such a position have been presented, for example, that it violatescommon senseor that it leads to numerous radical consequences, like that no one is ever guilty of any blameworthy behavior or that there are no ethical rules.[3][32]
But this negative attitude toward moral values is not always consistently maintained by absurdists and some of the suggested responses on how to deal with the absurd seem to explicitly defend the existence of moral values.[3][20][33]Due to this ambiguity, other critics of absurdism have objected to it based on its inconsistency.[3]The moral values defended by absurdists often overlap with the ethical outlook ofexistentialismand include traits likesincerity,authenticity, andcourageasvirtues.[34][35]In this sense, absurdists often argue that it matters how the agent faces the absurdity of their situation and that the response should exemplify these virtues. This aspect is particularly prominent in the idea that the agent should rebel against the absurd and live their life authentically as a form of passionate revolt.[3][12][10]
Some see the latter position as inconsistent with the idea that there is no meaning in life: if nothing matters then it should also not matter how we respond to this fact.[3][2][1][4]So absurdists seem to be committed both to the claim that moral values exist and that they do not exist. Defenders of absurdism have tried to resist this line of argument by contending that, in contrast to other responses, it remains true to the basic insight of absurdism and the "logic of the absurd" by acknowledging the existence of the absurd instead of denying it.[3][36]But this defense is not always accepted. One of its shortcomings seems to be that it commits theis-ought fallacy: absurdism presents itself as a descriptive claim about the existence and nature of the absurd but then goes on to posit various normative claims.[3][37]Another defense of absurdism consists in weakening the claims about how one should respond to the absurd and which virtues such a response should exemplify. On this view, absurdism may be understood as a form ofself-helpthat merely provides prudential advice. Such prudential advice may be helpful to certain people without pretending to have the status of universally valid moral values or categorical normative judgments. So the value of the prudential advice may merely be relative to the interests of some people but not valuable in a more general sense. This way, absurdists have tried to resolve the apparent inconsistency in their position.[3]
According to absurdism, life in general is absurd: the absurd is not just limited to a few specific cases. Nonetheless, some cases are more paradigmatic examples than others. The Myth of Sisyphus is often treated as a key example of the absurd.[10][3] In it, Zeus punishes King Sisyphus by compelling him to roll a massive boulder up a hill. Whenever the boulder reaches the top, it rolls down again, thereby forcing Sisyphus to repeat the same task all over again throughout eternity. This story may be seen as an absurdist parable for the hopelessness and futility of human life in general: just like Sisyphus, humans in general are condemned to toil day in and day out in the attempt to fulfill pointless tasks, which will be replaced by new pointless tasks once they are completed. It has been argued that a central aspect of Sisyphus' situation is not just the futility of his labor but also his awareness of the futility.[10][38][3]
Another example of the absurdist aspect of the human condition is given inFranz Kafka'sThe Trial.[39][40]In it, the protagonist Josef K. is arrested and prosecuted by an inaccessible authority even though he is convinced that he has done nothing wrong. Throughout the story, he desperately tries to discover what crimes he is accused of and how to defend himself. But in the end, he lets go of his futile attempts and submits to his execution without ever finding out what he was accused of. The absurd nature of the world is exemplified by the mysterious and impenetrable functioning of the judicial system, which seems indifferent to Josef K. and resists all of his attempts of making sense of it.[41][39][40]
Philosophers of absurdism often complain that the topic of the absurd does not receive the attention of professional philosophers it merits, especially when compared to other perennial philosophical areas of inquiry. It has been argued, for example, that this can be seen in the tendency of various philosophers throughout the ages to include the epistemically dubitable existence of God in their philosophical systems as a source of ultimate explanation of the mysteries of existence. In that regard, this tendency may be seen as a form of defense mechanism or wishful thinking constituting a side-effect of the unacknowledged and ignored importance of the absurd.[12][21]While some discussions of absurdism happen explicitly in the philosophical literature, it is often presented in a less explicit manner in the form of novels or plays. These presentations usually happen by telling stories that exemplify some of the key aspects of absurdism even though they may not explicitly discuss the topic.[10][3]
It has been argued that acknowledging the existence of the absurd has important consequences for epistemology, especially in relation to philosophy but also when applied more widely to other fields.[12][10]The reason for this is that acknowledging the absurd includes becoming aware of human cognitive limitations and may lead to a form of epistemic humbleness.[12]
The impression that life is absurd may in some cases have serious psychological consequences like triggering an existential crisis. In this regard, an awareness both of absurdism itself and the possible responses to it can be central to avoiding or resolving such consequences.[3][15][14]
... in spite of or in defiance of the whole of existence he wills to be himself with it, to take it along, almost defying his torment. For to hope in the possibility of help, not to speak of help by virtue of the absurd, that for God all things are possible—no, that he will not do. And as for seeking help from any other—no, that he will not do for all the world; rather than seek help he would prefer to be himself—with all the tortures of hell, if so it must be.
Most researchers argue that the basic conflict posed by the absurd cannot be truly resolved. This means that any attempt to do so is bound to fail even though their protagonists may not be aware of their failure. On this view, there are still several possible responses, some better than others, but none able to solve the fundamental conflict. Traditional absurdism, as exemplified byAlbert Camus, holds that there are three possible responses to absurdism:suicide,religious belief, or revolting against the absurd.[10][3]Later researchers have suggested more ways of responding to absurdism.[2][4][14]
A very blunt and simple response, though quite radical, is to commit suicide.[13]According to Camus, for example, the problem of suicide is the only "really serious philosophical problem". It consists in seeking an answer to the question "Should I kill myself?".[20]This response is motivated by the insight that, no matter how hard the agent tries, they may never reach their goal of leading a meaningful life, which can then justify the rejection of continuing to live at all.[3]Most researchers acknowledge that this is one form of response to the absurd but reject it due to its radical and irreversible nature and argue instead for a different approach.[13][20]
One such alternative response to the apparent absurdity of life is to assume that there is some higher ultimate purpose in which the individual may participate, like service to society,progressof history, or God's glory.[2][3][13]While the individual may only play a small part in the realization of this overarching purpose, it may still act as a source of meaning. This way, the individual may find meaning and thereby escape the absurd. One serious issue with this approach is that the problem of absurdity applies to this alleged higher purpose as well. So just like the aims of a single individual life can be put into doubt, this applies equally to a larger purpose shared by many.[4][21]And if this purpose is itself absurd, it fails to act as a source of meaning for the individual participating in it. Camus identifies this response as a form of suicide as well, pertaining not to the physical but to the philosophical level. It is a philosophical suicide in the sense that the individual just assumes that the chosen higher purpose is meaningful and thereby fails to reflect on its absurdity.[2][3]
Traditional absurdists usually reject both physical and philosophical suicide as the recommended response to the absurd, usually with the argument that both these responses constitute some form of escape that fails to face the absurd for what it is. Despite the gravity and inevitability of the absurd, they recommend that we should face it directly, i.e. not escape from it by retreating into the illusion of false hope or by ending one's life.[12][10][1]In this sense, accepting the reality of the absurd means rejecting any hopes for a happyafterlifefree of those contradictions.[10][2]Instead, the individual should acknowledge the absurd and engage in a rebellion against it.[12][10][1]Such a revolt usually exemplifies certain virtues closely related toexistentialism, like the affirmation of one'sfreedomin the face of adversity as well as acceptingresponsibilityand defining one's ownessence.[12][3]An important aspect of this lifestyle is that life is lived passionately and intensely by inviting and seeking newexperiences. Such a lifestyle might be exemplified by anactor, a conqueror, or aseduction artistwho is constantly on the lookout for new roles, conquests, or attractive people despite their awareness of the absurdity of these enterprises.[10][43]Another aspect lies increativity, i.e. that the agent sees themselves as and acts as the creator of their own works and paths in life. This constitutes a form of rebellion in the sense that the agent remains aware of the absurdity of the world and their part in it but keeps on opposing it instead of resigning and admitting defeat.[10]But this response does not solve the problem of the absurd at its core: even a life dedicated to the rebellion against the absurd is itself still absurd.[2][1]Defenders of the rebellious response to absurdism have pointed out that, despite its possible shortcomings, it has one important advantage over many of its alternatives: it manages to accept the absurd for what it is without denying it by rejecting that it exists or by stopping one's own existence. Some even hold that it is the only philosophically coherent response to the absurd.[3]
While these three responses are the most prominent ones in the traditional absurdist literature, various other responses have also been suggested. Instead of rebellion, for example, absurdism may also lead to a form ofirony. This irony is not sufficient to escape the absurdity of life altogether, but it may mitigate it to some extent by distancing oneself to some degree from the seriousness of life.[2][1][4][14]According toThomas Nagel, there may be, at least theoretically, two responses to actually resolving the problem of the absurd. This is based on the idea that the absurd arises from the consciousness of a conflict between two aspects of human life: that humans care about various things and that the world seems arbitrary and does not merit this concern.[4][2][14]The absurd would not arise if either of the conflicting elements would cease to exist, i.e. if the individual would stop caring about things, as someEastern religionsseem to suggest, or if one could find something that possesses a non-arbitrary meaning that merits the concern. For theorists who give importance to theconsciousnessof this conflict for the absurd, a further option presents itself: to remain ignorant of it to the extent that this is possible.[4][2][14]
Other theorists hold that a proper response to the absurd may neither be possible nor necessary, that it just remains one of the basic aspects of life no matter how it is confronted. This lack of response may be justified through the thesis of absurdism itself: if nothing really matters on the grand scale, then this applies equally to human responses toward this fact. From this perspective, the passionate rebellion against an apparently trivial or unimportant state of affairs seems less like a heroic quest and more like afool's errand.[2][1][4]Jeffrey Gordon has objected to this criticism based on the claim that there is a difference between absurdity and lack of importance. So even if life as a whole is absurd, some facts about life may still be more important than others and the fact that life as a whole is absurd would be a good candidate for the more important facts.[1]
Absurdism has its origins in the work of the 19th-century Danish philosopher Søren Kierkegaard, who chose to confront the crisis that humans face with the Absurd by developing his own existentialist philosophy.[44] Absurdism as a belief system was born of the European existentialist movement that ensued, specifically when Camus rejected certain aspects of that philosophical line of thought[45] and published his essay The Myth of Sisyphus. The aftermath of World War II provided the social environment that stimulated absurdist views and allowed for their popular development, especially in the devastated country of France. Foucault viewed Shakespearean theater as a precursor of absurdism.[46]
An idea very close to the concept of the absurd is due to Immanuel Kant, who distinguishes between phenomena and noumena.[12] This distinction refers to the gap between how things appear to us and what they are like in themselves. For example, according to Kant, space and time are dimensions belonging to the realm of phenomena, since this is how sensory impressions are organized by the mind, but they may not be found on the level of noumena.[47][48] The concept of the absurd corresponds to the thesis that there is such a gap and that human limitations may prevent the mind from ever truly grasping reality, i.e. that reality in this sense remains absurd to the mind.[12]
A century before Camus, the 19th-century Danish philosopher Søren Kierkegaard wrote extensively about the absurdity of the world. In his journals, Kierkegaard writes about the absurd:
What is the Absurd? It is, as may quite easily be seen, that I, a rational being, must act in a case where my reason, my powers of reflection, tell me: you can just as well do the one thing as the other, that is to say where my reason and reflection say: you cannot act and yet here is where I have to act... The Absurd, or to act by virtue of the absurd, is to act upon faith ... I must act, but reflection has closed the road so I take one of the possibilities and say: This is what I do, I cannot do otherwise because I am brought to a standstill by my powers of reflection.[50]
Here is another example of the Absurd from his writings:
What, then, is the absurd? The absurd is that the eternal truth has come into existence in time, that God has come into existence, has been born, has grown up, etc., has come into existence exactly as an individual human being, indistinguishable from any other human being, in as much as all immediate recognizability is pre-Socratic paganism and from the Jewish point of view is idolatry.
How can this absurdity be held or believed? Kierkegaard says:
I gladly undertake, by way of brief repetition, to emphasize what other pseudonyms have emphasized. The absurd is not the absurd or absurdities without any distinction (wherefore Johannes de Silentio: "How many of our age understand what the absurd is?"). The absurd is a category, and the most developed thought is required to define the Christian absurd accurately and with conceptual correctness. The absurd is a category, the negative criterion, of the divine or of the relationship to the divine. When the believer has faith, the absurd is not the absurd—faith transforms it, but in every weak moment it is again more or less absurd to him. The passion of faith is the only thing which masters the absurd—if not, then faith is not faith in the strictest sense, but a kind of knowledge. The absurd terminates negatively before the sphere of faith, which is a sphere by itself. To a third person the believer relates himself by virtue of the absurd; so must a third person judge, for a third person does not have the passion of faith. Johannes de Silentio has never claimed to be a believer; just the opposite, he has explained that he is not a believer—in order to illuminate faith negatively.
Kierkegaard provides an example inFear and Trembling(1843), which was published under the pseudonymJohannes de Silentio. In the story ofAbrahamin theBook of Genesis, Abraham is told byGodtokill his sonIsaac. Just as Abraham is about to kill Isaac, an angel stops Abraham from doing so. Kierkegaard believes that through virtue of the absurd, Abraham, defying all reason and ethical duties ("you cannot act"), got back his son and reaffirmed his faith ("where I have to act").[52]
Another instance of absurdist themes in Kierkegaard's work appears in The Sickness Unto Death, which Kierkegaard signed with the pseudonym Anti-Climacus. Exploring the forms of despair, Kierkegaard examines the type of despair known as defiance.[53] In the quotation reproduced above, Kierkegaard describes how such a man would endure such a defiance and identifies the three major traits of the Absurd Man, later discussed by Albert Camus: a rejection of escaping existence (suicide), a rejection of help from a higher power, and acceptance of his absurd (and despairing) condition.
According to Kierkegaard in his autobiographyThe Point of View of My Work as an Author, most of his pseudonymous writings are not necessarily reflective of his own opinions. Nevertheless, his work anticipated many absurdist themes and provided its theoretical background.
The philosophy of Albert Camus, or more precisely the "Camusian absurd" (French: l'absurde camusien), refers to the absurdism found in the work and philosophical thought of the French writer Albert Camus. This philosophy is influenced by the author's political, libertarian, social, and ecological ideas, and is inspired by earlier philosophical currents, such as Greek philosophy, nihilism, Nietzschean thought, and existentialism. It revolves around three major cycles: "the absurd" (l'absurde), "the revolt" (la révolte), and "love" (l'amour). Each cycle is linked to a Greek myth (Sisyphus, Prometheus, Nemesis) and explores specific themes and objects; the common thread is the solitude and despair of the human being, constantly driven by the tireless search for the meaning of the world and of life.
I had a precise plan when I started my work: I wanted to first express negation. In three forms. Romanesque: it wasThe Stranger. Drama:CaligulaandThe Misunderstanding. Ideological:The Myth of Sisyphus. I wouldn't have been able to talk about it if I hadn't experienced it; I have no imagination. But for me it was, if you like,the methodical doubtof Descartes. I knew that we cannot live in negation and I announced it in the preface to the Myth of Sisyphus; I anticipated the positive in all three forms again. Romance:The Plague. Drama:The State of SiegeandThe Righteous. Ideological:The Rebel. I already saw a third layer around the theme of love. These are the projects I have in progress
The cycle of the absurd, or negation, primarily addresses suicide and the human condition. It is expressed through four of Camus's works: the novel The Stranger and the essay The Myth of Sisyphus (1942), then the plays Caligula and The Misunderstanding (1944). By refusing the refuge of belief, the human being becomes aware that his existence revolves around repetitive and meaningless acts. The certainty of death only reinforces, according to the writer, the feeling of the uselessness of all existence. The absurd is therefore the feeling that arises when man is confronted with the absence of meaning in the face of the universe, the painful realization of his separation from the world. The question then arises of the legitimacy of suicide.
The cycle of revolt, called the positive, is a direct response to the absurd and is also expressed through four of his works: the novel The Plague (1947), the plays The State of Siege (1948) and The Just Assassins (1949), then the essay The Rebel (1951). A positive concept of the affirmation of the individual, in which only action and commitment count in the face of the tragedy of the world, revolt is for the writer the way of experiencing the absurd: knowing our fatal destiny and nevertheless facing it: "Man refuses the world as it is, without agreeing to escape it." It is intelligence grappling with the "unreasonable silence of the world". Depriving ourselves of eternal life frees us from the constraints imposed by an improbable future; man gains freedom of action, lucidity, and dignity.
The philosophy of Camus therefore has as its end point a singular humanism. Advancing a message of lucidity, resilience, and emancipation in the face of the absurdity of life, it encourages people to create their own meanings through personal choices and commitments, and to embrace their freedom to the fullest. Camus affirms that, even in the absurd, there is room for passion and rebellion; and although the universe may be indifferent to our search for meaning, this search is in itself meaningful. In The Myth of Sisyphus, despite his absurd destiny, Sisyphus finds a form of liberation in his incessant work: "one must imagine Sisyphus happy". With the cycle of love and the "midday thought" (French: la pensée de midi), the philosophy of the absurd is completed by a principle of measure and pleasure, close to Epicureanism.
Though the notion of the 'absurd' pervades allAlbert Camus's writing,The Myth of Sisyphusis his chief work on the subject. In it, Camus considers absurdity as a confrontation, an opposition, a conflict or a "divorce" between two ideals. Specifically, he defines the human condition as absurd, as the confrontation between man's desire for significance, meaning and clarity on the one hand—and the silent, cold universe on the other. He continues that there are specific human experiences evoking notions of absurdity. Such a realization or encounter with the absurd leaves the individual with a choice:suicide, aleap of faith, or recognition. He concludes that recognition is the only defensible option.[57]
For Camus, suicide is a "confession" that life is not worth living; it is a choice that implicitly declares that life is "too much." Suicide offers the most basic "way out" of absurdity: the immediate termination of the self and its place in the universe.
The absurd encounter can also arouse a "leap of faith," a term derived from one of Kierkegaard's early pseudonyms,Johannes de Silentio(although the term was not used by Kierkegaard himself),[58]where one believes that there is more than the rational life (aesthetic or ethical). To take a "leap of faith," one must act with the "virtue of the absurd" (asJohannes de Silentioput it), where a suspension of the ethical may need to exist. This faith has no expectations, but is a flexible power initiated by a recognition of the absurd. Camus states that because the leap of faith escapes rationality and defers to abstraction over personal experience, the leap of faith is not absurd. Camus considers the leap of faith as "philosophical suicide," rejecting both this and physical suicide.[58][59]
Lastly, a person can choose to embrace the absurd condition. According to Camus, one's freedom—and the opportunity to give life meaning—lies in the recognition of absurdity. If the absurd experience is truly the realization that the universe is fundamentally devoid of absolutes, then we as individuals are truly free. "To live without appeal,"[60]as he puts it, is a philosophical move to define absolutes and universals subjectively, rather than objectively. The freedom of man is thus established in one's natural ability and opportunity to create their own meaning and purpose; to decide (or think) for oneself. The individual becomes the most precious unit of existence, representing a set of unique ideals that can be characterized as an entire universe in its own right. In acknowledging the absurdity of seeking any inherent meaning, but continuing this search regardless, one can be happy, gradually developing meaning from the search alone."Happiness and the absurd are two sons of the same earth. They are inseparable."[61]
Camus states inThe Myth of Sisyphus: "Thus I draw from the absurd three consequences, which are my revolt, my freedom, and my passion. By the mere activity of consciousness I transform into a rule of life what was an invitation to death, and I refuse suicide."[62]"Revolt" here refers to the refusal of suicide and search for meaning despite the revelation of the Absurd; "Freedom" refers to the lack of imprisonment by religious devotion or others' moral codes; "Passion" refers to the most wholehearted experiencing of life, since hope has been rejected, and so he concludes that every moment must be lived fully.
Absurdism originated from (as well as alongside) the 20th-century strains ofexistentialismandnihilism; it shares some prominent starting points with both, though also entails conclusions that are uniquely distinct from these other schools of thought. All three arose from the human experience of anguish and confusion stemming from existence: the apparent meaninglessness of a world in which humans, nevertheless, are compelled to find or create meaning.[63]The three schools of thought diverge from there. Existentialists have generally advocated the individual's construction of their own meaning in life as well as thefree willof the individual. Nihilists, on the contrary, contend that "it is futile to seek or to affirm meaning where none can be found."[64]Absurdists, following Camus' formulation, hesitantly allow the possibility for some meaning or value in life, but are neither as certain as existentialists are about the value of one's own constructed meaning nor as nihilists are about the total inability to create meaning. Absurdists following Camus also devalue or outright reject free will, encouraging merely that the individual live defiantly and authenticallyin spite ofthe psychological tension of the Absurd.[65]
Camus himself passionately worked to counternihilism, as he explained in his essay "The Rebel", while he also categorically rejected the label of "existentialist" in his essay "Enigma" and in the compilationThe Lyrical and Critical Essays of Albert Camus, though he was, and still is, often broadly characterized by others as an existentialist.[66]Both existentialism and absurdism entail consideration of the practical applications of becoming conscious of the truth ofexistential nihilism: i.e., how a driven seeker of meaning should act when suddenly confronted with the seeming concealment, or downright absence, of meaning in the universe.
While absurdism can be seen as a kind of response to existentialism, it can be debated exactly how substantively the two positions differ from each other. The existentialist, after all, does not deny the reality of death. But the absurdist seems to reaffirm the way in which death ultimately nullifies our meaning-making activities, a conclusion the existentialists seem to resist through various notions of posterity or, inSartre's case, participation in a grand humanist project.[67]
The basic problem of absurdism is usually not encountered through a dispassionate philosophical inquiry but as the manifestation of anexistential crisis.[15][3][14]Existential crises are inner conflicts in which the individual wrestles with the impression that life lacksmeaning. They are accompanied by various negativeexperiences, such asstress,anxiety, despair, anddepression, which can disturb the individual's normal functioning in everyday life.[22][23][24]In this sense, the conflict underlying the absurdist perspective poses a psychological challenge to the affected. This challenge is due to the impression that the agent's vigorous daily engagement stands in incongruity with its apparent insignificance encountered through philosophical reflection.[15]Realizing this incongruity is usually not a pleasant occurrence and may lead to estrangement, alienation, and hopelessness.[68][14]The intimate relation to psychological crises is also manifested in the problem of finding the right response to this unwelcome conflict, for example, by denying it, by taking life less seriously, or by revolting against the absurd.[15]But accepting the position of absurdism may also have certain positive psychological effects. In this sense, it can help the individual achieve a certain psychological distance from unexamined dogmas and thus help them evaluate their situation from a more encompassing and objective perspective. However, it brings with it the danger of leveling all significant differences and thereby making it difficult for the individual to decide what to do or how to live their life.[8]
It has been argued that absurdism in the practical domain resemblesepistemological skepticismin the theoretical domain.[2][12]In the case of epistemology, we usually take for granted our knowledge of the world around us even though, whenmethodological doubtis applied, it turns out that this knowledge is not as unshakable as initially assumed.[69]For example, the agent may decide to trust their perception that the sun is shining but its reliability depends on the assumption that the agent is not dreaming, which they would not know even if they were dreaming. In a similar sense in the practical domain, the agent may decide to take aspirin in order to avoid a headache even though they may be unable to give a reason for why they should be concerned with their ownwellbeingat all.[2]In both cases, the agent goes ahead with a form of unsupported natural confidence and takes life largely for granted despite the fact that their power to justify is only limited to a rather small range and fails when applied to the larger context, on which the small range depends.[2][14]
It has been argued that absurdism is opposed to various fundamental principles and assumptions guidingeducation, like the importance oftruthand of fostering rationality in the students.[8]
|
https://en.wikipedia.org/wiki/Absurdism
|
A virtual file system (VFS) or virtual filesystem switch is an abstract layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, classic Mac OS/macOS, and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing.
A VFS specifies an interface (or a "contract") between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might change incompatibly from release to release, which would require that concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system. Alternatively, the supplier of the operating system might make only backward-compatible changes to the contract, so that concrete file system support built for a given release of the operating system would work with future versions of the operating system.
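As a rough illustration of such a contract, the following Python sketch defines an abstract file system interface and two interchangeable concrete implementations. The class and method names are invented for the example and do not correspond to any actual kernel's VFS API; the point is only that client code programs against the abstract interface and never needs to know which backend it is using.

# Minimal sketch of a VFS-style "contract" with two concrete backends.
from abc import ABC, abstractmethod
import os

class FileSystem(ABC):
    """The 'contract' every concrete file system must fulfil."""
    @abstractmethod
    def read(self, path: str) -> bytes: ...
    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...
    @abstractmethod
    def listdir(self, path: str) -> list[str]: ...

class InMemoryFS(FileSystem):
    """A trivial concrete implementation backed by a dict."""
    def __init__(self):
        self._files: dict[str, bytes] = {}
    def read(self, path): return self._files[path]
    def write(self, path, data): self._files[path] = data
    def listdir(self, path):
        prefix = path.rstrip("/") + "/"
        return [p for p in self._files if p.startswith(prefix)]

class LocalFS(FileSystem):
    """A concrete implementation delegating to the host file system."""
    def read(self, path):
        with open(path, "rb") as f:
            return f.read()
    def write(self, path, data):
        with open(path, "wb") as f:
            f.write(data)
    def listdir(self, path):
        return os.listdir(path)

def copy(src_fs: FileSystem, src: str, dst_fs: FileSystem, dst: str) -> None:
    # Client code sees only the abstract interface, not the backend type.
    dst_fs.write(dst, src_fs.read(src))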
One of the first virtual file system mechanisms on Unix-like systems was introduced by Sun Microsystems in SunOS 2.0 in 1985.[2] It allowed Unix system calls to access local UFS file systems and remote NFS file systems transparently. For this reason, Unix vendors who licensed the NFS code from Sun often copied the design of Sun's VFS. Other file systems could be plugged into it as well: there was an implementation of the MS-DOS FAT file system developed at Sun that plugged into the SunOS VFS, although it wasn't shipped as a product until SunOS 4.1. The SunOS implementation was the basis of the VFS mechanism in System V Release 4.
John Heidemann developed a stacking VFS under SunOS 4.0 for the experimental Ficus file system. This design provided for code reuse among file system types with differing but similar semantics (e.g., an encrypting file system could reuse all of the naming and storage-management code of a non-encrypting file system). Heidemann adapted this work for use in 4.4BSD as a part of his thesis research; descendants of this code underpin the file system implementations in modern BSD derivatives, including macOS.
Other Unix virtual file systems include the File System Switch in System V Release 3, the Generic File System in Ultrix, and the VFS in Linux. In OS/2 and Microsoft Windows, the virtual file system mechanism is called the Installable File System.
The Filesystem in Userspace (FUSE) mechanism allows userland code to plug into the virtual file system mechanism in Linux, NetBSD, FreeBSD, OpenSolaris, and macOS.
In Microsoft Windows, virtual filesystems can also be implemented through userland Shell namespace extensions; however, they do not support the lowest-level file system access application programming interfaces in Windows, so not all applications will be able to access file systems that are implemented as namespace extensions. KIO and GVfs/GIO provide similar mechanisms in the KDE and GNOME desktop environments (respectively), with similar limitations, although they can be made to use FUSE techniques and therefore integrate smoothly into the system.
Sometimes "virtual file system" refers to a file or a group of files (not necessarily inside a concrete file system) that acts as a manageable container providing the functionality of a concrete file system through software. Examples of such containers are CBFS Storage, or the single-file virtual file systems used in emulators such as PCTask and WinUAE and in virtualization software such as Oracle's VirtualBox, Microsoft's Virtual PC, and VMware.
The primary benefit for this type of file system is that it is centralized and easy to remove. A single-file virtual file system may include all the basic features expected of any file system (virtual or otherwise), but access to the internal structure of these file systems is often limited to programs specifically written to make use of the single-file virtual file system (instead of implementation through a driver allowing universal access). Another major drawback is that performance is relatively low when compared to other virtual file systems. Low performance is mostly due to the cost of shuffling virtual files when data is written or deleted from the virtual file system.
Direct examples of single-file virtual file systems include emulators, such as PCTask and WinUAE, which encapsulate not only the filesystem data but also emulated disk layout. This makes it easy to treat an OS installation like any other piece of software—transferring it with removable media or over the network.
The Amiga emulator PCTask emulated an Intel 8088-based PC clocked at 4.77 MHz (and later an 80486SX clocked at 25 MHz). Users of PCTask could create a large file on the Amiga filesystem, and this file would be accessed from the emulator as if it were a real PC hard disk. The file could be formatted with the FAT16 filesystem to store normal MS-DOS or Windows files.[1][2]
TheUAEforWindows,WinUAE, allows for large single files on Windows to be treated as Amiga file systems. In WinUAE this file is called ahardfile.[3]
UAE could also treat a directory on the host filesystem (Windows,Linux,macOS,AmigaOS) as an Amiga filesystem.[4]
|
https://en.wikipedia.org/wiki/Virtual_file_system
|
Password synchronizationis a process, usually supported by software such aspassword managers, through which a user maintains a single password across multipleIT systems.[1]
Provided that all the systems enforce mutually-compatible password standards (e.g. concerning minimum and maximum password length, supported characters, etc.), the user can choose a new password at any time and deploy the same password on his or her own login accounts across multiple, linked systems.
Where different systems have mutually incompatible standards regarding what can be stored in a password field, the user may be forced to choose more than one (but still fewer than the number of systems) passwords. This may happen, for example, where the maximum password length on one system is shorter than the minimum length in another, or where one system requires use of a punctuation mark but another forbids it.
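The effect of mutually incompatible rules can be illustrated with a small policy check. The three policies below are invented examples, not descriptions of real products:

```python
import string

# Hypothetical per-system password policies.
policies = {
    "mainframe":  {"min": 6,  "max": 8,   "punctuation": "forbidden"},
    "directory":  {"min": 10, "max": 64,  "punctuation": "required"},
    "web portal": {"min": 8,  "max": 128, "punctuation": "allowed"},
}

def violations(password: str):
    """Return every rule a candidate synchronized password would break."""
    problems = []
    has_punct = any(c in string.punctuation for c in password)
    for name, p in policies.items():
        if not (p["min"] <= len(password) <= p["max"]):
            problems.append(f"{name}: length must be {p['min']}-{p['max']}")
        if p["punctuation"] == "required" and not has_punct:
            problems.append(f"{name}: punctuation required")
        if p["punctuation"] == "forbidden" and has_punct:
            problems.append(f"{name}: punctuation forbidden")
    return problems

# With these policies no single password can satisfy every system (max 8 vs. min 10,
# punctuation forbidden vs. required), so the user needs at least two passwords.
print(violations("Tr0ub4dor&3"))
```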
Password synchronization is a function of certainidentity managementsystems and it is considered easier to implement thanenterprise single sign-on (SSO), as there is normally no client software deployment or need for active user enrollment.[1]
Password synchronization makes it easier for IT users to recall passwords and so manage their access to multiple systems, for example on an enterprise network.[1]Since they only have to remember one or at most a few passwords, users are less likely to forget them or write them down, resulting in fewer calls to the IT Help Desk and less opportunity for coworkers, intruders or thieves to gain improper access. Through suitable security awareness, automated policy enforcement and training activities, users can be encouraged or forced to choosestronger passwordsas they have fewer to remember.
If the single, synchronized password is compromised (for example, if it is guessed, disclosed, determined bycryptanalysisfrom one of the systems, intercepted on an insecure communications path, or if the user is socially engineered into resetting it to a known value), all the systems that share that password are vulnerable to improper access. In mostsingle sign-onand password vault solutions, compromise of the primary or master password (in other words, the password used to unlock access to the individual unique passwords used on other systems) also compromises all the associated systems, so the two approaches are similar.
Depending on the software used, password synchronization may be triggered by a password change on any one of the synchronized systems (whether initiated by the user or an administrator) and/or by the user initiating the change centrally through the software, perhaps through a web interface.
Some password synchronization systems may copy password hashes from one system to another, where the hashing algorithm is the same. In general, this is not the case and access to a plaintext password is required.
|
https://en.wikipedia.org/wiki/Password_synchronization
|
TheEnglishpronounsform a relatively smallcategory of wordsinModern Englishwhose primarysemanticfunction is that of apro-formfor anoun phrase.[1]Traditional grammarsconsider them to be a distinct part of speech, while mostmodern grammarssee them as a subcategory ofnoun, contrasting withcommon and proper nouns.[2]: 22Still others see them as a subcategory ofdeterminer(see theDP hypothesis). In this article, they are treated as a subtype of the noun category.
They clearly includepersonal pronouns,relative pronouns,interrogative pronouns, andreciprocal pronouns.[3]Other types that are included by some grammars but excluded by others aredemonstrative pronounsandindefinite pronouns. Other members are disputed (see below).
Pronouns in formal modern English: the full set of pronouns (i.e. personal, relative, interrogative and reciprocal pronouns), along with the dummies it and there, whose status as pronouns is disputed. Nonstandard, informal, and archaic forms are in italics.
*Whom and which can be the object of a fronted preposition, but not of who or an omitted (Ø) pronoun: The chair on which she sat or The chair (that) she sat on, but not *The chair on that she sat.
†Except in free or fused relative constructions, in which case what, whatever or whichever is used for a thing and whoever or whomever is used for a person: What he did was clearly impossible, Whoever you married is welcome here (see below).
Pronoun is a category of words. Apro-formis not. It is a meaning relation in which a phrase "stands in" for (expresses the same content as) another where themeaningis recoverable from the context.[4]In English, pronouns mostly function as pro-forms, but there are pronouns that are not pro-forms and pro-forms that are not pronouns.[2]: 239Pronouns can be pro-forms for non-noun phrases. For example, inI fixed the bike,whichwas quite a challenge, the relative pronounwhichdoesn't stand in for "the bike". Instead, it stands in for the entire proposition "I fixed the bike", aclause, or arguably "fixing the bike", a verb phrase.
Most pronouns aredeictic:[2]: 68they have no inherentdenotation, and their meaning is always contextual. For example, the meaning ofmedepends entirely on who says it, just as the meaning ofyoudepends on who is being addressed. Pronouns are not the only deictic words though. For examplenowis deictic, but it's not a pronoun.[5]Also, dummy pronouns and interrogative pronouns are not deictic. In contrast, most noun phrases headed by common or proper nouns are not deictic. For example,a booktypically has the same denotation regardless of the situation in which it is said.
English pronouns have all of the functions of other noun phrases, including subject, object, object of a preposition, predicative complement, determinative, and adjunct, as detailed below.[2]: ch. 5
On top of this, pronouns can appear ininterrogative tags(e.g.,that's the one,isn't it?).[2]: 238These tags are formed with an auxiliary verb and a pronoun. Other nouns cannot appear in this construction. This provides justification for categorizing dummythereas a pronoun.[2]: 256
Subject pronouns are typically in nominative form (e.g., She works here.), though independent genitives are also possible (e.g., Hers is better.). In non-finite clauses, however, there is more variety, an example of form-meaning mismatch. In present participial clauses, the nominative, accusative, and dependent genitive are all possible (e.g., he/him/his having already left).[2]: 460, 467
In infinitival clauses, accusative case pronouns function as the subject (e.g., For him to leave now would be a mistake).
Object pronouns are typically in accusative form (e.g.,I sawhim.) but may also be reflexive (e.g.,She sawherself) or independent genitive (e.g.,We gotours.).
The pronoun object of a preposition is typically in the accusative form but may also be reflexive (e.g.,She sent it toherself) or independent genitive (e.g.,I hadn't heard oftheirs.). Withbut,than, andasin a very formal register, nominative is also possible (e.g.,You're taller thanme/I.)[2]: 461
A pronoun in predicative complement position is typically in the accusative form (e.g.,It'sme) but may also be reflexive (e.g.,She isn'therselftoday) or independent genitive (e.g.,It'stheirs.).
Only genitive pronouns may function as determinatives.
The most common form for adjuncts is the reflexive (e.g.,I did itmyself). Independent genitives and accusative are also possible (e.g.,Only one matters,mine/me.).
Like proper nouns, but unlike common nouns, pronouns usually resistdependents.[2]: 425They are not alwaysungrammatical, but they are quite limited in their use:
*the you[b]
*you you want to be
*new them
Personal pronouns are those that participate in the grammatical and semantic systems ofperson(1st, 2nd, & 3rd person).[2]: 1463They are called "personal" pronouns for this reason, and not because they refer to persons, though some do. They typically formdefiniteNPs.
The personal pronouns of modern standard English are presented in the table above. They areI, you, she, he, it, we, andthey, and their inflected forms.
The second-personyouforms are used with both singular and plural reference. In the Southern United States,y'all(fromyou all) is used as a plural form, and various other phrases such asyou guysare used in other places. An archaic set of second-person pronouns used for singular reference isthou, thee, thyself, thy, thine,which are still used in religious services and can be seen in older works, such as Shakespeare's—in such texts,yeand theyouset of pronouns are used for plural reference, or with singular reference as a formalV-form.[6]Youcan also be used as anindefinite pronoun, referring to a person in general (seegenericyou), compared to the more formal alternative,one(reflexiveoneself, possessiveone's).
The third-person singular forms are differentiated according to thegenderof the referent. For example,sheis used to refer to a woman, sometimes a female animal, and sometimes an object to which feminine characteristics are attributed, such as a ship, car or country. A man, and sometimes a male animal, is referred to usinghe. In other casesitcan be used. (SeeGender in English.)
The third-person formtheyis used with both plural and singularreferents. Historically,singulartheywas restricted toquantificationalconstructions such asEach employee should clean their deskand referential cases where the referent's gender was unknown.[7]However, it is increasingly used when the referent's gender is irrelevant or when the referent presents as neither man nor woman.[8]
The dependent genitive pronouns, such asmy, are used as determinatives together with nouns, as inmyold man,some ofhisfriends. The independent genitive forms likemineare used as full noun phrases (e.g.,mine is bigger than yours;this one is mine). Note also the constructiona friend of mine(meaning "someone who is my friend"). SeeEnglish possessivefor more details.
Theinterrogative pronounsarewho,whom,whose, whichandwhat(also with the suffix-ever). They are chiefly used in interrogativeclausesfor thespeech actof askingquestions.[2]: 61Whathas impersonal gender, whilewho,whomandwhosehave personal gender;[2]: 904they are used to refer to persons.Whomis the accusative form ofwho(though in most contexts this is replaced bywho), whilewhoseis the genitive form.[2]: 464For more information seewho.
All the interrogative pronouns can also be used as relative pronouns, thoughwhatis quite limited in its use;[9]see below for more details.
The mainrelative pronounsin English arewho(with its derived formswhomandwhose), andwhich.[10]
The relative pronounwhichrefers to things rather than persons, as inthe shirt, which used to be red, is faded. For persons,whois used (the man who saw me was tall). Theoblique caseform ofwhoiswhom, as inthe man whom I saw was tall, although in informalregisterswhois commonly used in place ofwhom.
The possessive form ofwhoiswhose(for example,the man whose car is missing); however the use ofwhoseis not restricted to persons (one can sayan idea whose time has come). This can be used without a head noun, as inThis is Jen, a friend ofwhoseyou've already met.
The wordthatis disputed. Traditionally, it is considered a pronoun, but modern approaches disagree. See below.
The wordwhatcan be used to form afree relative clause– one that has no antecedent and that serves as a complete noun phrase in itself, as inI like what he likes. The wordswhateverandwhichevercan be used similarly, in the role of either pronouns (whatever he likes) or determiners (whatever book he likes). When referring to persons,who(ever)(andwhom(ever)) can be used in a similar way (but not as determiners).
A generic pronoun is one with the interpretation of "a person in general". These pronouns cannot have adefiniteorspecificreferent, and they "cannot be used as ananaphorto another NP."[2]: 427The generic pronouns areone(e.g.,onecan seeoneselfin the mirror) andyou(e.g.,In Tokugawa Japan,youcouldn't leave the country), withonebeing more formal thanyou.[2]: 427
The Englishreciprocal pronounsareeach otherandone another. Although they are written with a space, they're best thought of as single words. No consistent distinction in meaning or use can be found between them. Like the reflexive pronouns, their use is limited to contexts where anantecedentprecedes it. In the case of the reciprocals, they need to appear in the same clause as the antecedent.[9]
Today, the Englishdeterminersare generally seen as a separate category of words, but they were traditionally viewed asadjectiveswhen they came before a noun (e.g.,somepeople,nobooks,eachbook) and as pronouns when they werepro-forms(e.g.,I'll havesome;I hadnone,eachof the books).[2]: 22
As pronouns,whatandwhichhave non-personal gender.[2]: 398This means they cannot be used to refer to persons;whatis thatcannot mean "who is that". But there are also determiners with the same forms. The determiners are not gendered, so they can refer to persons or non-persons (e.g.,whatgenius said that).
Relativewhichis usually a pronoun, but it can be a determiner in cases likeIt may rain, inwhichcase we won't go.Whatis almost never a relative word, but when it is, it is a pronoun (e.g.,I didn't seewhatyou took.)
Thedemonstrative pronounsthis(pluralthese), andthat(pluralthose), are a sub-type of determiner in English.[2]: 373Traditionally, they are viewed as pronouns in cases such asthese are good;I like that.
The determiners starting withsome-,any,no, andevery- and ending with-one,-body, -thing,-place(e.g.,someone,nothing) are often calledindefinite pronouns, though others consider them to be compound determiners.[2]: 423
The generic pronounsoneand thegeneric use ofyouare sometimes called indefinite. These are uncontroversial pronouns.[11]Note, however, that English has three words that share the spelling and pronunciation ofone.[2]: 426–427
The wordthereis adummy pronounin some clauses, chieflyexistential(There is no god) andpresentationalconstructions (There appeared a cat on the window sill). The dummy subject takes thenumber(singular or plural) of the logical subject (complement), hence it takes a plural verb if the complement is plural. In informal English, however, thecontractionthere'sis often used for both singular and plural.[12]
Therecan undergoinversion,Is there a test today?andNever has there been a man such as this.It can also appear without a corresponding logical subject, in short sentences andquestion tags:There wasn't a discussion, was there?
The wordtherein such sentences has sometimes been analyzed as anadverb, or as a dummypredicate, rather than as a pronoun.[13]However, its identification as a pronoun is most consistent with its behavior in inverted sentences and question tags as described above.
Because the wordtherecan also be adeicticadverb (meaning "at that place"), a sentence likeThere is a rivercould have either of two meanings: "a river exists" (withthereas a pronoun), and "a river is in that place" (withthereas an adverb). In speech, the adverbialtherewould be givenstress, while the pronoun would not – in fact, the pronoun is often pronounced as aweak form,/ðə(r)/.
These words are sometimes classified as nouns (e.g.,Tomorrowshould be a nice day), and sometimes asadverbs(I'll see youtomorrow).[14]But they are alternatively classified as pronouns in both of these examples.[2]: 429In fact, these words have most of the characteristics of pronouns (see above). In particular, they are pro-forms, and they resist most dependents (e.g.,*a good today).
Traditional grammars classifythatas a relative pronoun.[15]Most modern grammars disagree, calling it asubordinatoror acomplementizer.[2]: 63
Relativethatis normally found only inrestrictive relative clauses(unlikewhichandwho, which can be used in both restrictive and unrestrictive clauses). It can refer to either persons or things, and cannot follow a preposition. For example, one can saythe song that[orwhich]I listened to yesterday, butthe song to which[notto that]I listened yesterday. Relativethatis usually pronounced with a reduced vowel (schwa), and hence differently from the demonstrativethat(seeWeak and strong forms in English). Ifthatis not the subject of the relative clause (in the traditional view), it can be omitted (the song I listened to yesterday).
There is some confusion about the difference between a pronoun and apro-form. For example, some sources make claims such as the following:
We can useotheras a pronoun. As a pronoun,otherhas a plural form,others:
Butotheris just a common noun here. Unlike pronouns, it readily takes a determiner (manyothers) or arelative clausemodifier(othersthat we know).
Hwā ("who") and hwæt ("what") follow natural gender, not grammatical gender: as in Modern English, hwā is used with people, hwæt with things. However, that distinction only matters in the nominative and accusative cases, as they are identical in other cases.
Hwelċ ("which" or "what kind of") is inflected like an adjective, as is hwæðer, which also means "which" but is only used between two alternatives.
The first- and second-person pronouns are the same for all genders. They also have specialdual forms, which are only used for groups of two things, as in "we both" and "you two." The dual forms are common, but the ordinary plural forms can always be used instead when the meaning is clear.
Many of the forms above bear a strong resemblance to the Modern English words they eventually became. For instance, in the genitive case,ēowerbecame "your,"ūrebecame "our," andmīnbecame "my." However, the plural third-person personal pronouns were all replaced withOld Norseforms during theMiddle Englishperiod, yielding "they," "them," and "their."
Middle Englishpersonal pronounswere mostly developed fromthose of Old English, with the exception of the third-person plural, a borrowing fromOld Norse(the original Old English form clashed with the third person singular and was eventually dropped). Also, the nominative form of the feminine third-person singular was replaced by a form of thedemonstrativethat developed intosche(modernshe), but the alternativeheyrremained in some areas for a long time.
As with nouns, there was some inflectional simplification (the distinct Old Englishdualforms were lost), but pronouns, unlike nouns, retained distinct nominative and accusative forms. Third-person pronouns also retained a distinction between accusative and dative forms, but that was gradually lost: the masculinehinewas replaced byhimsouth of the Thames by the early 14th century, and the neuter dativehimwas ousted byitin most dialects by the 15th.[17]
The following table shows some of the various Middle English pronouns. Many other variations are noted in Middle English sources because of differences in spellings and pronunciations at different times and in different dialects.[18]
|
https://en.wikipedia.org/wiki/English_pronouns
|
TheF16C[1](previously/informally known asCVT16) instruction set is anx86instruction set architectureextension which provides support for converting betweenhalf-precisionand standard IEEEsingle-precision floating-point formats.
The CVT16 instruction set, announced byAMDon May 1, 2009,[2]is an extension to the 128-bitSSEcore instructions in thex86andAMD64instruction sets.
CVT16 is a revision of part of theSSE5instruction set proposal announced on August 30, 2007, which is supplemented by theXOPandFMA4instruction sets. This revision makes the binary coding of the proposed new instructions more compatible withIntel'sAVXinstruction extensions, while the functionality of the instructions is unchanged.
In recent documents, the name F16C is formally used in bothIntelandAMDx86-64architecture specifications.
There are variants that convert four floating-point values in anXMM registeror 8 floating-point values in aYMM register.
The instructions VCVTPH2PS and VCVTPS2PH are abbreviations for "vector convert packed half to packed single" and vice versa.
The 8-bit immediate argument toVCVTPS2PHselects theroundingmode. Values 0–4 select nearest, down, up, truncate, and the mode set inMXCSR.RC.
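The effect of the conversion can be illustrated in software. The sketch below uses Python's struct module (format code 'e', IEEE half precision, available since Python 3.6) to round-trip four single-precision values through half precision; it only models the default round-to-nearest behaviour, not the SIMD instructions or the other rounding modes selected by the immediate.

```python
import struct

singles = (1.0, 0.1, 3.14159, 65504.0)        # four values, as one XMM register would hold
halves = struct.pack('<4e', *singles)         # narrow to four IEEE half-precision values (8 bytes)
round_tripped = struct.unpack('<4e', halves)  # widen back to full precision
print(round_tripped)  # 0.1 comes back as ~0.0999755859375 because halves have an 11-bit significand
```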
Support for these instructions is indicated by bit 29 of ECX afterCPUID with EAX=1.
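On Linux the kernel already decodes this CPUID bit and exposes it as the f16c flag in /proc/cpuinfo, so a rough runtime check can be done without issuing CPUID directly (a Linux-only sketch):

```python
def has_f16c() -> bool:
    # Linux-specific: CPU feature flags are listed in /proc/cpuinfo; F16C support
    # appears as the "f16c" flag, corresponding to CPUID.1:ECX bit 29.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "f16c" in line.split()
    except OSError:
        pass
    return False

print(has_f16c())
```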
|
https://en.wikipedia.org/wiki/F16C
|
Anordinary(fromLatinordinarius) is an officer of a church or civic authority who by reason of office hasordinary powerto execute laws.
Such officers are found in hierarchically organised churches ofWestern Christianitywhich have anecclesiastical legal system.[1]For example, diocesan bishops are ordinaries in theCatholic Church[1]and theChurch of England.[2]InEastern Christianity, a corresponding officer is called ahierarch[3](fromGreekἱεράρχηςhierarkhēs"president of sacred rites, high-priest"[4]which comes in turn from τὰ ἱεράta hiera, "the sacred rites" and ἄρχωarkhō, "I rule").[5]
Incanon law, the power to govern the church is divided into the power to make laws (legislative), enforce the laws (executive), and to judge based on the law (judicial).[6]An official exercises power to govern either because he holds an office to which the law grants governing power or because someone with governing power has delegated it to him. Ordinary power is the former, while the latter is delegated power.[7]The office with ordinary power could possess the governing power itself (proper ordinary power) or instead it could have the ordinary power of agency, the inherent power to exercise someone else's power (vicariousordinary power).[8]
The law vesting ordinary power could either be ecclesiastical law, i.e. the positive enactments that the church has established for itself, or divine law, i.e. the laws which were given to the Church by God.[9]As an example of divinely instituted ordinaries, whenJesusestablished the Church, he also established theepiscopateand theprimacy of Peter, endowing the offices with power to govern the Church.[10]Thus, in the Catholic Church, the office of successor of Simon Peter and the office of diocesan bishop possess their ordinary power even in the absence of positive enactments from the Church.
Many officers possess ordinary power but, due to their lack of ordinary executive power, are not called ordinaries. The best example of this phenomenon is the office ofjudicial vicar, a.k.a.officialis. The judicial vicar only has authority through his office to exercise the diocesan bishop's power to judge cases.[11]Though the vicar has vicarious ordinary judicial power, he is not an ordinary because he lacks ordinary executive power. Avicar general, however, has authority through his office to exercise the diocesan bishop's executive power.[12]He is therefore an ordinary because of this vicarious ordinary executive power.
Local ordinaries exercise ordinary power and are ordinaries inparticular churches.[13]The followingclericsare local ordinaries:
Also classified as local ordinaries, although they do not head a particular church or equivalent community are:
Major superiors ofreligious institutes(includingabbots) and ofsocieties of apostolic lifeare ordinaries of their respective memberships, but not local ordinaries.[20]
In theEastern Orthodox Church, a hierarch (ruling bishop) holds uncontested authority within the boundaries of his own diocese; no other bishop may perform anysacerdotalfunctions without the ruling bishop's express invitation. The violation of this rule is calledeispēdēsis(Greek: εἰσπήδησις, "trespassing", literally "jumping in"), and is uncanonical. Ultimately, all bishops in the Church are equal, regardless of any title they may enjoy (Patriarch,Metropolitan,Archbishop, etc.). The role of the bishop in the Orthodox Church is both hierarchical and sacramental.[21]
This pattern of governance dates back to the earliest centuries of Christianity, as witnessed by the writings ofIgnatius of Antioch(c.100 AD):
The bishop in each Church presides in the place of God.... Let no one do any of the things which concern the Church without the bishop.... Wherever the bishop appears, there let the people be, just as wherever Jesus Christ is, there is theCatholic Church.
And it is the bishop's primary and distinctive task to celebrate theEucharist, "the medicine of immortality."[21][22]SaintCyprian of Carthage(258 AD) wrote:
The episcopate is a single whole, in which each bishop enjoys full possession. So is the Church a single whole, though it spreads far and wide into a multitude of churches and its fertility increases.[23]
Bishop Kallistos (Ware)wrote:
There are many churches, but only One Church; manyepiscopibut only one episcopate."[24]
InEastern Orthodox Christianity, the church is not seen as a monolithic, centralized institution, but rather as existing in its fullness in each local body. The church is defined Eucharistically:
in each particular community gathered around its bishop; and at every local celebration of the Eucharist it is thewholeChrist who is present, not just a part of Him. Therefore, each local community, as it celebrates the Eucharist ... is the church in its fullness."[21]
An Eastern Orthodox bishop's authority comes from his election andconsecration. He is, however, subject to theSacred Canonsof the Eastern Orthodox Church, and answers to theSynod of Bishopsto which he belongs. In case an Orthodox bishop is overruled by his local synod, he retains the right ofappeal(Greek: Ἔκκλητον,Ékklēton) to his ecclesiastical superior (e.g. a Patriarch) and his synod.
|
https://en.wikipedia.org/wiki/Ordinary_(officer)
|
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed through the process of control engineering.
For continuously modulated control, afeedback controlleris used to automatically control a process or operation. The control system compares the value or status of theprocess variable(PV) being controlled with the desired value orsetpoint(SP), and applies the difference as a control signal to bring the process variable output of theplantto the same value as the setpoint.
Forsequentialandcombinational logic,software logic, such as in aprogrammable logic controller, is used.[clarification needed]
Fundamentally, there are two types of control loop:open-loop control(feedforward), andclosed-loop control(feedback).
The definition of a closed loop control system according to theBritish Standards Institutionis "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[2]
Aclosed-loop controlleror feedback controller is acontrol loopwhich incorporatesfeedback, in contrast to anopen-loop controllerornon-feedback controller.
A closed-loop controller uses feedback to controlstatesoroutputsof adynamical system. Its name comes from the information path in the system: process inputs (e.g.,voltageapplied to anelectric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured withsensorsand processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.[4]
In the case of linearfeedbacksystems, acontrol loopincludingsensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at asetpoint(SP). An everyday example is thecruise controlon a road vehicle; where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. ThePID algorithmin the controller restores the actual speed to the desired speed in an optimum way, with minimal delay orovershoot, by controlling the power output of the vehicle's engine.
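A discrete-time version of such a controller takes only a few lines. The gains, toy plant model, and setpoint below are illustrative assumptions, not a tuned cruise-control design:

```python
class PID:
    """Minimal discrete PID controller sketch (illustrative, not a tuned design)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, pv, dt):
        error = self.setpoint - pv            # SP - PV: the deviation to drive toward zero
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: regulate a crude first-order "vehicle speed" model toward 25 m/s.
controller = PID(kp=0.8, ki=0.2, kd=0.05, setpoint=25.0)
speed, dt = 0.0, 0.1
for _ in range(200):
    throttle = controller.update(speed, dt)
    speed += dt * (throttle - 0.1 * speed)    # toy plant: throttle accelerates, drag slows
print(round(speed, 2))                        # should settle near the 25 m/s setpoint
```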
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent.Open-loop control systemsdo not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termedfeedforwardand serves to further improve reference tracking performance.
A common closed-loop controller architecture is thePID controller.
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electricalrelaysandcam timersusingladder logic. Today, most such systems are constructed withmicrocontrollersor more specializedprogrammable logic controllers(PLCs). The notation of ladder logic is still in use as a programming method for PLCs.[6]
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use ofactuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) orstatement lists.[7]
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domesticthermostatcan be described as an on-off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor. When the pressure (PV) drops below the setpoint (SP) the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
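A thermostat-style on–off controller can be sketched as a single decision per time step. The hysteresis band below is an added assumption (a common refinement to avoid rapid switching), not part of the description above:

```python
def thermostat_step(temperature, setpoint, heater_on, hysteresis=0.5):
    """On-off (bang-bang) control; the hysteresis band is an illustrative assumption."""
    if temperature < setpoint - hysteresis:
        return True      # PV below SP: switch the heater on
    if temperature > setpoint + hysteresis:
        return False     # PV above SP: switch the heater off
    return heater_on     # inside the band: keep the previous state

# Example: a cold room and a 20 degree setpoint; the heater simply cycles on and off.
temp, on = 17.0, False
for _ in range(10):
    on = thermostat_step(temp, 20.0, on)
    temp += 0.4 if on else -0.2
print(round(temp, 1), on)
```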
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are fuzzified, the logic is evaluated arithmetically rather than with Boolean logic, and the outputs are de-fuzzified to control equipment.
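The furnace rules quoted above can be turned into a toy fuzzifier and de-fuzzifier. The membership ramps, the 200-degree setpoint, and the ±10 fuel adjustment are invented for the illustration:

```python
def too_high(t, setpoint=200.0, width=50.0):
    # Degree of membership in "temperature is too high": 0 at the setpoint, 1 at setpoint+width.
    return min(max((t - setpoint) / width, 0.0), 1.0)

def too_low(t, setpoint=200.0, width=50.0):
    # Degree of membership in "temperature is too low".
    return min(max((setpoint - t) / width, 0.0), 1.0)

def fuel_change(temperature):
    # Rule 1: IF temperature is too high THEN reduce the fuel.
    # Rule 2: IF temperature is too low THEN increase the fuel.
    # Each rule fires to the degree its condition is true; a crude weighted-sum
    # de-fuzzification combines them into one fuel adjustment.
    return too_low(temperature) * (+10.0) + too_high(temperature) * (-10.0)

print(fuel_change(230.0))   # 0.6 "too high", so fuel is reduced by 6 units
```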
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.[citation needed]
Fuzzy electronicsis an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used indigital electronics.
The range of control system implementation is fromcompact controllersoften with dedicated software for a particular machine or device, todistributed control systemsfor industrial process control for a largephysical plant.
Logic systems and feedback controllers are usually implemented withprogrammable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides manyopen-source hardwaredevices which can be connected to create more complexdata acquisitionand control systems.[8]
|
https://en.wikipedia.org/wiki/Control_system
|
Gödel, Escher, Bach: an Eternal Golden Braid(abbreviated asGEB) is a 1979 nonfiction book by American cognitive scientistDouglas Hofstadter.
By exploring common themes in the lives and works of logicianKurt Gödel, artistM. C. Escher, and composerJohann Sebastian Bach, the book expounds concepts fundamental tomathematics,symmetry, andintelligence. Through short stories, illustrations, and analysis, the book discusses how systems can acquire meaningful context despite being made of "meaningless" elements. It also discussesself-referenceand formal rules,isomorphism, what it means to communicate, how knowledge can be represented and stored, the methods and limitations of symbolic representation, and even the fundamental notion of "meaning" itself.
In response to confusion over the book's theme, Hofstadter emphasized thatGödel, Escher, Bachis not about the relationships ofmathematics, art, andmusic, but rather about howcognitionemergesfrom hidden neurological mechanisms. One point in the book presents an analogy about how individualneuronsin thebraincoordinate to create a unified sense of a coherent mind by comparing it to the social organization displayed in acolony of ants.[1][2]
Gödel, Escher, Bachwon thePulitzer Prize for General Nonfiction[3]and theNational Book Awardfor Science Hardcover.[4][a]
Gödel, Escher, Bachtakes the form of interweaving narratives. The main chapters alternate with dialogues between imaginary characters, usuallyAchilles and the tortoise, first used byZeno of Eleaand later byLewis Carrollin "What the Tortoise Said to Achilles". These origins are related in the first two dialogues, and later ones introduce new characters such as the Crab. These narratives frequently dip intoself-referenceandmetafiction.
Word playalso features prominently in the work. Puns are occasionally used to connect ideas, such as the "Magnificrab, Indeed" with Bach'sMagnificat in D; "SHRDLU, Toy of Man's Designing" with Bach's "Jesu, Joy of Man's Desiring"; and "Typographical Number Theory", or "TNT", which inevitably reacts explosively when it attempts to make statements about itself. One dialogue contains a story about a genie (from the Arabic "Djinn") and various "tonics" (of both theliquidandmusicalvarieties), which is titled "Djinn and Tonic". Sometimes word play has no significant connection, such as the dialogue "AMuOffering", which has no close affinity to Bach'sThe Musical Offering.
One dialogue in the book is written in the form of acrab canon, in which every line before the midpoint corresponds to an identical line past the midpoint. The conversation still makes sense due to uses of common phrases that can be used as either greetings or farewells ("Good day") and the positioning of lines that double as an answer to a question in the next line. Another is a sloth canon, where one character repeats the lines of another, but slower and negated.
The book contains many instances ofrecursionandself-reference, where objects and ideas speak about or refer back to themselves. One isQuining, a term Hofstadter invented in homage toWillard Van Orman Quine, referring to programs that produce their ownsource code. Another is the presence of a fictional author in the index,Egbert B. Gebstadter, a man with initials E, G, and B and a surname that partially matches Hofstadter. A phonograph dubbed "Record Player X" destroys itself by playing a record titledI Cannot Be Played on Record Player X(an analogy toGödel's incompleteness theorems), an examination ofcanonform inmusic, and a discussion of Escher'slithograph of two hands drawing each other.
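A quine makes the "Quining" idea concrete: a program whose output is exactly its own source code. The two-line Python example below is one standard construction; comments are left out because any extra text would no longer be reproduced in the output.

```python
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```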
To describe such self-referencing objects, Hofstadter coins the term "strange loop", a concept he examines in more depth in his follow-up bookI Am a Strange Loop. To escape many of the logical contradictions brought about by these self-referencing objects, Hofstadter discussesZenkoans. He attempts to show readers how to perceive reality outside their own experience and embrace such paradoxical questions by rejecting the premise, a strategy also called "unasking".
Elements ofcomputer sciencesuch ascall stacksare also discussed inGödel, Escher, Bach, as one dialogue describes the adventures of Achilles and the Tortoise as they make use of "pushing potion" and "popping tonic" involving entering and leaving different layers of reality. The same dialogue has a genie with a lamp containing another genie with another lamp and so on. Subsequent sections discuss the basic tenets of logic, self-referring statements, ("typeless") systems, and even programming. Hofstadter further createsBlooP and FlooP, two simpleprogramming languages, to illustrate his point.
The book is filled with puzzles, including Hofstadter'sMU puzzle, which contrasts reasoning within a defined logical system with reasoning about that system. Another example can be found in the chapter titledContracrostipunctus, which combines the wordsacrosticandcontrapunctus(counterpoint). In this dialogue between Achilles and the Tortoise, the author hints that there is a contrapunctal acrostic in the chapter that refers both to the author (Hofstadter) and Bach. This can be spelled out by taking the first word of each paragraph, to reveal "Hofstadter's Contracrostipunctus Acrostically Backwards Spells J. S. Bach". The second acrostic is found by taking the first letters of the words of the first, and reading them backwards to get "J S Bach", as the acrostic sentence self-referentially states.
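The MU puzzle itself is small enough to explore mechanically. The sketch below implements the four MIU rules and runs a bounded breadth-first search from the axiom MI; the length bound is an arbitrary assumption to keep the search finite, and the decisive argument that MU is unreachable (the count of I's is never a multiple of three) is reasoning about the system, which the search can only illustrate.

```python
from collections import deque

def miu_successors(s):
    """Apply the four rules of Hofstadter's MIU system to the string s."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # Rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # Rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # Rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # Rule 4: UU -> (nothing)
    return out

# Bounded breadth-first search from the axiom "MI"; "MU" never appears.
seen, frontier = {"MI"}, deque(["MI"])
while frontier:
    s = frontier.popleft()
    if len(s) > 8:          # arbitrary cap to keep the search finite
        continue
    for t in miu_successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
print("MU" in seen)   # False
```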
Martin Gardner's July 1979 column inScientific Americanstated, "Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event."[5]
For Summer 2007, theMassachusetts Institute of Technologycreated an online course for high school students built around the book.[6]
In its February 19, 2010, investigative summary on the2001 anthrax attacks, theFederal Bureau of Investigationsuggested thatBruce Edwards Ivinswas inspired by the book to hide secret codes based uponnucleotide sequencesin theanthrax-laced letters he allegedly sent in September and October 2001,[7]using bold letters, as suggested on page 404 of the book.[8][9]It was also suggested that he attempted to hide the book from investigators by throwing it in the trash.[10]
In 2019, British mathematicianMarcus du Sautoycurated a series of events at London'sBarbican Centreto celebrate the book's fortieth anniversary.[11]
Hofstadter has expressed some frustration with howGödel, Escher, Bachwas received. He felt that readers did not fully grasp thatstrange loopswere supposed to be the central theme of the book, and attributed this confusion to the length of the book and the breadth of the topics covered.[12][13]
To remedy this issue, Hofstadter publishedI Am a Strange Loopin 2007, which had a more focused discussion of the idea.[13]
Hofstadter claims the idea of translating his book "never crossed [his] mind" when he was writing it—but when his publisher brought it up, he was "very excited about seeing [the] book in other languages, especially… French." He knew, however, that "there were a million issues to consider" when translating,[14]since the book relies not only on word-play, but on "structural puns" as well—writing where the form and content of the work mirror each other (such as the "Crab canon" dialogue, which reads almost exactly the same forwards as backwards).
Hofstadter gives an example of translation trouble in the paragraph "Mr. Tortoise, Meet Madame Tortue", saying translators "instantly ran headlong into the conflict between the feminine gender of the French nountortueand the masculinity of my character, the Tortoise."[14]Hofstadter agreed to the translators' suggestions of naming the French characterMadame Tortue, and the Italian versionSignorina Tartaruga.[15]Because of other troubles translators might have retaining meaning, Hofstadter "painstakingly went through every sentence ofGödel, Escher, Bach, annotating a copy for translators into any language that might be targeted."[14]
Translation also gave Hofstadter a way to add new meaning and puns. For instance, inChinese, the subtitle is not a translation ofan Eternal Golden Braid, but a seemingly unrelated phraseJí Yì Bì(集异璧, literally "collection of exotic jades"), which ishomophonictoGEBin Chinese. Some material regarding this interplay is in Hofstadter's later book,Le Ton beau de Marot, which is mainly about translation.
|
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
|
Grundy's game is a two-player mathematical game of strategy. The starting configuration is a single heap of objects, and the two players take turns splitting a single heap into two heaps of different sizes. The game ends when only heaps of size two and smaller remain, none of which can be split unequally. The game is usually played as a normal play game, which means that the last person who can make an allowed move wins.
A normal play game starting with a single heap of 8 is a win for the first player provided they start by splitting the heap into heaps of 7 and 1:
Player 2 now has three choices: splitting the 7-heap into 6 + 1, 5 + 2, or 4 + 3. In each of these cases, player 1 can ensure that on the next move he hands back to his opponent a heap of size 4 plus heaps of size 2 and smaller:
Now player 2 has to split the 4-heap into 3 + 1, and player 1 subsequently splits the 3-heap into 2 + 1:
The game can be analysed using theSprague–Grundy theorem. This requires the heap sizes in the game to be mapped onto equivalentnim heap sizes. This mapping is captured in theOn-Line Encyclopedia of Integer SequencesasOEIS:A002188:
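These nim-values can be computed directly from the splitting rule with a short memoized search; the sketch below is illustrative, and its output should reproduce the start of OEIS A002188:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n: int) -> int:
    """Sprague-Grundy (nim) value of a single heap of size n in Grundy's game."""
    reachable = set()
    for a in range(1, (n + 1) // 2):          # all splits into unequal, nonempty heaps a + b
        b = n - a
        reachable.add(grundy(a) ^ grundy(b))  # value of a position is the XOR of its heaps
    g = 0
    while g in reachable:                     # mex: minimum excluded non-negative integer
        g += 1
    return g

print([grundy(n) for n in range(1, 21)])      # nim-values for heaps 1..20; compare OEIS A002188
print(grundy(8) != 0)                         # True: a heap of 8 is a first-player win, as above
```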
Using this mapping, the strategy for playing the game Nim can also be used for Grundy's game. Whether the sequence of nim-values of Grundy's game ever becomes periodic is an unsolved problem. Elwyn Berlekamp, John Horton Conway and Richard Guy have conjectured[1] that the sequence does become periodic eventually, but despite the calculation of the first 2^35 values by Achim Flammenkamp, the question has not been resolved.
|
https://en.wikipedia.org/wiki/Grundy%27s_game
|
Adecision matrixis a list of values in rows and columns that allows an analyst to systematically identify, analyze, and rate the performance of relationships between sets of values and information. Elements of a decision matrix show decisions based on certain decision criteria. The matrix is useful for looking at large masses of decision factors and assessing each factor's relative significance by weighting them by importance.[1]
The termdecision matrixis used to describe amultiple-criteria decision analysis(MCDA) problem. An MCDA problem, where there areMalternative options and each needs to be assessed onNcriteria, can be described by the decision matrix which hasNrows andMcolumns, orM×Nelements, as shown in the following table. Each element, such asXij, is either a single numerical value or a single grade, representing the performance of alternativeion criterionj. For example, if alternativeiis "cari", criterionjis "engine quality" assessed by five grades {Exceptional, Good, Average, Below Average, Poor}, and "Cari" is assessed to be "Good" on "engine quality", thenXij= "Good". These assessments may be replaced by scores, from 1 to 5. Sums of scores may then be compared and ranked, to show the winning proposal.[2]
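A weighted variant of this grade-to-score procedure is easy to sketch. The criteria, weights, and scores below are invented for the example:

```python
# Alternatives scored 1-5 on each criterion, as in the grade-to-score replacement
# described above; the weights are illustrative assumptions.
criteria = {"engine quality": 0.5, "price": 0.3, "comfort": 0.2}
alternatives = {
    "car A": {"engine quality": 4, "price": 3, "comfort": 5},
    "car B": {"engine quality": 5, "price": 2, "comfort": 3},
}

def weighted_score(scores):
    """Weighted sum of one alternative's scores over all criteria."""
    return sum(criteria[c] * scores[c] for c in criteria)

ranking = sorted(alternatives, key=lambda a: weighted_score(alternatives[a]), reverse=True)
for name in ranking:
    print(name, round(weighted_score(alternatives[name]), 2))
```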
Similar to a decision matrix, abelief decision matrixis used to describe a multiple criteria decision analysis (MCDA) problem in theEvidential Reasoning Approach. Instead of being a single numerical value or a single grade as in a decision matrix, each element in a belief decision matrix is abelief distribution.
For example, suppose Alternative i is "Car i", Criterion j is "Engine Quality" assessed by five grades {Excellent, Good, Average, Below Average, Poor}, and "Car i" is assessed to be “Excellent” on "Engine Quality" with a high degree of belief (e.g. 0.6) due to its low fuel consumption, low vibration and high responsiveness. At the same time, the quality is also assessed to be only “Good” with a lower degree of belief (e.g. 0.4 or less) because its quietness and starting can still be improved. If this is the case, then we have Xij={ (Excellent, 0.6), (Good, 0.4)}, or Xij={ (Excellent, 0.6), (Good, 0.4), (Average, 0), (Below Average, 0), (Poor, 0)}.
A conventional decision matrix is a special case of belief decision matrix when only one belief degree in a belief structure is 1 and the others are 0.
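One simple way to see the relationship is to map grades to assumed utilities and take the expectation over each belief distribution. This is only an illustration, not the full Evidential Reasoning algorithm:

```python
# Grade utilities are an assumption made for the illustration.
grade_utility = {"Excellent": 5, "Good": 4, "Average": 3, "Below Average": 2, "Poor": 1}

x_ij = {"Excellent": 0.6, "Good": 0.4}   # the "Car i" / "Engine Quality" element above
conventional = {"Good": 1.0}             # a conventional matrix entry as the special case

def expected_utility(belief):
    """Expected utility of a belief distribution over grades."""
    return sum(grade_utility[g] * degree for g, degree in belief.items())

print(expected_utility(x_ij))            # 4.6
print(expected_utility(conventional))    # 4.0
```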
|
https://en.wikipedia.org/wiki/Decision_matrix
|
Incryptography,SHA-1(Secure Hash Algorithm 1) is ahash functionwhich takes an input and produces a 160-bit(20-byte) hash value known as amessage digest– typically rendered as 40hexadecimaldigits. It was designed by the United StatesNational Security Agency, and is a U.S.Federal Information Processing Standard.[3]The algorithm has been cryptographically broken[4][5][6][7][8][9][10]but is still widely used.
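In most languages the digest is available from a standard library; for example, Python's hashlib renders the 160-bit value as the 40 hexadecimal digits mentioned above:

```python
import hashlib

# Hash an arbitrary byte string and show the 160-bit digest as 40 hex characters.
digest = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").hexdigest()
print(digest, len(digest))   # the length is always 40
```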
Since 2005, SHA-1 has not been considered secure against well-funded opponents;[11]as of 2010 many organizations have recommended its replacement.[12][10][13]NISTformally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013, and declared that it should be phased out by 2030.[14]As of 2020[update],chosen-prefix attacksagainst SHA-1 are practical.[6][8]As such, it is recommended to remove SHA-1 from products as soon as possible and instead useSHA-2orSHA-3. Replacing SHA-1 is urgent where it is used fordigital signatures.
All majorweb browservendors ceased acceptance of SHA-1SSL certificatesin 2017.[15][9][4]In February 2017,CWI AmsterdamandGoogleannounced they had performed acollision attackagainst SHA-1, publishing two dissimilar PDF files which produced the same SHA-1 hash.[16][2]However, SHA-1 is still secure forHMAC.[17]
Microsofthas discontinued SHA-1 code signing support forWindows Updateon August 3, 2020,[18]which also effectively ended the update servers for versions ofWindowsthat have not been updated to SHA-2, such asWindows 2000up toVista, as well asWindows Serverversions fromWindows 2000 ServertoServer 2003.
SHA-1 produces amessage digestbased on principles similar to those used byRonald L. RivestofMITin the design of theMD2,MD4andMD5message digest algorithms, but generates a larger hash value (160 bits vs. 128 bits).
SHA-1 was developed as part of the U.S. Government'sCapstone project.[19]The original specification of the algorithm was published in 1993 under the titleSecure Hash Standard,FIPSPUB 180, by U.S. government standards agencyNIST(National Institute of Standards and Technology).[20][21]This version is now often namedSHA-0. It was withdrawn by theNSAshortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designatedSHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of itscompression function. According to the NSA, this was done to correct a flaw in the original algorithm which reduced its cryptographic security, but they did not provide any further explanation.[22][23]Publicly available techniques did indeed demonstrate a compromise of SHA-0, in 2004, before SHA-1 in 2017 (see§Attacks).
SHA-1 forms part of several widely used security applications and protocols, includingTLSandSSL,PGP,SSH,S/MIME, andIPsec. Those applications can also useMD5; both MD5 and SHA-1 are descended fromMD4.
SHA-1 and SHA-2 are the hash algorithms required by law for use in certainU.S. governmentapplications, including use within other cryptographic algorithms and protocols, for the protection of sensitive unclassified information. FIPS PUB 180-1 also encouraged adoption and use of SHA-1 by private and commercial organizations. SHA-1 is being retired from most government uses; the U.S. National Institute of Standards and Technology said, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use theSHA-2family of hash functions for these applications after 2010",[24]though that was later relaxed to allow SHA-1 to be used for verifying old digital signatures and time stamps.[24]
A prime motivation for the publication of theSecure Hash Algorithmwas theDigital Signature Standard, in which it is incorporated.
The SHA hash functions have been used for the basis of theSHACALblock ciphers.
Revision control systems such as Git, Mercurial, and Monotone use SHA-1, not for security, but to identify revisions and to ensure that the data has not changed due to accidental corruption, a point Linus Torvalds made about Git in 2007.
However Git does not require thesecond preimage resistanceof SHA-1 as a security feature, since it will always prefer to keep the earliest version of an object in case of collision, preventing an attacker from surreptitiously overwriting files.[26]The known attacks (as of 2020) also do not break second preimage resistance.[27]
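Git's use of SHA-1 for content addressing is easy to reproduce: the object id is the hash of a short type/size header followed by the content. The sketch below follows Git's documented blob format; it is an illustration, not Git's own code.

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the file contents to form the object id.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_id(b"hello\n"))   # matches `git hash-object` for the same bytes
```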
For a hash function for which L is the number of bits in the message digest, finding a message that corresponds to a given message digest can always be done using a brute force search in approximately 2^L evaluations. This is called a preimage attack and may or may not be practical depending on L and the particular computing environment. However, a collision, consisting of finding two different messages that produce the same message digest, requires on average only about 1.2 × 2^(L/2) evaluations using a birthday attack. Thus the strength of a hash function is usually compared to a symmetric cipher of half the message digest length. SHA-1, which has a 160-bit message digest, was originally thought to have 80-bit strength.
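For SHA-1's L = 160 these generic bounds work out as follows (a back-of-the-envelope computation of the figures quoted above):

```python
# Generic work factors for a 160-bit digest: brute-force preimage ~ 2^L,
# generic birthday collision ~ 1.2 * 2^(L/2).
L = 160
preimage_work = 2.0 ** L
collision_work = 1.2 * 2.0 ** (L / 2)
print(f"preimage:  ~2^{L}    = {preimage_work:.3e}")
print(f"collision: ~1.2*2^{L // 2} = {collision_work:.3e}")
```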
Some of the applications that use cryptographic hashes, like password storage, are only minimally affected by a collision attack. Constructing a password that works for a given account requires apreimage attack, as well as access to the hash of the original password, which may or may not be trivial. Reversing password encryption (e.g. to obtain a password to try against a user's account elsewhere) is not made possible by the attacks. However, even a secure password hash can't prevent brute-force attacks onweak passwords.SeePassword cracking.
In the case of document signing, an attacker could not simply fake a signature from an existing document: The attacker would have to produce a pair of documents, one innocuous and one damaging, and get the private key holder to sign the innocuous document. There are practical circumstances in which this is possible; until the end of 2008, it was possible to create forgedSSLcertificates using anMD5collision.[28]
Due to the block and iterative structure of the algorithms and the absence of additional final steps, all SHA functions (except SHA-3)[29]are vulnerable tolength-extensionand partial-message collision attacks.[30]These attacks allow an attacker to forge a message signed only by a keyed hash –SHA(key||message), but notSHA(message||key)– by extending the message and recalculating the hash without knowing the key. A simple improvement to prevent these attacks is to hash twice:SHAd(message) = SHA(SHA(0b||message))(the length of 0b, zero block, is equal to the block size of the hash function).
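A sketch of that double-hash construction, alongside the standard keyed alternative HMAC-SHA1 mentioned earlier, might look like this (SHA-1's block size is 64 bytes):

```python
import hashlib
import hmac

BLOCK_SIZE = 64  # SHA-1 block size in bytes

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def sha1d(message: bytes) -> bytes:
    # SHAd: prepend one block of zero bytes, hash, then hash the result again;
    # the outer hash hides the inner chaining value, defeating length extension.
    return sha1(sha1(b"\x00" * BLOCK_SIZE + message))

# The vetted construction for keyed hashing is HMAC rather than SHA(key||message).
tag = hmac.new(b"secret key", b"message", hashlib.sha1).hexdigest()
print(sha1d(b"message").hex(), tag)
```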
At CRYPTO 98, two French researchers, Florent Chabaud and Antoine Joux, presented an attack on SHA-0: collisions can be found with complexity 2^61, fewer than the 2^80 for an ideal hash function of the same size.[31]
In 2004,Bihamand Chen found near-collisions for SHA-0 – two messages that hash to nearly the same value; in this case, 142 out of the 160 bits are equal. They also found full collisions of SHA-0 reduced to 62 out of its 80 rounds.[32]
Subsequently, on 12 August 2004, a collision for the full SHA-0 algorithm was announced by Joux, Carribault, Lemuet, and Jalby. This was done by using a generalization of the Chabaud and Joux attack. Finding the collision had complexity 2^51 and took about 80,000 processor-hours on a supercomputer with 256 Itanium 2 processors (equivalent to 13 days of full-time use of the computer).
On 17 August 2004, at the Rump Session of CRYPTO 2004, preliminary results were announced by Wang, Feng, Lai, and Yu, about an attack on MD5, SHA-0 and other hash functions. The complexity of their attack on SHA-0 is 2^40, significantly better than the attack by Joux et al.[33][34]
In February 2005, an attack by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced which could find collisions in SHA-0 in 2^39 operations.[5][35]
Another attack in 2008 applying the boomerang attack brought the complexity of finding collisions down to 2^33.6, which was estimated to take 1 hour on an average PC from the year 2008.[36]
In light of the results for SHA-0, some experts[who?]suggested that plans for the use of SHA-1 in newcryptosystemsshould be reconsidered. After the CRYPTO 2004 results were published, NIST announced that they planned to phase out the use of SHA-1 by 2010 in favor of the SHA-2 variants.[37]
In early 2005, Vincent Rijmen and Elisabeth Oswald published an attack on a reduced version of SHA-1 – 53 out of 80 rounds – which finds collisions with a computational effort of fewer than 2^80 operations.[38]
In February 2005, an attack by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced.[5] The attacks can find collisions in the full version of SHA-1, requiring fewer than 2^69 operations. (A brute-force search would require 2^80 operations.)
The authors write: "In particular, our analysis is built upon the original differential attack on SHA-0, the near collision attack on SHA-0, the multiblock collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA-1 would not be possible without these powerful analytical techniques."[39] The authors have presented a collision for 58-round SHA-1, found with 2^33 hash operations. The paper with the full attack description was published in August 2005 at the CRYPTO conference.
In an interview, Yin states that, "Roughly, we exploit the following two weaknesses: One is that the file preprocessing step is not complicated enough; another is that certain math operations in the first 20 rounds have unexpected security problems."[40]
On 17 August 2005, an improvement on the SHA-1 attack was announced on behalf of Xiaoyun Wang, Andrew Yao and Frances Yao at the CRYPTO 2005 Rump Session, lowering the complexity required for finding a collision in SHA-1 to 2^63.[7] On 18 December 2007 the details of this result were explained and verified by Martin Cochran.[41]
Christophe De Cannière and Christian Rechberger further improved the attack on SHA-1 in "Finding SHA-1 Characteristics: General Results and Applications,"[42] receiving the Best Paper Award at ASIACRYPT 2006. A two-block collision for 64-round SHA-1 was presented, found using unoptimized methods with 2^35 compression function evaluations. Since this attack requires the equivalent of about 2^35 evaluations, it is considered to be a significant theoretical break.[43] Their attack was extended further to 73 rounds (of 80) in 2010 by Grechnikov.[44] In order to find an actual collision in the full 80 rounds of the hash function, however, tremendous amounts of computer time are required. To that end, a collision search for SHA-1 using the volunteer computing platform BOINC began August 8, 2007, organized by the Graz University of Technology. The effort was abandoned May 12, 2009 due to lack of progress.[45]
At the Rump Session of CRYPTO 2006, Christian Rechberger and Christophe De Cannière claimed to have discovered a collision attack on SHA-1 that would allow an attacker to select at least parts of the message.[46][47]
In 2008, an attack methodology by Stéphane Manuel reported hash collisions with an estimated theoretical complexity of 2^51 to 2^57 operations.[48] However, he later retracted that claim after finding that the local collision paths were not actually independent, and finally quoted, as the most efficient, a collision vector that was already known before this work.[49]
Cameron McDonald, Philip Hawkes and Josef Pieprzyk presented a hash collision attack with claimed complexity 2^52 at the Rump Session of Eurocrypt 2009.[50] However, the accompanying paper, "Differential Path for SHA-1 with complexity O(2^52)", has been withdrawn due to the authors' discovery that their estimate was incorrect.[51]
One attack against SHA-1 was developed by Marc Stevens,[52] with an estimated cost of $2.77M (2012) to break a single hash value by renting CPU power from cloud servers.[53] Stevens developed this attack in a project called HashClash,[54] implementing a differential path attack. On 8 November 2010, he claimed he had a fully working near-collision attack against full SHA-1 with an estimated complexity equivalent to 2^57.5 SHA-1 compressions. He estimated this attack could be extended to a full collision with a complexity around 2^61.
On 8 October 2015, Marc Stevens, Pierre Karpman, and Thomas Peyrin published a freestart collision attack on SHA-1's compression function that requires only 2^57 SHA-1 evaluations. This does not directly translate into a collision on the full SHA-1 hash function (where an attacker is not able to freely choose the initial internal state), but undermines the security claims for SHA-1. In particular, it was the first time that an attack on full SHA-1 had been demonstrated; all earlier attacks were too expensive for their authors to carry them out. The authors named this significant breakthrough in the cryptanalysis of SHA-1 The SHAppening.[10]
The method was based on their earlier work, as well as the auxiliary paths (or boomerangs) speed-up technique from Joux and Peyrin, and used high-performance, cost-efficient GPU cards from Nvidia. The collision was found on a 16-node cluster with a total of 64 graphics cards. The authors estimated that a similar collision could be found by buying US$2,000 of GPU time on EC2.[10]
The authors estimated that the cost of renting enough EC2 CPU/GPU time to generate a full collision for SHA-1 at the time of publication was between US$75K and $120K, and noted that this was well within the budget of criminal organizations, not to mention national intelligence agencies. As such, the authors recommended that SHA-1 be deprecated as quickly as possible.[10]
On 23 February 2017, the CWI (Centrum Wiskunde & Informatica) and Google announced the SHAttered attack, in which they generated two different PDF files with the same SHA-1 hash in roughly 2^63.1 SHA-1 evaluations. This attack is about 100,000 times faster than brute forcing a SHA-1 collision with a birthday attack, which was estimated to take 2^80 SHA-1 evaluations. The attack required "the equivalent processing power of 6,500 years of single-CPU computations and 110 years of single-GPU computations".[2]
On 24 April 2019, a paper by Gaëtan Leurent and Thomas Peyrin presented at Eurocrypt 2019 described an enhancement to the previously best chosen-prefix attack in Merkle–Damgård–like digest functions based on Davies–Meyer block ciphers. With these improvements, this method is capable of finding chosen-prefix collisions in approximately 2^68 SHA-1 evaluations. This is approximately 1 billion times faster than the previous attack's 2^77.1 evaluations,[1] and, thanks to the possibility of choosing a prefix (for example malicious code or faked identities in signed certificates), it is now usable for many targeted attacks, whereas the earlier attack produced collisions that were almost random and thus impractical for most targeted attacks. It is fast enough to be practical for resourceful attackers, requiring approximately $100,000 of cloud processing. This method is also capable of finding chosen-prefix collisions in the MD5 function, but at a complexity of 2^46.3 it does not surpass the prior best available method at a theoretical level (2^39), though potentially at a practical level (≤2^49).[55] This attack has a memory requirement of 500+ GB.
On 5 January 2020, the authors published an improved attack called "shambles".[8] In this paper they demonstrate a chosen-prefix collision attack with a complexity of 2^63.4, which at the time of publication would cost US$45K per generated collision.
Implementations of all FIPS-approved security functions can be officially validated through the CMVP program, jointly run by the National Institute of Standards and Technology (NIST) and the Communications Security Establishment (CSE). For informal verification, a package to generate a high number of test vectors is made available for download on the NIST site; the resulting verification, however, does not replace the formal CMVP validation, which is required by law for certain applications.
As of December 2013[update], there are over 2000 validated implementations of SHA-1, with 14 of them capable of handling messages with a length in bits not a multiple of eight (see the SHS Validation List, archived 2011-08-23 at the Wayback Machine).
These are examples of SHA-1 message digests in hexadecimal and in Base64 binary-to-ASCII text encoding.
Even a small change in the message will, with overwhelming probability, result in many bits changing due to the avalanche effect. For example, changing "dog" to "cog" produces a hash with different values for 81 of the 160 bits. The hash of the zero-length string is da39a3ee5e6b4b0d3255bfef95601890afd80709.
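Both examples can be reproduced with Python's standard-library hashlib; the "quick brown fox" pangram is assumed here as the carrier message for the dog/cog change (any message differing by a single word illustrates the same effect):

```python
# Reproducing the avalanche-effect example and the empty-string digest with
# Python's standard-library hashlib (no external dependencies).
import hashlib

a = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").hexdigest()
b = hashlib.sha1(b"The quick brown fox jumps over the lazy cog").hexdigest()
print(a)
print(b)

# Count how many of the 160 digest bits differ between the two messages.
differing_bits = bin(int(a, 16) ^ int(b, 16)).count("1")
print(f"{differing_bits} of 160 bits differ")

# Digest of the zero-length string.
print(hashlib.sha1(b"").hexdigest())
```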
Pseudocodefor the SHA-1 algorithm follows:
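The following is a compact Python rendering of that pseudocode, an illustrative sketch mirroring the conventional variable names (w for the message schedule, hh for the final digest):

```python
import struct

def _rotl(x, n):
    # 32-bit left rotation
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> str:
    # Initial hash values h0..h4 from FIPS PUB 180-1.
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]

    # Pre-processing: append the bit '1', pad with zeros to 56 mod 64 bytes,
    # then append the original message length in bits as a 64-bit big-endian value.
    bit_length = len(message) * 8
    message += b"\x80"
    message += b"\x00" * ((56 - len(message) % 64) % 64)
    message += struct.pack(">Q", bit_length)

    # Process the message in successive 512-bit (64-byte) chunks.
    for offset in range(0, len(message), 64):
        w = list(struct.unpack(">16I", message[offset:offset + 64]))
        for i in range(16, 80):  # extend the 16 words of the block to 80
            w.append(_rotl(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))

        a, b, c, d, e = h
        for i in range(80):
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999      # "choice" function
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1               # parity
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC  # "majority"
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6               # parity
            a, b, c, d, e = ((_rotl(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF,
                             a, _rotl(b, 30), c, d)

        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]

    hh = "".join(f"{x:08x}" for x in h)  # the 160-bit message digest in hexadecimal
    return hh

print(sha1(b""))  # should match the zero-length-string digest above
```

This rendering is meant for study; production code should rely on a vetted library implementation (and, given the attacks described above, preferably on a stronger hash function).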
The number hh is the message digest, which can be written in hexadecimal (base 16).
The chosen constant values used in the algorithm were assumed to be nothing up my sleeve numbers.
Instead of the formulation from the original FIPS PUB 180-1 shown, the following equivalent expressions may be used to compute f in the main loop above:
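As an illustration (these particular forms are well known, but are not necessarily the exact list referred to above), the "choice" function of rounds 0–19 and the "majority" function of rounds 40–59 admit cheaper equivalent forms, checked here over every single-bit combination:

```python
# ch(b, c, d)  = (b & c) | (~b & d)          ==  d ^ (b & (c ^ d))
# maj(b, c, d) = (b & c) | (b & d) | (c & d) ==  (b & c) | (d & (b | c))
# Checking all single-bit inputs suffices because both functions act bitwise.
for b in (0, 1):
    for c in (0, 1):
        for d in (0, 1):
            assert ((b & c) | (~b & 1 & d)) == d ^ (b & (c ^ d))
            assert ((b & c) | (b & d) | (c & d)) == (b & c) | (d & (b | c))
print("equivalent formulations of f verified")
```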
It was also shown[57] that for the rounds 32–79 the computation of
w[i] = (w[i-3] xor w[i-8] xor w[i-14] xor w[i-16]) leftrotate 1
can be replaced with
w[i] = (w[i-6] xor w[i-16] xor w[i-28] xor w[i-32]) leftrotate 2.
This transformation keeps all operands 64-bit aligned and, by removing the dependency of w[i] on w[i-3], allows an efficient SIMD implementation with a vector length of 4, such as with x86 SSE instructions.
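A quick way to convince oneself of this substitution is to check it empirically on a randomly generated message schedule; the short sketch below does so (illustrative only):

```python
import random

def rotl(x, n):
    # 32-bit left rotation
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

random.seed(1)
w = [random.getrandbits(32) for _ in range(16)]  # a random 16-word block
for i in range(16, 80):                          # original recurrence
    w.append(rotl(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))

# The rewritten recurrence must reproduce the same words for rounds 32-79.
for i in range(32, 80):
    assert w[i] == rotl(w[i-6] ^ w[i-16] ^ w[i-28] ^ w[i-32], 2)
print("rounds 32-79: both formulations agree")
```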
In the table below, internal state means the "internal hash sum" after each compression of a data block.
Below is a list of cryptography libraries that support SHA-1:
Hardware acceleration is provided by the following processor extensions:
In the wake of SHAttered, Marc Stevens and Dan Shumow published "sha1collisiondetection" (SHA-1CD), a variant of SHA-1 that detects collision attacks and changes the hash output when one is detected. The false positive rate is 2^−90.[64] SHA-1CD has been used by GitHub since March 2017 and by Git since version 2.13.0 of May 2017.[65]
|
https://en.wikipedia.org/wiki/SHA-1
|
Thehierarchical hidden Markov model (HHMM)is astatistical modelderived from thehidden Markov model(HMM). In an HHMM, each state is considered to be a self-containedprobabilistic model. More precisely, each state of the HHMM is itself an HHMM.
HHMMs and HMMs are useful in many fields, includingpattern recognition.[1][2]
It is sometimes useful to use HMMs in specific structures in order to facilitate learning and generalization. For example, even though a fully connected HMM could always be used if enough training data is available, it is often useful to constrain the model by not allowing arbitrary state transitions. In the same way, it can be beneficial to embed the HMM in a larger structure which, theoretically, may not be able to solve any problems the basic HMM cannot, but which can solve some problems more efficiently in terms of the amount of training data required.
In the hierarchical hidden Markov model (HHMM), each state is considered to be a self-contained probabilistic model. More precisely, each state of the HHMM is itself an HHMM. This implies that the states of the HHMM emit sequences of observation symbols rather than single observation symbols as is the case for the standard HMM states.
When a state in an HHMM is activated, it will activate its own probabilistic model, i.e. it will activate one of the states of the underlying HHMM, which in turn may activate its underlying HHMM and so on. The process is repeated until a special state, called a production state, is activated. Only the production states emit observation symbols in the usual HMM sense. When the production state has emitted a symbol, control returns to the state that activated the production state.
The states that do not directly emit observation symbols are called internal states. The activation of a state in an HHMM under an internal state is called a vertical transition. After a vertical transition is completed, a horizontal transition occurs to a state within the same level. When a horizontal transition leads to a terminating state, control is returned to the state in the HHMM, higher up in the hierarchy, that produced the last vertical transition. Note that a vertical transition can result in more vertical transitions before reaching a sequence of production states and finally returning to the top level. Thus the production states visited give rise to a sequence of observation symbols that is "produced" by the state at the top level.
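As a concrete, purely illustrative sketch of this generative process, the following Python fragment defines a tiny two-level HHMM whose internal states each activate a sub-model of production states; all names and probabilities are invented for the example:

```python
import random

# Each internal state owns a tiny sub-model: a set of production-state emissions
# and the probability that a horizontal transition reaches the terminating state.
SUB_MODELS = {
    "greeting": (["hello", "hi"], 0.5),
    "farewell": (["bye", "goodbye"], 0.7),
}
# Horizontal transitions at the top level (None marks the end of the sequence).
ROOT_TRANSITIONS = {"greeting": "farewell", "farewell": None}

def run_submodel(name):
    """Vertical transition into `name`; emit symbols until the terminating state."""
    emissions, p_end = SUB_MODELS[name]
    produced = []
    while True:
        produced.append(random.choice(emissions))  # a production state emits a symbol
        if random.random() < p_end:                # horizontal transition to the end state
            return produced                        # control returns to the parent state

def sample():
    observations, state = [], "greeting"
    while state is not None:
        observations.extend(run_submodel(state))   # the internal state "produces" a sequence
        state = ROOT_TRANSITIONS[state]            # horizontal transition at the top level
    return observations

print(sample())   # e.g. ['hi', 'hello', 'bye']
```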
The methods for estimating the HHMM parameters and model structure are more complex than for HMM parameters, and the interested reader is referred to Fine et al. (1998).
The HMM and HHMM belong to the same class of classifiers. That is, they can be used to solve the same set of problems. In fact, the HHMM can be transformed into a standard HMM. However, the HHMM leverages its structure to solve a subset of the problems more efficiently.
Classical HHMMs require a pre-defined topology, meaning that the number and hierarchical structure of the submodels must be known in advance.[1] Samko et al. (2010) used information about states from feature space (i.e., from outside the Markov model itself) in order to define the topology for a new HHMM in an unsupervised way.[2] However, such external data containing relevant information for HHMM construction may not be available in all contexts, e.g. in language processing.
|
https://en.wikipedia.org/wiki/Hierarchical_hidden_Markov_model
|
In mathematics, an operation is a function that takes input values from a set and returns an output value in the same set. For example, an operation on real numbers will take in real numbers and return a real number. An operation maps zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation.
The most commonly studied operations arebinary operations(i.e., operations of arity 2), such asadditionandmultiplication, andunary operations(i.e., operations of arity 1), such asadditive inverseandmultiplicative inverse. An operation ofarityzero, ornullary operation, is aconstant.[1][2]Themixed productis an example of an operation of arity 3, also calledternary operation.
Generally, the arity is taken to be finite. However,infinitary operationsare sometimes considered,[1]in which case the "usual" operations of finite arity are calledfinitary operations.
Apartial operationis defined similarly to an operation, but with apartial functionin place of a function.
There are two common types of operations:unaryandbinary. Unary operations involve only one value, such asnegationandtrigonometric functions.[3]Binary operations, on the other hand, take two values, and includeaddition,subtraction,multiplication,division, andexponentiation.[4]
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted.[5] Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation.[6][7][8] Operations on functions include composition and convolution.[9][10]
Operations may not be defined for every possible value of their domain. For example, in the real numbers one cannot divide by zero[11] or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its codomain of definition, active codomain, image or range.[12] For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
Operations can involve dissimilar objects: a vector can be multiplied by ascalarto form another vector (an operation known asscalar multiplication),[13]and theinner productoperation on two vectors produces a quantity that is scalar.[14][15]An operation may or may not have certain properties, for example it may beassociative,commutative,anticommutative,idempotent, and so on.
The values combined are calledoperands,arguments, orinputs, and the value produced is called thevalue,result, oroutput. Operations can have fewer or more than two inputs (including the case of zero input and infinitely many inputs[1]).
An operator is similar to an operation in that it refers to the symbol or the process used to denote the operation; hence, the two terms reflect different points of view. For instance, one often speaks of "the operation of addition" or "the addition operation" when focusing on the operands and result, but one switches to "addition operator" (rarely "operator of addition") when focusing on the process, or from the more symbolic viewpoint, the function +: X × X → X (where X is a set such as the set of real numbers).
An n-ary operation ω on a set X is a function ω: X^n → X. The set X^n is called the domain of the operation, the output set is called the codomain of the operation, and the fixed non-negative integer n (the number of operands) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain X. An n-ary operation can also be viewed as an (n + 1)-ary relation that is total on its n input domains and unique on its output domain.
An n-ary partial operation ω from X^n to X is a partial function ω: X^n → X. An n-ary partial operation can also be viewed as an (n + 1)-ary relation that is unique on its output domain.
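A small, purely illustrative sketch of these definitions in Python (the names are ours, not from the text): a binary operation and a nullary operation on the reals, plus a partial operation whose domain of definition excludes division by zero.

```python
from typing import Callable, Optional

# A binary (arity-2) operation on the real numbers: a function R x R -> R.
add: Callable[[float, float], float] = lambda x, y: x + y

# A nullary (arity-0) operation is simply an element of the codomain.
zero: Callable[[], float] = lambda: 0.0

# A binary *partial* operation: division is undefined when the divisor is 0,
# modelled here by returning None outside the domain of definition.
def divide(x: float, y: float) -> Optional[float]:
    return None if y == 0 else x / y

print(add(2.0, 3.0))     # 5.0
print(zero())            # 0.0
print(divide(1.0, 0.0))  # None - outside the domain of definition
```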
The above describes what is usually called afinitary operation, referring to the finite number of operands (the valuen). There are obvious extensions where the arity is taken to be an infiniteordinalorcardinal,[1]or even an arbitrary set indexing the operands.
Often, the use of the term operation implies that the domain of the function includes a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain),[16] although this is by no means universal, as in the case of the dot product, where vectors are multiplied and result in a scalar. An n-ary operation ω: X^n → X is called an internal operation. An n-ary operation ω: X^i × S × X^(n−i−1) → X, where 0 ≤ i < n, is called an external operation by the scalar set or operator set S. In particular for a binary operation, ω: S × X → X is called a left-external operation by S, and ω: X × S → X is called a right-external operation by S. An example of an internal operation is vector addition, where two vectors are added and result in a vector. An example of an external operation is scalar multiplication, where a vector is multiplied by a scalar and results in a vector.
An n-ary multifunction or multioperation ω is a mapping from a Cartesian power of a set into the set of subsets of that set, formally ω: X^n → P(X), where P(X) is the power set of X.[17]
|
https://en.wikipedia.org/wiki/Operation_(mathematics)
|
Inprogramming language theory,semanticsis the rigorous mathematical study of the meaning ofprogramming languages.[1]Semantics assignscomputationalmeaning to validstringsin aprogramming language syntax. It is closely related to, and often crosses over with, thesemantics of mathematical proofs.
Semanticsdescribes the processes a computer follows whenexecutinga program in that specific language. This can be done by describing the relationship between the input and output of a program, or giving an explanation of how the program will be executed on a certainplatform, thereby creating amodel of computation.
In 1967,Robert W. Floydpublished the paperAssigning meanings to programs; his chief aim was "a rigorous standard for proofs about computer programs, includingproofs of correctness, equivalence, and termination".[2][3]Floyd further wrote:[2]
A semantic definition of a programming language, in our approach, is founded on asyntacticdefinition. It must specify which of the phrases in a syntactically correct program representcommands, and whatconditionsmust be imposed on an interpretation in the neighborhood of each command.
In 1969,Tony Hoarepublished a paper onHoare logicseeded by Floyd's ideas, now sometimes collectively calledaxiomatic semantics.[4][5]
In the 1970s, the termsoperational semanticsanddenotational semanticsemerged.[5]
The field of formal semantics encompasses all of the following:
It has close links with other areas ofcomputer sciencesuch asprogramming language design,type theory,compilersandinterpreters,program verificationandmodel checking.
There are many approaches to formal semantics; these belong to three major classes:
Apart from the choice between denotational, operational, or axiomatic approaches, most variations in formal semantic systems arise from the choice of supporting mathematical formalism.[citation needed]
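As a toy illustration of two of these styles (our own example, not drawn from the literature above), the sketch below gives a denotational-style valuation function and a small-step operational evaluator for a minimal arithmetic language:

```python
# Abstract syntax: an expression is either an int literal or a tuple ("add", e1, e2).

def denote(e):
    """Denotational style: map each phrase directly to the value it means."""
    if isinstance(e, int):
        return e
    _, e1, e2 = e
    return denote(e1) + denote(e2)

def step(e):
    """Operational style: rewrite the program one small step at a time."""
    if isinstance(e, int):
        return None                        # already a value, no step to take
    op, e1, e2 = e
    if isinstance(e1, int) and isinstance(e2, int):
        return e1 + e2                     # both operands finished: perform the addition
    if isinstance(e1, int):
        return (op, e1, step(e2))          # reduce the right operand
    return (op, step(e1), e2)              # reduce the left operand first

program = ("add", ("add", 1, 2), 4)
print(denote(program))                     # 7, computed by the valuation function
while not isinstance(program, int):
    program = step(program)                # repeated rewriting also reaches 7
print(program)
```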
Some variations of formal semantics include the following:
For a variety of reasons, one might wish to describe the relationships between different formal semantics. For example:
It is also possible to relate multiple semantics throughabstractionsvia the theory ofabstract interpretation.[citation needed]
|
https://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages
|
Incomputer science, asynthetic file systemor apseudo file systemis a hierarchical interface to non-file objects that appear as if they were regular files in the tree of a disk-based or long-term-storagefile system. These non-file objects may be accessed with the samesystem callsorutility programsas regular files anddirectories. The common term for both regular files and the non-file objects isnode.
The benefit of synthetic file systems is that well-known file system semantics can be reused for a universal and easily implementable approach tointerprocess communication. Clients can use such a file system to perform simple file operations on its nodes and do not have to implement complexmessage encoding and passingmethods and other aspects ofprotocol engineering. For most operations, common file utilities can be used, so evenscriptingis quite easy.
This is commonly known aseverything is a fileand is generally regarded to have originated fromUnix.
In the Unix-world, there is commonly a special filesystemmountedat/proc. This filesystem is implemented within thekerneland publishes information aboutprocesses. For each process, there is a directory (named by theprocess ID), containing detailed information about the process:status, open files,memory maps, mounts, etc.
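Because these nodes behave like ordinary files, querying them needs nothing beyond normal file I/O; the following Linux-specific sketch (illustrative only) reads a few fields of the current process's status file:

```python
from pathlib import Path

status = Path("/proc/self/status")   # the synthetic node for this process
if status.exists():                  # only present on systems with procfs
    for line in status.read_text().splitlines():
        if line.startswith(("Name:", "Pid:", "VmRSS:")):
            print(line)              # process name, PID, resident memory
```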
/proc first appeared in Unix 8th Edition,[1]and its functionality was greatly expanded inPlan 9 from Bell Labs.[2]
The /sys filesystem on Linux complements /proc by exposing detailed (non-process-related) information about in-kernel state to userspace. More traditional Unix systems locate this information in sysctl calls.
ObexFS is aFUSE-based filesystem that provides access toOBEXobjects via a filesystem. Applications can work on remote objects via the OBEX protocol as if they were simply (local) files.
On the Plan 9 from Bell Labs operating system family, the concept of a 9P synthetic filesystem is used as a generic IPC method. Contrary to most other operating systems, Plan 9's design is heavily distributed: while in other OS worlds there are many (and often large) libraries and frameworks for common things, Plan 9 encapsulates them into fileservers. The most important benefit is that applications can be much simpler and that services run network- and platform-agnostic: they can reside on virtually any host and platform in the network, and on virtually any kind of network, as long as the fileserver can be mounted by the application.
Plan 9 drives this concept expansively: most operating system services, e.g. hardware access and the networking stack, are presented as fileservers. This way it is trivial to use these resources remotely (e.g. one host directly accessing another host's block devices or network interfaces) without the need for additional protocols.
Other implementations of the 9P file system protocol also exist for many other systems and environments.[3]
Debugging embedded systems or even system-on-chip (SoC) devices is widely known to be difficult.[citation needed]Several protocols have been implemented to provide direct access to in-chip devices, but they tend to be proprietary, complex and hard to handle.
Based on 9P, Plan 9's network filesystem, studies suggest using synthetic filesystems as a universal access scheme to that information. The major benefit is that 9P is very simple, and thus quite easy to implement in hardware, and can easily be used over virtually any kind of network (from a serial link up to the internet).
The major argument for using synthetic filesystems might be the flexibility and easy access toservice-oriented architectures. Once a noticeable number of applications use this scheme, the overall overhead (code, resource consumption, maintenance work) can be reduced significantly. Many general arguments for SOAs also apply here.
Arguments against synthetic filesystems include the fact that filesystem semantics may not fit all application scenarios. For example, complexremote procedure callswith many parameters tend to be hard to map to filesystem schemes,[citation needed]and may require application redesign.
|
https://en.wikipedia.org/wiki/Synthetic_file_system
|
Pseudoreplication (sometimes unit of analysis error[1]) has many definitions. Pseudoreplication was originally defined in 1984 by Stuart H. Hurlbert[2] as the use of inferential statistics to test for treatment effects with data from experiments where either treatments are not replicated (though samples may be) or replicates are not statistically independent. Subsequently, Millar and Anderson[3] identified it as a special case of inadequate specification of random factors where both random and fixed factors are present. It is sometimes narrowly interpreted as an inflation of the number of samples or replicates which are not statistically independent.[4] This definition omits the confounding of unit and treatment effects in a misspecified F-ratio. In practice, incorrect F-ratios for statistical tests of fixed effects often arise from a default F-ratio that is formed over the error term rather than the mixed term.
Lazic[5]defined pseudoreplication as a problem of correlated samples (e.g. fromlongitudinal studies) where correlation is not taken into account when computing the confidence interval for the sample mean. For the effect of serial or temporal correlation also seeMarkov chain central limit theorem.
The problem of inadequate specification arises when treatments are assigned to units that are subsampled and the treatmentF-ratioin an analysis of variance (ANOVA) table is formed with respect to the residual mean square rather than with respect to the among unit mean square. The F-ratio relative to the within unit mean square is vulnerable to theconfoundingof treatment and unit effects, especially when experimental unit number is small (e.g. four tank units, two tanks treated, two not treated, several subsamples per tank). The problem is eliminated by forming the F-ratio relative to the correct mean square in the ANOVA table (tank by treatment MS in the example above), where this is possible. The problem is addressed by the use of mixed models.[3]
Hurlbert reported "pseudoreplication" in 48% of the studies he examined, that used inferential statistics.[2]Several studies examining scientific papers published up to 2016 similarly found about half of the papers were suspected of pseudoreplication.[4]When time and resources limit the number ofexperimental units, and unit effects cannot be eliminated statistically by testing over the unit variance, it is important to use other sources of information to evaluate the degree to which an F-ratio is confounded by unit effects.
Replicationincreases the precision of an estimate, while randomization addresses the broader applicability of a sample to a population. Replication must be appropriate: replication at the experimental unit level must be considered, in addition to replication within units.
Statistical tests(e.g.t-testand the related ANOVA family of tests) rely on appropriate replication to estimatestatistical significance. Tests based on the t and F distributions assume homogeneous, normal, and independent errors. Correlated errors can lead to false precision and p-values that are too small.[6]
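The effect is easy to demonstrate by simulation; the sketch below (an invented example using NumPy and SciPy, not taken from the sources cited here) pools subsamples from a small number of tanks as if they were independent replicates and shows the resulting inflation of the nominal 5% false-positive rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, tanks_per_group, subsamples = 2000, 2, 20
false_positives = 0
for _ in range(n_sim):
    # Tank-level random effects dominate; there is no true treatment effect.
    tank_means = rng.normal(0.0, 1.0, size=2 * tanks_per_group)
    data = tank_means[:, None] + rng.normal(0.0, 0.3, size=(2 * tanks_per_group, subsamples))
    group_a = data[:tanks_per_group].ravel()   # subsamples pooled as if independent
    group_b = data[tanks_per_group:].ravel()
    if stats.ttest_ind(group_a, group_b).pvalue < 0.05:
        false_positives += 1
print(f"false-positive rate with pseudoreplication: {false_positives / n_sim:.2f} (nominal 0.05)")
```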
Hurlbert (1984) defined four types of pseudoreplication.
|
https://en.wikipedia.org/wiki/Pseudoreplication
|
In cryptography, a hybrid cryptosystem is one which combines the convenience of a public-key cryptosystem with the efficiency of a symmetric-key cryptosystem.[1] Public-key cryptosystems are convenient in that they do not require the sender and receiver to share a common secret in order to communicate securely.[2] However, they often rely on complicated mathematical computations and are thus generally much more inefficient than comparable symmetric-key cryptosystems. In many applications, the high cost of encrypting long messages in a public-key cryptosystem can be prohibitive. Hybrid systems address this by using a combination of both.[3]
A hybrid cryptosystem can be constructed using any two separate cryptosystems: a key encapsulation scheme, which is a public-key cryptosystem, and a data encapsulation scheme, which is a symmetric-key cryptosystem.
The hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as in the key encapsulation scheme.[4]
Note that for very long messages the bulk of the work in encryption/decryption is done by the more efficient symmetric-key scheme, while the inefficient public-key scheme is used only to encrypt/decrypt a short key value.[3]
All practical implementations of public key cryptography today employ the use of a hybrid system. Examples include the TLS protocol[5] and the SSH protocol,[6] which use a public-key mechanism for key exchange (such as Diffie-Hellman) and a symmetric-key mechanism for data encapsulation (such as AES). The OpenPGP[7] file format and the PKCS#7[8] file format are other examples.
Hybrid Public Key Encryption (HPKE, published as RFC 9180) is a modern standard for generic hybrid encryption. HPKE is used within multiple IETF protocols, including MLS and TLS Encrypted Client Hello.
Envelope encryption is an example of a usage of hybrid cryptosystems incloud computing. In a cloud context, hybrid cryptosystems also enable centralizedkey management.[9][10]
To encrypt a message addressed to Alice in a hybrid cryptosystem, Bob obtains Alice's public key, generates a fresh symmetric key for the data encapsulation scheme, encrypts the message under that symmetric key, encrypts the symmetric key under Alice's public key using the key encapsulation scheme, and sends both encryptions to Alice.
To decrypt this hybrid ciphertext, Alice uses her private key to decrypt the symmetric key, and then uses the recovered symmetric key to decrypt the message.
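A minimal sketch of this flow using the third-party Python `cryptography` package, with RSA-OAEP standing in for the key encapsulation and AES-GCM for the data encapsulation (one possible instantiation chosen for illustration, not the only valid one):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice's key pair doubles as the hybrid scheme's public/private key pair.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

# --- Bob encrypts ---
dek = AESGCM.generate_key(bit_length=256)                           # fresh symmetric key
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"attack at dawn", None)    # data encapsulation
wrapped_dek = alice_public.encrypt(dek, oaep)                       # key encapsulation
hybrid_ciphertext = (wrapped_dek, nonce, ciphertext)

# --- Alice decrypts ---
wrapped_dek, nonce, ciphertext = hybrid_ciphertext
dek = alice_private.decrypt(wrapped_dek, oaep)
print(AESGCM(dek).decrypt(nonce, ciphertext, None))                 # b'attack at dawn'
```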
If both the key encapsulation and data encapsulation schemes in a hybrid cryptosystem are secure againstadaptive chosen ciphertext attacks, then the hybrid scheme inherits that property as well.[4]However, it is possible to construct a hybrid scheme secure against adaptive chosen ciphertext attacks even if the key encapsulation has a slightly weakened security definition (though the security of the data encapsulation must be slightly stronger).[12]
Envelope encryption is the term used for encryption with a hybrid cryptosystem as employed by all major cloud service providers,[9] often as part of a centralized key management system in cloud computing.[13]
Envelope encryption gives names to the keys used in hybrid encryption: Data Encryption Keys (abbreviated DEK, and used to encrypt data) and Key Encryption Keys (abbreviated KEK, and used to encrypt the DEKs). In a cloud environment, encryption with envelope encryption involves generating a DEK locally, encrypting one's data using the DEK, and then issuing a request to wrap (encrypt) the DEK with a KEK stored in a potentially more secureservice. Then, this wrapped DEK and encrypted message constitute aciphertextfor the scheme. To decrypt a ciphertext, the wrapped DEK is unwrapped (decrypted) via a call to a service, and then the unwrapped DEK is used to decrypt the encrypted message.[10]In addition to the normal advantages of a hybrid cryptosystem, using asymmetric encryption for the KEK in a cloud context provides easier key management and separation of roles, but can be slower.[13]
In cloud systems, such asGoogle Cloud PlatformandAmazon Web Services, a key management system (KMS) can be available as a service.[13][10][14]In some cases, the key management system will store keys inhardware security modules, which are hardware systems that protect keys with hardware features like intrusion resistance.[15]This means that KEKs can also be more secure because they are stored on secure specialized hardware.[13]Envelope encryption makes centralized key management easier because a centralized key management system only needs to store KEKs, which occupy less space, and requests to the KMS only involve sending wrapped and unwrapped DEKs, which use less bandwidth than transmitting entire messages. Since one KEK can be used to encrypt many DEKs, this also allows for less storage space to be used in the KMS. This also allows for centralized auditing and access control at one point of access.[10]
|
https://en.wikipedia.org/wiki/Hybrid_cryptosystem
|
Areview siteis awebsiteon which reviews can be posted about people, businesses, products, or services. These sites may useWeb 2.0techniques to gather reviews from site users or may employ professional writers to author reviews on the topic of concern for the site.
Early examples of review sites included ConsumerDemocracy.com, Complaints.com, planetfeedback.com,[1]Epinions.com[2]andThatGuyWithTheGlasses.com(later rebranded to Channel Awesome in 2014).[3]
Review sites are generally supported by advertising. Some business review sites may also allow businesses to pay for enhanced listings, which do not affect the reviews and ratings. Product review sites may be supported by providingaffiliatelinks to the websites that sell the reviewed items, which pay the site on a per-click or per-sale basis.
With the growing popularity of affiliate programs on theInternet, a new sort of review site has emerged: the affiliate product review site. This type of site is usually professionally designed and written to maximize conversions, and is used by e-commerce marketers. It is often based on ablogplatform likeWordPressorSquarespace, has a privacy and contact page to help withSEO, and has commenting and interactivity turned off. It will also have an e-mail gathering device in the form of anopt-in, ordrop-down listto help the aspiringe-commercebusiness person build ane-mail listto market to.
Because of the specialized marketing thrust of this type of website, the reviews are not always seen to be objective by consumers. Because of this, the FTC has provided several guidelines requiring publishers to disclose when they benefit monetarily from the content in the form of advertising, affiliate marketing, etc.[4]
Studies by independent research groups show that rating and review sites influence consumer shopping behavior.[citation needed]In an academic study published in 2008, empirical results demonstrated that the number of online user reviews is a good indicator of the intensity of underlying word-of-mouth effect and increase awareness.[5]
Originally, reviews were generally anonymous, and in many countries, review sites often have policies that preclude the release of any identifying information without a court order. According to Kurt Opsahl, a staff attorney for theElectronic Frontier Foundation(EFF), anonymity of reviewers is important.[6]
Reviewers are always required to provide an email address and are often encouraged to use their real name. Yelp also requires a photo of the reviewer.[7]
Arating site(commonly known as arate-me site) is awebsitedesigned forusersto vote, rate people,content, or other things. Rating sites can range from tangible to non-tangible attributes, but most commonly, rating sites are based around physical appearances such as body parts, voice, personality, etc. They may also be devoted to the subjects' occupational ability, for example teachers, professors, lawyers, doctors, etc. Rating sites can typically be on anything a user can think of.[8]
Rating sites typically show a series of images (or other content) in random fashion, or chosen by computer algorithm, instead of allowing users to choose. Users are given a choice of rating or assessment, which is generally done quickly and without great deliberation. Users score items on a scale of 1 to 10, or with a simple yes or no. Others, such as BabeVsBabe.com, ask users to choose between two pictures. Typically, the site gives instant feedback in terms of the item's running score, or the percentage of other users who agree with the assessment. Rating sites sometimes offer aggregate statistics or "best" and "worst" lists. Most allow users to submit their own image, sample, or other relevant content for others to rate. Some require the submission as a condition of membership.
Rating sites usually provide some features ofsocial network servicesandonline communitiessuch asdiscussion forumsmessaging, andprivate messaging. Some function as a form ofdating service, in that for a fee they allow users to contact other users. Many social networks and other sites include rating features. For example,MySpaceand TradePics have optional "rank" features for users to be rated by other users.
One category of rating sites, such asHot or Notor HotFlation, is devoted to rating contributors' physical attractiveness. Other looks-based rating sites include RateMyFace.com (an early site, launched in the Summer of 1999) and NameMyVote, which asks users to guess a person's political party based on their looks. Some sites are devoted to rating the appearance of pets (e.g. kittenwar.com, petsinclothes.com, and meormypet.com). Another class allows users to rate short video or music clips. One variant, a "Darwinian poetry" site, allows users to compare two samples of entirely computer-generated poetry using aCondorcet method. Successful poems "mate" to produce poems of ever-increasing appeal. Yet others are devoted to disliked men (DoucheBagAlert),bowel movements(ratemypoo.com), unsigned bands (RateBandsOnline.com), politics (RateMyTory.Com), nightclubs, business professionals, clothes, cars, and many other subjects.
When rating sites are dedicated to rating products (epinions.com), brands (brandmojo.org), services, or businesses rather than to rating people (i-rate.me), and are used for more serious or well thought-out ratings, they tend to be called review sites, although the distinction is not exact.
The popularity of rating people and their abilities on a scale, such as 1–10, traces back to at least the late 20th century, and the algorithms for aggregating quantitative rating scores far earlier than that. The 1979 film10is an example of this. The title derives from a rating systemDudley Mooreuses to grade women based uponbeauty, with a 10 being the epitome of attractiveness. The notion of a "perfect ten" came into common usage as a result of this film.[citation needed]In the film, Moore ratesBo Derekan "11".
In 1990, one of the first computer-based photographic attractiveness rating studies was conducted. During this year psychologists J. H. Langlois and L. A. Roggman examined whether facial attractiveness was linked to geometric averageness. To test their hypothesis, they selected photographs of 192 male and female Caucasian faces; each of which was computer scanned and digitized. They then made computer-processed composites of each image, as 2-, 4-, 8-, 16-, and 32-face composites. The individual and composite faces were then rated for attractiveness by 300 judges on a 5-pointLikert scale(1 = very unattractive, 5 = very attractive). The 32-composite face was the most visually attractive of all the faces.[9]Subsequent studies were done on a 10-point scale.
In 1992,Perfect 10magazine and video programming was launched by Xui, the original executive editor ofSpinmagazine, to feature only women who would rank 10 for attractiveness. Julie Kruis, a swimsuit model, was the originalspokesmodel. In 1996, Rasen created the first "Perfect 10Model Search" at the Pure Platinum club nearFort Lauderdale, Florida. His contests were broadcast on Network 1, a domesticC-bandsatellite channel. Other unrelated "Perfect 10" contests became popular throughout the 1990s.
The first ratings sites started in 1999, with RateMyFace.com (created by Michael Hussey) and TeacherRatings.com (created by John Swapceinski, re-launched with Hussey and further developed by Patrick Nagle asRateMyProfessors). The most popular of all time, Hot or Not, was launched in October 2000. Hot or Not generated many spin-offs and imitators. There are now hundreds of such sites, and even meta-sites that categorize them all. The rating site concept has also been expanded to include Twitter and Facebook accounts that provide ratings, such as the humorous Twitter accountWeRateDogs.
Most review sites make little or no attempt to restrict postings, or to verify the information in the reviews. Critics point out that positive reviews are sometimes written by the businesses or individuals being reviewed, while negative reviews may be written by competitors, disgruntled employees, or anyone with a grudge against the business being reviewed. Some merchants also offer incentives for customers to review their products favorably, which skews reviews in their favor.[10] So-called reputation management firms may also submit false positive reviews on behalf of businesses. In 2011, RateMDs.com and Yelp detected dozens of positive reviews of doctors, submitted from the same IP addresses by a firm called Medical Justice.[11]
Furthermore, studies of research methodology have shown that in forums where people are able to post opinions publicly, group polarization often occurs, and the result is very positive comments, very negative comments, and little in between, meaning that those who would have been in the middle are either silent or pulled to one extreme or the other.[12]
Rating sites have a social feedback effect; some high school principals and administrators, for example, have begun to regularly monitor the status of their teaching staff via student controlled "rating sites". Some looks-based sites have come under criticism for promoting vanity and self-consciousness. Some claim they potentially expose users tosexual predators.
Most rating sites suffer from similarself-selection biassince only highly motivated individuals devote their time to completing these rankings, and not a fair sampling of the population.
Many operators of review sites acknowledge that reviews may not be objective, and that ratings may not be statistically valid.
In some cases, government authorities have taken legal actions against businesses that post false reviews. In 2009, the State of New York required Lifestyle Lift, a cosmetic surgery company, to pay $300,000 in fines.[13]
|
https://en.wikipedia.org/wiki/Rating_site
|
Acellular networkormobile networkis atelecommunications networkwhere the link to and from end nodes iswirelessand the network is distributed over land areas calledcells, each served by at least one fixed-locationtransceiver(such as abase station). These base stations provide the cell with the network coverage which can be used for transmission of voice, data, and other types of content viaradio waves. Each cell's coverage area is determined by factors such as the power of the transceiver, the terrain, and the frequency band being used. A cell typically uses a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell.[1][2]
When joined together, these cells provide radio coverage over a wide geographic area. This enables numerousdevices, includingmobile phones,tablets,laptopsequipped withmobile broadband modems, andwearable devicessuch assmartwatches, to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the devices are moving through more than one cell during transmission. The design of cellular networks allows for seamlesshandover, enabling uninterrupted communication when a device moves from one cell to another.
Modern cellular networks utilize advanced technologies such asMultiple Input Multiple Output(MIMO),beamforming, and small cells to enhance network capacity and efficiency.
Cellular networks offer a number of desirable features:[2]
Major telecommunications providers have deployed voice and data cellular networks over most of the inhabited land area ofEarth. This allows mobile phones and other devices to be connected to thepublic switched telephone networkand publicInternet access. In addition to traditional voice and data services, cellular networks now supportInternet of Things(IoT) applications, connecting devices such assmart meters, vehicles, and industrial sensors.
The evolution of cellular networks from1Gto5Ghas progressively introduced faster speeds, lower latency, and support for a larger number of devices, enabling advanced applications in fields such as healthcare, transportation, andsmart cities.
Private cellular networks can be used for research[3]or for large organizations and fleets, such as dispatch for local public safety agencies or a taxicab company, as well as for local wireless communications in enterprise and industrial settings such as factories, warehouses, mines, power plants, substations, oil and gas facilities and ports.[4]
In a cellular radio system, a land area to be supplied with radio service is divided into cells in a pattern dependent on terrain and reception characteristics. These cell patterns roughly take the form of regular shapes, such as hexagons, squares, or circles, although hexagonal cells are conventional. Each of these cells is assigned multiple frequencies (f1–f6) which have corresponding radio base stations. The group of frequencies can be reused in other cells, provided that the same frequencies are not reused in adjacent cells, which would cause co-channel interference.
The increasedcapacityin a cellular network, compared with a network with a single transmitter, comes from the mobile communication switching system developed byAmos Joelof Bell Labs[5]that permitted multiple callers in a given area to use the same frequency by switching calls to the nearest available cellular tower having that frequency available. This strategy is viable because a given radio frequency can be reused in a different area for an unrelated transmission. In contrast, a single transmitter can only handle one transmission for a given frequency. Inevitably, there is some level ofinterferencefrom the signal from the other cells which use the same frequency. Consequently, there must be at least one cell gap between cells which reuse the same frequency in a standardfrequency-division multiple access(FDMA) system.
Consider the case of a taxi company, where each radio has a manually operated channel selector knob to tune to different frequencies. As drivers move around, they change from channel to channel. The drivers are aware of whichfrequencyapproximately covers some area. When they do not receive a signal from the transmitter, they try other channels until finding one that works. The taxi drivers only speak one at a time when invited by the base station operator. This is a form oftime-division multiple access(TDMA).
The idea to establish a standard cellular phone network was first proposed on December 11, 1947. This proposal was put forward byDouglas H. Ring, aBell Labsengineer, in an internal memo suggesting the development of a cellular telephone system byAT&T.[6][7]
The first commercial cellular network, the1Ggeneration, was launched in Japan byNippon Telegraph and Telephone(NTT) in 1979, initially in the metropolitan area ofTokyo. However, NTT did not initially commercialize the system; the early launch was motivated by an effort to understand a practical cellular system rather than by an interest to profit from it.[8][9]In 1981, theNordic Mobile Telephonesystem was created as the first network to cover an entire country. The network was released in 1981 in Sweden and Norway, then in early 1982 in Finland and Denmark.Televerket, a state-owned corporation responsible for telecommunications in Sweden, launched the system.[8][10][11]
In September 1981,Jan Stenbeck, a financier and businessman, launchedComvik, a new Swedish telecommunications company. Comvik was the first European telecommunications firm to challenge the state's telephone monopoly on the industry.[12][13][14]According to some sources, Comvik was the first to launch a commercial automatic cellular system before Televerket launched its own in October 1981. However, at the time of the new network’s release, theSwedish Post and Telecom Authoritythreatened to shut down the system after claiming that the company had used an unlicensed automatic gear that could interfere with its own networks.[14][15]In December 1981, Sweden awarded Comvik with a license to operate its own automatic cellular network in the spirit of market competition.[14][15][16]
TheBell Systemhad developed cellular technology since 1947, and had cellular networks in operation inChicago, Illinois,[17]andDallas, Texas, prior to 1979; however, regulatory battles delayed AT&T's deployment of cellular service to 1983,[18]when itsRegional Holding CompanyIllinois Bellfirst provided cellular service.[19]
First-generation cellular network technology continued to expand its reach to the rest of the world. In 1990,Millicom Inc., a telecommunications service provider, strategically partnered with Comvik’s international cellular operations to become Millicom International Cellular SA.[20]The company went on to establish a 1G systems foothold in Ghana, Africa under the brand name Mobitel.[21]In 2006, the company’s Ghana operations were renamed to Tigo.[22]
Thewireless revolutionbegan in the early 1990s,[23][24][25]leading to the transition from analog todigital networks.[26]The MOSFET invented atBell Labsbetween 1955 and 1960,[27][28][29][30][31]was adapted for cellular networks by the early 1990s, with the wide adoption ofpower MOSFET,LDMOS(RF amplifier), andRF CMOS(RF circuit) devices leading to the development and proliferation of digital wireless mobile networks.[26][32][33]
The first commercial digital cellular network, the2Ggeneration, was launched in 1991. This sparked competition in the sector as the new operators challenged the incumbent 1G analog network operators.
To distinguish signals from several different transmitters, a number ofchannel access methodshave been developed, includingfrequency-division multiple access(FDMA, used by analog andD-AMPS[citation needed]systems),time-division multiple access(TDMA, used byGSM) andcode-division multiple access(CDMA, first used forPCS, and the basis of3G).[2]
With FDMA, the transmitting and receiving frequencies used by different users in each cell are different from each other. Each cellular call was assigned a pair of frequencies (one for base to mobile, the other for mobile to base) to providefull-duplexoperation. The originalAMPSsystems had 666 channel pairs, 333 each for theCLEC"A" system andILEC"B" system. The number of channels was expanded to 416 pairs per carrier, but ultimately the number of RF channels limits the number of calls that a cell site could handle. FDMA is a familiar technology to telephone companies, which usedfrequency-division multiplexingto add channels to their point-to-point wireline plants beforetime-division multiplexingrendered FDM obsolete.
With TDMA, the transmitting and receiving time slots used by different users in each cell are different from each other. TDMA typically uses digital signaling to store and forward bursts of voice data that are fit into time slices for transmission, and expanded at the receiving end to produce a somewhat normal-sounding voice at the receiver. TDMA must introduce latency (time delay) into the audio signal. As long as the latency time is short enough that the delayed audio is not heard as an echo, it is not problematic. TDMA is a familiar technology for telephone companies, which used time-division multiplexing to add channels to their point-to-point wireline plants before packet switching rendered TDM obsolete.
The principle of CDMA is based onspread spectrumtechnology developed for military use duringWorld War IIand improved during theCold Warintodirect-sequence spread spectrumthat was used for early CDMA cellular systems andWi-Fi. DSSS allows multiple simultaneous phone conversations to take place on a single wideband RF channel, without needing to channelize them in time or frequency. Although more sophisticated than older multiple access schemes (and unfamiliar to legacy telephone companies because it was not developed byBell Labs), CDMA has scaled well to become the basis for 3G cellular radio systems.
Other available methods of multiplexing such asMIMO, a more sophisticated version ofantenna diversity, combined with activebeamformingprovides much greaterspatial multiplexingability compared to original AMPS cells, that typically only addressed one to three unique spaces. Massive MIMO deployment allows much greater channel reuse, thus increasing the number of subscribers per cell site, greater data throughput per user, or some combination thereof.Quadrature Amplitude Modulation(QAM) modems offer an increasing number of bits per symbol, allowing more users per megahertz of bandwidth (and decibels of SNR), greater data throughput per user, or some combination thereof.
The key characteristic of a cellular network is the ability to reuse frequencies to increase both coverage and capacity. As described above, adjacent cells must use different frequencies, however, there is no problem with two cells sufficiently far apart operating on the same frequency, provided the masts and cellular network users' equipment do not transmit with too much power.[2]
The elements that determine frequency reuse are the reuse distance and the reuse factor. The reuse distance, D, is calculated as D = R√(3N), where R is the cell radius and N is the number of cells per cluster. Cells may vary in radius from 1 to 30 kilometres (0.62 to 18.64 mi). The boundaries of the cells can also overlap between adjacent cells, and large cells can be divided into smaller cells.[34]
The frequency reuse factor is the rate at which the same frequency can be used in the network. It is 1/K (or K, according to some books), where K is the number of cells which cannot use the same frequencies for transmission. Common values for the frequency reuse factor are 1/3, 1/4, 1/7, 1/9 and 1/12 (or 3, 4, 7, 9 and 12, depending on notation).[35]
In the case of N sector antennas on the same base station site, each pointing in a different direction, the base station site can serve N different sectors. N is typically 3. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site. Some current and historical reuse patterns are 3/7 (North American AMPS), 6/4 (Motorola NAMPS), and 3/4 (GSM).
If the total available bandwidth is B, each cell can only use a number of frequency channels corresponding to a bandwidth of B/K, and each sector can use a bandwidth of B/(NK).
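Plugging illustrative numbers into these relations (the values below are assumptions, not taken from the text):

```python
import math

R = 2.0           # cell radius in km (assumed)
N_cluster = 7     # cells per cluster (the N of the reuse-distance formula)
K = 7             # cells that cannot share the same frequencies
sectors = 3       # directional sector antennas per site (the N of the B/NK split)
B = 25e6          # total available bandwidth in Hz (assumed)

D = R * math.sqrt(3 * N_cluster)                 # reuse distance
print(f"reuse distance D = {D:.2f} km")
print(f"bandwidth per cell   = {B / K / 1e6:.2f} MHz")
print(f"bandwidth per sector = {B / (sectors * K) / 1e6:.2f} MHz")
```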
Code-division multiple access-based systems use a wider frequency band to achieve the same rate of transmission as FDMA, but this is compensated for by the ability to use a frequency reuse factor of 1, for example using a reuse pattern of 1/1. In other words, adjacent base station sites use the same frequencies, and the different base stations and users are separated by codes rather than frequencies. WhileNis shown as 1 in this example, that does not mean the CDMA cell has only one sector, but rather that the entire cell bandwidth is also available to each sector individually.
Recently, orthogonal frequency-division multiple access based systems such as LTE are also being deployed with a frequency reuse of 1. Since such systems do not spread the signal across the frequency band, inter-cell radio resource management is important to coordinate resource allocation between different cell sites and to limit the inter-cell interference. There are various means of inter-cell interference coordination (ICIC) already defined in the standard.[36] Coordinated scheduling, multi-site MIMO or multi-site beamforming are other examples of inter-cell radio resource management that might be standardized in the future.
Cell towers frequently use adirectional signalto improve reception in higher-traffic areas. In theUnited States, theFederal Communications Commission(FCC) limits omnidirectional cell tower signals to 100 watts of power. If the tower has directional antennas, the FCC allows the cell operator to emit up to 500 watts ofeffective radiated power(ERP).[37]
Although the original cell towers were placed at the centers of the cells and created an even, omnidirectional signal, a cellular map can be redrawn with the cellular telephone towers located at the corners of the hexagons where three cells converge.[38] Each tower has three sets of directional antennas aimed in three different directions, with 120 degrees for each cell (totaling 360 degrees), receiving and transmitting into three different cells at different frequencies. This provides a minimum of three channels, and three towers for each cell, and greatly increases the chances of receiving a usable signal from at least one direction.
The numbers in the illustration are channel numbers, which repeat every 3 cells. Large cells can be subdivided into smaller cells for high volume areas.[39]
Cell phone companies also use this directional signal to improve reception along highways and inside buildings like stadiums and arenas.[37]
Practically every cellular system has some kind of broadcast mechanism. This can be used directly for distributing information to multiple mobiles. Commonly, for example inmobile telephonysystems, the most important use of broadcast information is to set up channels for one-to-one communication between the mobile transceiver and the base station. This is calledpaging. The three different paging procedures generally adopted are sequential, parallel and selective paging.
The details of the process of paging vary somewhat from network to network, but normally the network knows only a limited number of cells in which the phone may be located (this group of cells is called a Location Area in the GSM or UMTS system, or a Routing Area if a data packet session is involved; in LTE, cells are grouped into Tracking Areas). Paging takes place by sending the broadcast message to all of those cells. Paging messages can be used for information transfer. This happens in pagers, in CDMA systems for sending SMS messages, and in the UMTS system, where it allows for low downlink latency in packet-based connections.
In LTE/4G, the paging procedure is initiated by the MME (Mobility Management Entity) when data packets need to be delivered to the UE (user equipment).
Paging types supported by the MME are:
In a primitive taxi system, when the taxi moved away from a first tower and closer to a second tower, the taxi driver manually switched from one frequency to another as needed. If communication was interrupted due to a loss of a signal, the taxi driver asked the base station operator to repeat the message on a different frequency.
In a cellular system, as the distributed mobile transceivers move from cell to cell during an ongoing continuous communication, switching from one cell frequency to a different cell frequency is done electronically without interruption and without a base station operator or manual switching. This is called thehandoveror handoff. Typically, a new channel is automatically selected for the mobile unit on the new base station which will serve it. The mobile unit then automatically switches from the current channel to the new channel and communication continues.
The exact details of the mobile system's move from one base station to the other vary considerably from system to system (see the example below for how a mobile phone network manages handover).
The most common example of a cellular network is a mobile phone (cell phone) network. Amobile phoneis a portable telephone which receives or makes calls through acell site(base station) or transmitting tower.Radio wavesare used to transfer signals to and from the cell phone.
Modern mobile phone networks use cells because radio frequencies are a limited, shared resource. Cell-sites and handsets change frequency under computer control and use low power transmitters so that the usually limited number of radio frequencies can be simultaneously used by many callers with less interference.
A cellular network is used by themobile phone operatorto achieve both coverage and capacity for their subscribers. Large geographic areas are split into smaller cells to avoid line-of-sight signal loss and to support a large number of active phones in that area. All of the cell sites are connected totelephone exchanges(or switches), which in turn connect to thepublic telephone network.
In cities, each cell site may have a range of up to approximately 1⁄2 mile (0.80 km), while in rural areas, the range could be as much as 5 miles (8.0 km). It is possible that in clear open areas, a user may receive signals from a cell site 25 miles (40 km) away. In rural areas with low-band coverage and tall towers, basic voice and messaging service may reach 50 miles (80 km), with limitations on bandwidth and number of simultaneous calls.[citation needed]
Since almost all mobile phones usecellular technology, includingGSM,CDMA, andAMPS(analog), the term "cell phone" is in some regions, notably the US, used interchangeably with "mobile phone". However,satellite phonesare mobile phones that do not communicate directly with a ground-based cellular tower but may do so indirectly by way of a satellite.
There are a number of different digital cellular technologies, including:Global System for Mobile Communications(GSM),General Packet Radio Service(GPRS),cdmaOne,CDMA2000,Evolution-Data Optimized(EV-DO),Enhanced Data Rates for GSM Evolution(EDGE),Universal Mobile Telecommunications System(UMTS),Digital Enhanced Cordless Telecommunications(DECT),Digital AMPS(IS-136/TDMA), andIntegrated Digital Enhanced Network(iDEN). The transition from existing analog to the digital standard followed a very different path in Europe and theUS.[40]As a consequence, multiple digital standards surfaced in the US, whileEuropeand many countries converged towards theGSMstandard.
A simple view of the cellular mobile-radio network consists of a network of radio base stations forming the base station subsystem, the core circuit-switched network for handling voice calls and text, a packet-switched network for handling mobile data, and the public switched telephone network connecting subscribers to the wider telephony network.
This network is the foundation of theGSMsystem network. There are many functions that are performed by this network in order to make sure customers get the desired service including mobility management, registration, call set-up, andhandover.
Any phone connects to the network via an RBS (Radio Base Station) at a corner of the corresponding cell which in turn connects to theMobile switching center(MSC). The MSC provides a connection to thepublic switched telephone network(PSTN). The link from a phone to the RBS is called anuplinkwhile the other way is termeddownlink.
Radio channels effectively use the transmission medium through the use of the following multiplexing and access schemes:frequency-division multiple access(FDMA),time-division multiple access(TDMA),code-division multiple access(CDMA), andspace-division multiple access(SDMA).
Small cells, which have a smaller coverage area than base stations, are categorised as microcells, picocells, and femtocells.
As the phone user moves from one cell area to another cell while a call is in progress, the mobile station will search for a new channel to attach to in order not to drop the call. Once a new channel is found, the network will command the mobile unit to switch to the new channel and at the same time switch the call onto the new channel.
WithCDMA, multiple CDMA handsets share a specific radio channel. The signals are separated by using apseudonoisecode (PN code) that is specific to each phone. As the user moves from one cell to another, the handset sets up radio links with multiple cell sites (or sectors of the same site) simultaneously. This is known as "soft handoff" because, unlike with traditionalcellular technology, there is no one defined point where the phone switches to the new cell.
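A toy Python sketch can illustrate the principle of code separation; here two short, perfectly orthogonal ±1 codes stand in for the much longer pseudonoise sequences used in practice, and the bit values are made up.

import numpy as np

# Two short orthogonal +/-1 codes, one per handset (stand-ins for long PN sequences).
code_a = np.array([1, 1, 1, 1, -1, -1, -1, -1])
code_b = np.array([1, -1, 1, -1, 1, -1, 1, -1])

bits_a = np.array([1, -1, 1])     # user A's data bits
bits_b = np.array([-1, -1, 1])    # user B's data bits

# Each bit is spread over a full code; both spread signals share the same channel.
signal = np.concatenate([b * code_a for b in bits_a]) + \
         np.concatenate([b * code_b for b in bits_b])

def despread(signal, code):
    # Correlate the composite signal with one code, one bit period at a time.
    chunks = signal.reshape(-1, len(code))
    return np.sign(chunks @ code)

print(despread(signal, code_a))   # [ 1. -1.  1.]  user A's bits recovered
print(despread(signal, code_b))   # [-1. -1.  1.]  user B's bits recovered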
In IS-95 inter-frequency handovers and in older analog systems such as NMT, it is typically impossible to test the target channel directly while communicating. In this case, other techniques have to be used, such as pilot beacons in IS-95. This means that there is almost always a brief break in the communication while searching for the new channel, followed by the risk of an unexpected return to the old channel.
If there is no ongoing communication or the communication can be interrupted, it is possible for the mobile unit to spontaneously move from one cell to another and then notify the base station with the strongest signal.
The effect of frequency on cell coverage means that different frequencies serve better for different uses. Low frequencies, such as 450 MHz NMT, serve very well for countryside coverage.GSM900 (900 MHz) is suitable for light urban coverage.GSM1800 (1.8 GHz) starts to be limited by structural walls.UMTS, at 2.1 GHz is quite similar in coverage toGSM1800.
Higher frequencies are a disadvantage when it comes to coverage, but it is a decided advantage when it comes to capacity. Picocells, covering e.g. one floor of a building, become possible, and the same frequency can be used for cells which are practically neighbors.
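Free-space propagation alone already shows the trend: quadrupling the carrier frequency costs about 12 dB of link budget. The sketch below is idealized (it ignores walls, terrain, and antenna gains) and simply evaluates the standard free-space path-loss formula at the frequencies mentioned above.

import math

def free_space_path_loss_db(distance_km, frequency_mhz):
    # Free-space path loss in dB, with distance in km and frequency in MHz.
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

for f_mhz in (450, 900, 1800, 2100):
    print(f_mhz, round(free_space_path_loss_db(5, f_mhz), 1))
# 450 ~99.5 dB, 900 ~105.5 dB, 1800 ~111.5 dB, 2100 ~112.9 dB over the same 5 km path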
Cell service area may also vary due to interference from transmitting systems, both within and around that cell. This is especially true in CDMA-based systems. The receiver requires a certain signal-to-noise ratio, and the transmitter should not send with too high a transmission power so as not to cause interference with other transmitters. As the receiver moves away from the transmitter, the power received decreases, so the power control algorithm of the transmitter increases the power it transmits to restore the level of received power. As the interference (noise) rises above the received power from the transmitter, and the power of the transmitter cannot be increased any more, the signal becomes corrupted and eventually unusable. In CDMA-based systems, the effect of interference from other mobile transmitters in the same cell on coverage area is very marked and has a special name, cell breathing.
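A minimal sketch of one closed-loop power control step, with hypothetical target, step, and maximum-power values, might look like the following; once the transmitter is pinned at its maximum and interference keeps rising, the link degrades as described above.

def power_control_step(tx_power_dbm, received_power_dbm, target_dbm=-100.0,
                       step_db=1.0, max_tx_dbm=23.0, min_tx_dbm=-50.0):
    # One iteration of a simple closed-loop power control: step the transmit power up
    # when the receiver is below target, down when above, never beyond hardware limits.
    if received_power_dbm < target_dbm:
        return min(tx_power_dbm + step_db, max_tx_dbm)
    return max(tx_power_dbm - step_db, min_tx_dbm)

print(power_control_step(10.0, -104.0))   # 11.0: received power below target, step up
print(power_control_step(23.0, -104.0))   # 23.0: already at maximum, quality must degrade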
One can see examples of cell coverage by studying some of the coverage maps provided by real operators on their web sites or by looking at independently crowdsourced maps such asOpensignalorCellMapper. In certain cases they may mark the site of the transmitter; in others, it can be calculated by working out the point of strongest coverage.
A cellular repeater is used to extend cell coverage into larger areas; such repeaters range from wideband repeaters for consumer use in homes and offices to smart or digital repeaters for industrial needs.
The following table shows the dependency of the coverage area of one cell on the frequency of aCDMA2000network:[41]
|
https://en.wikipedia.org/wiki/Frequency_reuse
|
In linear algebra, the identity matrix of size n{\displaystyle n} is the n×n{\displaystyle n\times n} square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties; for example, when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
The identity matrix is often denoted byIn{\displaystyle I_{n}}, or simply byI{\displaystyle I}if the size is immaterial or can be trivially determined by the context.[1]
I1=[1],I2=[1001],I3=[100010001],…,In=[100⋯0010⋯0001⋯0⋮⋮⋮⋱⋮000⋯1].{\displaystyle I_{1}={\begin{bmatrix}1\end{bmatrix}},\ I_{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\ I_{3}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}},\ \dots ,\ I_{n}={\begin{bmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\0&0&1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &1\end{bmatrix}}.}
The termunit matrixhas also been widely used,[2][3][4][5]but the termidentity matrixis now standard.[6]The termunit matrixis ambiguous, because it is also used for amatrix of onesand for anyunitof thering of alln×n{\displaystyle n\times n}matrices.[7]
In some fields, such asgroup theoryorquantum mechanics, the identity matrix is sometimes denoted by a boldface one,1{\displaystyle \mathbf {1} }, or called "id" (short for identity). Less frequently, some mathematics books useU{\displaystyle U}orE{\displaystyle E}to represent the identity matrix, standing for "unit matrix"[2]and the German wordEinheitsmatrixrespectively.[8]
In terms of a notation that is sometimes used to concisely describediagonal matrices, the identity matrix can be written asIn=diag(1,1,…,1).{\displaystyle I_{n}=\operatorname {diag} (1,1,\dots ,1).}The identity matrix can also be written using theKronecker deltanotation:[8](In)ij=δij.{\displaystyle (I_{n})_{ij}=\delta _{ij}.}
WhenA{\displaystyle A}is anm×n{\displaystyle m\times n}matrix, it is a property ofmatrix multiplicationthatImA=AIn=A.{\displaystyle I_{m}A=AI_{n}=A.}In particular, the identity matrix serves as themultiplicative identityof thematrix ringof alln×n{\displaystyle n\times n}matrices, and as theidentity elementof thegeneral linear groupGL(n){\displaystyle GL(n)}, which consists of allinvertiblen×n{\displaystyle n\times n}matrices under the matrix multiplication operation. In particular, the identity matrix is invertible. It is aninvolutory matrix, equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other.
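These identities are easy to check numerically; the following NumPy snippet is illustrative only, with an arbitrary 2×3 matrix A.

import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [4.0,  3.0, 5.0]])          # an arbitrary 2 x 3 matrix

I2, I3 = np.eye(2), np.eye(3)             # identity matrices of sizes 2 and 3

print(np.allclose(I2 @ A, A))             # True: left multiplication leaves A unchanged
print(np.allclose(A @ I3, A))             # True: right multiplication leaves A unchanged
print(np.allclose(np.linalg.inv(I3), I3)) # True: the identity is its own inverse
print(np.linalg.det(I3), np.trace(I3))    # 1.0 and 3.0, as stated below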
Whenn×n{\displaystyle n\times n}matrices are used to representlinear transformationsfrom ann{\displaystyle n}-dimensional vector space to itself, the identity matrixIn{\displaystyle I_{n}}represents theidentity function, for whateverbasiswas used in this representation.
Thei{\displaystyle i}th column of an identity matrix is theunit vectorei{\displaystyle e_{i}}, a vector whosei{\displaystyle i}th entry is 1 and 0 elsewhere. Thedeterminantof the identity matrix is 1, and itstraceisn{\displaystyle n}.
The identity matrix is the only idempotent matrix with non-zero determinant. That is, it is the only matrix such that, when multiplied by itself, the result is itself, and all of its rows and columns are linearly independent.
Theprincipal square rootof an identity matrix is itself, and this is its onlypositive-definitesquare root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots.[9]
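For example, for n = 2 every matrix of the form S(θ)=(cos⁡θsin⁡θsin⁡θ−cos⁡θ){\displaystyle S(\theta )={\begin{pmatrix}\cos \theta &\sin \theta \\\sin \theta &-\cos \theta \end{pmatrix}}} is symmetric and satisfies S(θ)2=I2{\displaystyle S(\theta )^{2}=I_{2}}, since the diagonal entries of the square are cos2⁡θ+sin2⁡θ=1{\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1} and the off-diagonal entries cancel; varying θ gives infinitely many distinct symmetric square roots, none of which (apart from I2{\displaystyle I_{2}} itself) is positive-definite.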
Therankof an identity matrixIn{\displaystyle I_{n}}equals the sizen{\displaystyle n}, i.e.:rank(In)=n.{\displaystyle \operatorname {rank} (I_{n})=n.}
|
https://en.wikipedia.org/wiki/Identity_matrix
|
A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm), and sometimes only called "the algorithm",[1] is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user.[2][3][4] Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[2][5] Modern recommendation systems, such as those used on large social media sites, make extensive use of AI, machine learning and related techniques to learn the behavior and preferences of each user and tailor their feed accordingly.[6]
Typically, the suggestions refer to variousdecision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[2]Recommender systems are used in a variety of areas, with commonly recognised examples taking the form ofplaylistgenerators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders.[7][8]These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants andonline dating. Recommender systems have also been developed to explore research articles and experts,[9]collaborators,[10]and financial services.[11]
Acontent discovery platformis an implementedsoftwarerecommendationplatformwhich uses recommender system tools. It utilizes usermetadatain order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content towebsites,mobile devicesandset-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles andacademic journalarticles[12]to television.[13]As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[12]
Recommender systems usually make use of either or bothcollaborative filteringand content-based filtering, as well as other systems such asknowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[14]Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[15]
The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems,Last.fmandPandora Radio.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of thecold startproblem, and is common in collaborative filtering systems.[17][18][19][20][21][22]Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative tosearch algorithmssince they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems have been the focus of several granted patents,[23][24][25][26][27]and there are more than 50 software libraries[28]that support the development of recommender systems including LensKit,[29][30]RecBole,[31]ReChorus[32]and RecPack.[33]
Elaine Richcreated the first recommender system in 1979, called Grundy.[34][35]She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report byJussi Karlgrenat Columbia University,[36]and implemented at scale and worked through in technical reports and publications from 1994 onwards byJussi Karlgren, then atSICS,[37][38]and research groups led byPattie Maesat MIT,[39]Will Hill at Bellcore,[40]andPaul Resnick, also at MIT,[41][5]whose work with GroupLens was awarded the 2010ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective.[42]Adomaviciusprovided a new, alternate overview of recommender systems.[43]Herlocker provides an additional overview of evaluation techniques for recommender systems,[44]andBeelet al. discussed the problems of offline evaluations.[45]Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[46][47]
One approach to the design of recommender systems that has wide use is collaborative filtering.[48] Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm,[49] while that of model-based approaches is matrix factorization.[50]
A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, thek-nearest neighbor(k-NN) approach[51]and thePearson Correlationas first implemented by Allen.[52]
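A minimal, memory-based sketch of this idea in Python (with a made-up rating matrix, and cosine similarity over co-rated items standing in for the Pearson correlation mentioned above) might look like the following; it is illustrative only, not a production algorithm.

import numpy as np

# Hypothetical user-item rating matrix (rows are users, columns are items, 0 = unrated).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def similarity(a, b):
    # Cosine similarity computed over the items both users have rated.
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item, k=2):
    # User-based, memory-based CF: similarity-weighted average of the ratings
    # given to the item by the k most similar users who have rated it.
    neighbours = sorted(((similarity(R[user], R[u]), u)
                         for u in range(len(R)) if u != user and R[u, item] > 0),
                        reverse=True)[:k]
    weights = sum(abs(s) for s, _ in neighbours)
    return sum(s * R[u, item] for s, u in neighbours) / weights if weights else 0.0

print(round(predict(0, 2), 2))   # predicted rating of user 0 for the item it has not rated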
When building a model from a user's behavior, a distinction is often made between explicit andimplicitforms ofdata collection.
Examples of explicit data collection include asking a user to rate an item, to rank a collection of items from favorite to least favorite, or to choose the preferred one of two items. Examples of implicit data collection include observing the items a user views or purchases, analyzing how long a user looks at an item, and keeping a record of the items a user has played, watched, or listened to.
Collaborative filtering approaches often suffer from three problems:cold start, scalability, and sparsity.[54]
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized byAmazon.com's recommender system.[56]
Manysocial networksoriginally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[2]Collaborative filtering is still used as part of hybrid systems.
Another common approach when designing recommender systems iscontent-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[57][58]These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and auser profileis built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots ininformation retrievalandinformation filteringresearch.
To create a user profile, the system mostly focuses on two types of information: a model of the user's preferences, and a history of the user's interactions with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is thetf–idfrepresentation (also called vector space representation).[59]The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such asBayesian Classifiers,cluster analysis,decision trees, andartificial neural networksin order to estimate the probability that the user is going to like the item.[60]
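A small illustration of such a content-based profile, assuming hypothetical item descriptions and liked items and using the scikit-learn tf–idf vectorizer, might look like this:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item descriptions and the items this user has liked so far.
items = ["space opera with interstellar battles",
         "romantic comedy set in Paris",
         "hard science fiction about a Mars colony",
         "slapstick comedy road trip"]
liked = [0, 2]

item_vectors = TfidfVectorizer().fit_transform(items)         # tf-idf item profiles
user_profile = np.asarray(item_vectors[liked].mean(axis=0))   # averaged liked-item vector

scores = cosine_similarity(user_profile, item_vectors).ravel()
ranking = [int(i) for i in np.argsort(-scores) if i not in liked]
print(ranking)   # unseen items ranked by similarity to the user's content profile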
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation/sentiment toward the item. Features extracted from the user-generated reviews are improved metadata of items, because, like metadata, they reflect aspects of the item that matter to users. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.[61]
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[43] Several studies have empirically compared the performance of hybrid methods with the pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.[62]
Netflixis a good example of the use of hybrid recommender systems.[63]The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include weighted, switching, mixed, feature combination, feature augmentation, cascade, and meta-level hybridization.
These recommender systems use the interactions of a user within a session[65]to generate recommendations. Session-based recommender systems are used at YouTube[66]and Amazon.[67]These are particularly useful when history (such as past clicks, purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such asrecurrent neural networks,[65][68]transformers,[69]and other deep-learning-based approaches.[70][71]
The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system, acts in order to receive a reward, for instance, a click or engagement by the user.[66][72][73] One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional learning techniques, which rely on less flexible supervised learning approaches, reinforcement learning recommendation techniques make it possible to train models that can be optimized directly on metrics of engagement and user interest.[74]
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[75]See this chapter[76]for an extended introduction.
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue isDRARS, a system which models the context-aware recommendation as abandit problem. This system combines a content-based technique and a contextual bandit algorithm.[77]
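As a sketch of the bandit framing (a plain epsilon-greedy bandit rather than the contextual bandit used by DRARS, with made-up click-through rates), recommendation can be simulated as repeatedly choosing between exploring a random item and exploiting the item with the best observed feedback:

import random

def epsilon_greedy_recommender(items, true_click_rates, rounds=10000, epsilon=0.1):
    # Toy bandit: each round either explores (random item) or exploits the item
    # with the best observed click-through rate so far; the "user" is simulated.
    clicks = {i: 0 for i in items}
    shows = {i: 0 for i in items}
    for _ in range(rounds):
        if random.random() < epsilon or not any(shows.values()):
            item = random.choice(items)                          # explore
        else:
            item = max(items, key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)  # exploit
        shows[item] += 1
        if random.random() < true_click_rates[item]:             # simulated user feedback
            clicks[item] += 1
    return {i: round(clicks[i] / shows[i], 3) for i in items if shows[i]}

rates = {"news": 0.05, "music": 0.12, "video": 0.08}
print(epsilon_greedy_recommender(list(rates), rates))   # "music" ends up shown most often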
Mobile recommender systems make use of internet-accessingsmartphonesto offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than data that recommender systems often have to deal with. It is heterogeneous, noisy, requires spatial and temporal auto-correlation, and has validation and generality problems.[78]
There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy.[79]Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city.[78] This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
Generative recommenders (GR) represent an approach that transforms recommendation tasks intosequential transductionproblems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[80]high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system’s varied data into a single stream of tokens and using a customself-attentionapproach instead oftraditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previousTransformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations.
One of the events that energized research in recommender systems was theNetflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[81]
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[82]
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place foundedGravity R&D, a recommendation engine that's active in theRecSys community.[81][83]4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on theInternet Movie Database (IMDb).[84]As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and theVideo Privacy Protection Actby releasing the datasets.[85]This, as well as concerns from theFederal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[86]
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure theeffectivenessof recommender systems, and compare different approaches, three types ofevaluationsare available: user studies,online evaluations (A/B tests), and offline evaluations.[45]
The commonly used metrics are themean squared errorandroot mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such asprecision and recallorDCGare useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered as important aspects in evaluation.[87]However, many of the classic evaluation measures are highly criticized.[88]
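A brief Python illustration of two of these metrics, using hypothetical held-out ratings and a hypothetical top-5 recommendation list:

import math

def rmse(predicted, actual):
    # Root mean squared error between predicted and held-out ratings.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_at_k(recommended, relevant, k):
    # Fraction of the top-k recommended items that are actually relevant.
    return sum(1 for item in recommended[:k] if item in relevant) / k

print(round(rmse([3.5, 4.0, 2.0, 5.0, 1.5], [4, 4, 1, 5, 2]), 3))       # 0.548
print(precision_at_k(["a", "b", "c", "d", "e"], {"b", "d", "f"}, k=5))  # 0.4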
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather small scale. A few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and the users then judge which recommendations are best.
In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such asconversion rateorclick-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[89]
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[90][91][92][45] For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.[92][93] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[94] Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[95] This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[90][96] Researchers have concluded that the results of offline evaluations should be viewed critically.[97]
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper that surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), showed that on average less than 40% of articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[110][111][112] More recent work on benchmarking a set of the same methods came to qualitatively very different results,[113] whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions in several recent recommender system challenges, including WSDM[114] and the RecSys Challenge.[115] Moreover, neural and deep learning methods are widely used in industry where they are extensively tested.[116][66][67]

The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[117] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions."[118] As a consequence, much research about recommender systems can be considered as not reproducible.[119] Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used.[120] Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[119] "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. An AI-based recommender can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[121] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods.[122] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such asmachine learning,deep learning, andnatural language processing.[123]These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections will introduce specific AI models utilized by a recommendation system by illustrating their theories and functionalities.[citation needed]
Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions.[124] Specifically, it relies on external feedback such as star ratings, purchasing history and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."
There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors. The idea is to measure the similarity between the target user and every other user, select the k most similar users as neighbors, and predict the target user's preference for an item as a similarity-weighted combination of the neighbors' known preferences for that item.
An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[125] Similar to a human brain, these neurons will change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually designed to be a black-box model. Unlike regular machine learning where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs.
ANN is widely used in recommendation systems for its power to utilize various data. Other than feedback data, ANN can incorporate non-feedback data which are too intricate for collaborative filtering to learn, and the unique structure allows ANN to identify extra signal from non-feedback data to boost user experience.[123]Following are some examples:
The Two-Tower model is a neural architecture[126] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[127] It consists of two neural networks: a user tower that encodes user features and context into an embedding, and an item tower that encodes item (candidate) features into an embedding.
The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such asdot productorcosine similarity, is used to measure relevance between a user and an item.
This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
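A schematic sketch of this retrieval pattern, in which random linear layers with a tanh nonlinearity stand in for the trained towers and all features are made up, might look as follows; the point is that item embeddings are computed offline while the user embedding and dot products are computed at request time.

import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_user = rng.normal(size=(5, dim))     # hypothetical "learned" weights of the user tower
W_item = rng.normal(size=(6, dim))     # hypothetical "learned" weights of the item tower

def user_tower(user_features):
    # Stand-in for a trained network mapping user features to a fixed-length embedding.
    return np.tanh(user_features @ W_user)

def item_tower(item_features):
    # Stand-in for a trained network mapping item features to a fixed-length embedding.
    return np.tanh(item_features @ W_item)

item_embeddings = item_tower(rng.normal(size=(1000, 6)))   # pre-computed offline for the catalogue
user_embedding = user_tower(rng.normal(size=(5,)))         # computed at request time

scores = item_embeddings @ user_embedding                  # dot-product relevance scores
print(np.argsort(-scores)[:10])                            # top-10 candidates passed to ranking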
Natural language processing is a series of AI algorithms to make natural human language accessible and analyzable to a machine.[128] It is a fairly modern technique inspired by the growing amount of textual information. For application in recommendation systems, a common case is the Amazon customer review. Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. The recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
An emerging market for content discovery platforms is academic content.[129][130] Approximately 6000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[12] Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.
Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authorized papers and citations as input.[12] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[12]
In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead ofpolarizing.[131][132]Examples includePolisand Remesh which have been used around the world to help find more consensus around specific political issues.[132]Twitterhas also used this approach for managing itscommunity notes,[133]whichYouTubeplanned to pilot in 2024.[134][135]Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empoweringdeliberative groupsthat are representative of the platform's users to control the design and implementation of the algorithm.[136]
As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[137]Withbroadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well asinternet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
|
https://en.wikipedia.org/wiki/Content-based_filtering
|
Inmathematics, adiscrete series representationis an irreducibleunitary representationof alocally compact topological groupGthat is a subrepresentation of the leftregular representationofGon L²(G). In thePlancherel measure, such representations have positive measure. The name comes from the fact that they are exactly the representations that occur discretely in the decomposition of the regular representation.
If G is unimodular, an irreducible unitary representation ρ of G is in the discrete series if and only if one (and hence all) of its matrix coefficients g↦⟨ρ(g)v,w⟩{\displaystyle g\mapsto \langle \rho (g)v,w\rangle } with v, w non-zero vectors is square-integrable on G, with respect to Haar measure.
When G is unimodular, the discrete series representation has a formal dimension d, with the property that d∫G⟨ρ(g)v,w⟩⟨ρ(g)x,y⟩¯dg=⟨v,x⟩⟨w,y⟩¯{\displaystyle d\int _{G}\langle \rho (g)v,w\rangle {\overline {\langle \rho (g)x,y\rangle }}\,dg=\langle v,x\rangle {\overline {\langle w,y\rangle }}}
forv,w,x,yin the representation. WhenGis compact this coincides with the dimension when the Haar measure onGis normalized so thatGhas measure 1.
Harish-Chandra(1965,1966) classified the discrete series representations of connectedsemisimple groupsG. In particular, such a group has discrete series representations if and only if it has the same rank as amaximal compact subgroupK. In other words, amaximal torusTinKmust be aCartan subgroupinG. (This result required that thecenterofGbe finite, ruling out groups such as the simply connected cover of SL(2,R).) It applies in particular tospecial linear groups; of these onlySL(2,R)has a discrete series (for this, see therepresentation theory of SL(2,R)).
Harish-Chandra's classification of the discrete series representations of a semisimple connected Lie group is given as follows. If L is the weight lattice of the maximal torus T, a sublattice of it*{\displaystyle i{\mathfrak {t}}^{*}} where t{\displaystyle {\mathfrak {t}}} is the Lie algebra of T, then there is a discrete series representation for every vector v of L+ρ{\displaystyle L+\rho }, where ρ is the Weyl vector of G, that is not orthogonal to any root of G. Every discrete series representation occurs in this way. Two such vectors v correspond to the same discrete series representation if and only if they are conjugate under the Weyl group WK{\displaystyle W_{K}} of the maximal compact subgroup K. If we fix a fundamental chamber for the Weyl group of K, then the discrete series representations are in 1:1 correspondence with the vectors of L + ρ in this Weyl chamber that are not orthogonal to any root of G. The infinitesimal character of the highest weight representation is given by v (mod the Weyl group WG{\displaystyle W_{G}} of G) under the Harish-Chandra correspondence identifying infinitesimal characters of G with points of t∗⊗C/WG{\displaystyle {\mathfrak {t}}^{*}\otimes \mathbf {C} /W_{G}}.
So for each discrete series representation, there are exactly |WG|/|WK|{\displaystyle |W_{G}|/|W_{K}|} discrete series representations with the same infinitesimal character.
Harish-Chandra went on to prove an analogue for these representations of theWeyl character formula. In the case whereGis not compact, the representations have infinite dimension, and the notion ofcharacteris therefore more subtle to define since it is aSchwartz distribution(represented by a locally integrable function), with singularities.
The character is given on the maximal torus T by (−1)dim⁡(G/K)/2∑w∈WKdet(w)ew(v)∏(α,v)>0(eα/2−e−α/2){\displaystyle (-1)^{\dim(G/K)/2}{\frac {\sum _{w\in W_{K}}\det(w)e^{w(v)}}{\prod _{(\alpha ,v)>0}(e^{\alpha /2}-e^{-\alpha /2})}}}
WhenGis compact this reduces to the Weyl character formula, withv=λ+ρforλthe highest weight of the irreducible representation (where the product is over roots α having positive inner product with the vectorv).
Harish-Chandra's regularity theoremimplies that the character of a discrete series representation is a locally integrable function on the group.
Pointsvin the cosetL+ ρ orthogonal to roots ofGdo not correspond to discrete series representations, but those not orthogonal to roots ofKare related to certain irreducible representations calledlimit of discrete series representations. There is such a representation for every pair (v,C) wherevis a vector ofL+ ρ orthogonal to some root ofGbut not orthogonal to any root ofKcorresponding to a wall ofC, andCis a Weyl chamber ofGcontainingv. (In the case of discrete series representations there is only one Weyl chamber containingvso it is not necessary to include it explicitly.) Two pairs (v,C) give the same limit of discrete series representation if and only if they are conjugate under the Weyl group ofK. Just as for discrete series representationsvgives the infinitesimal character. There are at most |WG|/|WK| limit of discrete series representations with any given infinitesimal character.
Limit of discrete series representations aretempered representations, which means roughly that they only just fail to be discrete series representations.
Harish-Chandra's original construction of the discrete series was not very explicit. Several authors later found more explicit realizations of the discrete series.
|
https://en.wikipedia.org/wiki/Discrete_series_representation
|
Inlinguistic morphology, anuninflected wordis awordthat has no morphologicalmarkers(inflection) such asaffixes,ablaut,consonant gradation, etc., indicatingdeclensionorconjugation. If a word has an uninflected form, this is usually the form used as thelemmafor the word.[1]
InEnglishand many otherlanguages, uninflected words includeprepositions,interjections, andconjunctions, often calledinvariable words. These cannot be inflected under any circumstances (unless they are used as different parts of speech, as in "ifs and buts").
Only words that cannot be inflected at all are called "invariable". In the strict sense of the term "uninflected", only invariable words are uninflected, but in broader linguistic usage the terms are extended to cover inflectable words that appear in their basic form. For example, English nouns are said to be uninflected in the singular, while they show inflection in the plural (represented by the affix -s/-es). The term "uninflected" can also refer to uninflectability with respect to one or more, but not all, morphological features; for example, one can say that Japanese verbs are uninflected for person and number, but they do inflect for tense, politeness, and several moods and aspects.
In the strict sense, among English nouns onlymass nouns(such assand,information, orequipment) are truly uninflected, since they have only one form that does not change;count nounsare always inflected for number, even if the singular inflection is shown by an "invisible" affix (thenull morpheme). In the same way, English verbs are inflected for person and tense even if the morphology showing those categories is realized as null morphemes. In contrast, otheranalytic languageslikeMandarin Chinesehave true uninflected nouns and verbs, where the notions of number and tense are completely absent.
In manyinflected languages, such asGreekandRussian, some nouns and adjectives of foreign origin are left uninflected in contexts where native words would be inflected; for instance, the nameAbraamin Greek (fromHebrew), the Modern Greek word μπλεble(fromFrenchbleu), theItalianwordcomputer, and theRussianwordsкенгуру,kenguru(kangaroo) andпальто,pal'to(coat, from Frenchpaletot).
InGerman, allmodal particlesare uninflected.[2]
|
https://en.wikipedia.org/wiki/Uninflected_word
|
In theNeo-Griceanapproach tosemanticsandpragmaticschampioned byYalelinguistLaurence Horn, theQ-principle("Q" for "Quantity") is a reformulation ofPaul Grice's maxim of quantity (seeGricean maxims) combined with the first two sub-maxims of manner.[1]The Q-principle states: "Say as much as you can (given R)." As such it interacts with theR-principle, which states: "Say no more than you must (given Q)."[2][3]
The Q-principle leads to theimplicature(or narrowing) that if the speaker did not make a stronger statement (or say more), then its denial is (implied to be) true. For instance, the inference from "He entered a house" to "He did not enter his own house" is Q-based inference, i.e. deriving from the Q-principle.[2]
|
https://en.wikipedia.org/wiki/Q-based_narrowing
|
TheEspionage Act of 1917is aUnited States federal lawenacted on June 15, 1917, shortly after the United Statesentered World War I. It has been amended numerous times over the years. It was originally found inTitle 50of the U.S. Code (War & National Defense) but is now found under Title 18 (Crime & Criminal Procedure):18 U.S.C.ch. 37(18 U.S.C.§ 792et seq.).
It was intended to prohibit interference withmilitary operationsorrecruitment, to prevent insubordination in the military, and to prevent the support of enemies of the United States during wartime. In 1919, theSupreme Court of the United Statesunanimously ruled throughSchenck v. United Statesthat the act did not violate thefreedom of speechof those convicted under its provisions. Theconstitutionalityof the law, its relationship to free speech, and the meaning of its language have been contested in court ever since.
Among those charged with offenses under the Act were: Austrian-American socialist congressman and newspaper editorVictor L. Berger; labor leader and five-timeSocialist Party of AmericacandidateEugene V. Debs,anarchistsEmma GoldmanandAlexander Berkman, formerWatch Tower Bible & Tract SocietypresidentJoseph Franklin Rutherford(whose conviction was overturned on appeal),[1]communistsJulius and Ethel Rosenberg,Pentagon PaperswhistleblowerDaniel Ellsberg,CablegatewhistleblowerChelsea Manning,WikiLeaksfounderJulian Assange,Defense Intelligence AgencyemployeeHenry Kyle Frese, andNational Security Agency(NSA) contractor whistleblowerEdward Snowden. Although the most controversial amendments, called theSedition Act of 1918, were repealed on December 13, 1920, the original Espionage Act was left intact.[2]Between 1921 and 1923, PresidentsWarren G. HardingandCalvin Coolidgereleased all those convicted under the Sedition and Espionage Acts.[3]
The Espionage Act of 1917 was passed, along with theTrading with the Enemy Act, just after the United States entered World War I in April 1917. It was based on theDefense Secrets Act of 1911, especially the notions of obtaining or delivering information relating to "national defense" to a person who was not "entitled to have it". The Espionage Act law imposed much stiffer penalties than the 1911 law, including the death penalty.[4]
PresidentWoodrow Wilson, in his December 7, 1915,State of the Unionaddress, asked Congress for the legislation.[5]Congress moved slowly. Even after the U.S. broke diplomatic relations with Germany, when the Senate passed a version on February 20, 1917, the House did not vote before the then-current session of Congress ended. After the declaration of war in April 1917, both houses debated versions of the Wilson administration's drafts that included press censorship.[6]That provision aroused opposition, with critics charging it established a system of "prior restraint" and delegated unlimited power to the president.[7]After weeks of intermittent debate, the Senate removed the censorship provision by a one-vote margin, voting 39 to 38.[8]Wilson still insisted it was needed: "Authority to exercise censorship over the press....is absolutely necessary to the public safety", but signed the Act without the censorship provisions on June 15, 1917,[9]after Congress passed the act on the same day.[10]
Attorney GeneralThomas Watt Gregorysupported passage of the act but viewed it as a compromise. The President's Congressional rivals were proposing to remove responsibility for monitoring pro-German activity, whether espionage or some form of disloyalty, from theDepartment of Justiceto theWar Departmentand creating a form of courts-martial of doubtful constitutionality. The resulting Act was far more aggressive and restrictive than they wanted, but it silenced citizens opposed to the war.[11]Officials in the Justice Department who had little enthusiasm for the law nevertheless hoped that even without generating many prosecutions it would help quiet public calls for more government action against those thought to be insufficiently patriotic.[12]Wilson was denied language in the Act authorizing power to the executive branch for press censorship, but Congress did include a provision to block distribution of print materials through the Post Office.[4]
It made it a crime to interfere with military operations or recruitment, to cause or attempt to cause insubordination in the military, or to support the enemies of the United States during wartime.
The Act also gave thePostmaster Generalauthority to impound or refuse to mail publications the postmaster determined to violate its prohibitions.[13]
The Act also forbids the transfer of any naval vessel equipped for combat to any nation engaged in a conflict in which the United States is neutral. Seemingly uncontroversial when the Act was passed, this later became a legal stumbling block for the administration ofFranklin D. Roosevelt, when he sought to provide military aid to Great Britain before the United States enteredWorld War II.[14]
The law was extended on May 16, 1918, by the Sedition Act of 1918, actually a set of amendments to the Espionage Act, which prohibited many forms of speech, including "any disloyal, profane, scurrilous, or abusive language about the form of government of the United States ... or the flag of the United States, or the uniform of the Army or Navy".[11]
Because the Sedition Act was an informal name, court cases were brought under the name of the Espionage Act, whether the charges were based on the provisions of the Espionage Act or the provisions of the amendments known informally as the Sedition Act.
On March 3, 1921, the Sedition Act amendments were repealed, but many provisions of the Espionage Act remain, codified under U.S.C. Title 18, Part 1, Chapter 37.[15]
In 1933, after signals intelligence expert Herbert Yardley published a popular book about breaking Japanese codes, the Act was amended to prohibit the disclosure of foreign code or anything sent in code.[16] The Act was amended in 1940 to increase the penalties it imposed, and again in 1970.[17]
In the late 1940s, the U.S. Code was reorganized and much of Title 50 (War) was moved to Title 18 (Crime). The McCarran Internal Security Act added 18 U.S.C. § 793(e) in 1950; 18 U.S.C. § 798 was added the same year.[18]
In 1961, Congressman Richard Poff succeeded, after several attempts, in removing language that restricted the Act's application to territory "within the jurisdiction of the United States, on the high seas, and within the United States" (18 U.S.C. § 791). He said the need for the Act to apply everywhere was prompted by the case of Irvin C. Scarbeck, a State Department official who was charged with yielding to blackmail threats in Poland.[19]
In 1989, Congressman James Traficant tried to amend 18 U.S.C. § 794 to broaden the application of the death penalty.[20] Senator Arlen Specter proposed a comparable expansion of the use of the death penalty the same year.[21] In 1994, Robert K. Dornan proposed the death penalty for the disclosure of a U.S. agent's identity.[22]
Much of the Act's enforcement was left to the discretion of local United States Attorneys, so enforcement varied widely. For example, Socialist Kate Richards O'Hare gave the same speech in several states but was convicted and sentenced to five years in prison for delivering her speech in North Dakota. Most enforcement activity occurred in the Western states, where the Industrial Workers of the World was active.[23] Finally, a few weeks before the end of the war, the U.S. Attorney General instructed U.S. Attorneys not to act without his approval.
A year after the Act's passage, Eugene V. Debs, the Socialist Party's presidential candidate in 1904, 1908, and 1912, was arrested and sentenced to 10 years in prison for making a speech that "obstructed recruiting". He ran for president again in 1920 from prison. President Warren G. Harding commuted his sentence in December 1921, when Debs had served nearly five years.[24]
In United States v. Motion Picture Film (1917), a federal court upheld the government's seizure of a film called The Spirit of '76 on the grounds that its depiction of cruelty by British soldiers during the American Revolution would undermine support for America's wartime ally. The producer, Robert Goldstein, a Jew of German origins, was prosecuted under Title XI of the Act and received a ten-year sentence plus a $5,000 fine. The sentence was commuted on appeal to three years.[25]
Postmaster General Albert S. Burleson and those in his department played critical roles in the enforcement of the Act. He held his position because he was a Democratic Party loyalist and close to the President and the Attorney General. When the Department of Justice numbered its investigators in the dozens, the Post Office had a nationwide network in place. The day after the Act became law, Burleson sent a secret memo to all postmasters ordering them to keep "close watch on ... matter which is calculated to interfere with the success of ... the government in conducting the war".[26] Postmasters in Savannah, Georgia, and Tampa, Florida, refused to mail the Jeffersonian, the mouthpiece of Tom Watson, a southern populist and an opponent of the draft, the war, and minority groups. When Watson sought an injunction against the postmaster, the federal judge who heard the case called his publication "poison" and denied his request. Government censors objected to the headline "Civil Liberty Dead".[27] In New York City, the postmaster refused to mail The Masses, a socialist monthly, citing the publication's "general tenor". The Masses was more successful in the courts, where Judge Learned Hand found the Act was applied so vaguely as to threaten "the tradition of English-speaking freedom". The editors were then prosecuted for obstructing the draft, and the publication folded when denied access to the mails again.[28] Eventually, Burleson's vigorous enforcement overreached when he targeted supporters of the administration. The president warned him to exercise "the utmost caution", and the dispute proved the end of their political friendship.[29]
In May 1918, sedition charges were laid under the Espionage Act against Watch Tower Bible and Tract Society president "Judge" Joseph Rutherford and seven other Watch Tower directors and officers over statements made in the society's book, The Finished Mystery, published a year earlier. According to the book Preachers Present Arms by Ray H. Abrams, the passage (from page 247) found to be particularly objectionable reads: "Nowhere in the New Testament is patriotism (a narrowly minded hatred of other peoples) encouraged. Everywhere and always murder in its every form is forbidden. And yet under the guise of patriotism civil governments of the earth demand of peace-loving men the sacrifice of themselves and their loved ones and the butchery of their fellows, and hail it as a duty demanded by the laws of heaven."[30] The officers of the Watchtower Society were charged with attempting to cause insubordination, disloyalty, and refusal of duty in the armed forces and with obstructing the recruitment and enlistment service of the U.S. while it was at war.[31] The book had been banned in Canada since February 1918 for what a Winnipeg newspaper described as "seditious and antiwar statements"[32] and was described by Attorney General Gregory as dangerous propaganda.[33] On June 21, seven of the directors, including Rutherford, were sentenced to the maximum of 20 years' imprisonment on each of four charges, to be served concurrently. They served nine months in the Atlanta Penitentiary before being released on bail at the order of Supreme Court Justice Louis Brandeis. In April 1919, an appeal court ruled they had not had the "temperate and impartial trial to which they were entitled" and reversed their convictions.[34] In May 1920 the government announced that all charges had been dropped.[35]
During the Red Scare of 1918–19, in response to the 1919 anarchist bombings aimed at prominent government officials and businessmen, U.S. Attorney General A. Mitchell Palmer, supported by J. Edgar Hoover, then head of the Justice Department's Enemy Aliens Registration Section, prosecuted several hundred foreign-born known and suspected activists in the United States under the Sedition Act of 1918, which had extended the Espionage Act to cover a broader range of offenses. After being convicted, persons including Emma Goldman and Alexander Berkman were deported to Soviet Russia on a ship the press called the "Soviet Ark".[4][36][37]
Many of those jailed appealed their convictions on the basis of the constitutional right to freedom of speech. The Supreme Court disagreed: the Espionage Act's limits on free speech were ruled constitutional in Schenck v. United States (1919).[38] Schenck, an anti-war Socialist, had been convicted of violating the Act by sending anti-draft pamphlets to men eligible for the draft. Although Supreme Court Justice Oliver Wendell Holmes joined the Court majority in upholding Schenck's conviction in 1919, he also introduced the theory that punishment in such cases must be limited to political expression that constitutes a "clear and present danger" to the government action at issue. Holmes' opinion is the origin of the notion that speech equivalent to "falsely shouting fire in a crowded theater" is not protected by the First Amendment.
Justice Holmes began to doubt his decision because of criticism from free speech advocates. He also met the Harvard Law professor Zechariah Chafee and discussed Chafee's criticism of Schenck.[37][39]
Later in 1919, in Abrams v. United States, the Supreme Court upheld the conviction of a man who distributed circulars in opposition to American intervention in Russia following the Russian Revolution. The concept of bad tendency was used to justify restricting the speech, and the defendant was deported. Justices Holmes and Brandeis dissented, Holmes arguing that "nobody can suppose that the surreptitious publishing of a silly leaflet by an unknown man, without more, would present any immediate danger that its opinions would hinder the success of the government arms or have any appreciable tendency to do so".[37][40]
In March 1919, President Wilson, at the suggestion of Attorney General Thomas Watt Gregory, pardoned or commuted the sentences of some 200 prisoners convicted under the Espionage Act or the Sedition Act.[41] By early 1921, the Red Scare had faded, Palmer had left government, and the Espionage Act fell into relative disuse.
Prosecutions under the Act were much less numerous during World War II than during World War I. The likely reason was not that Roosevelt was more tolerant of dissent than Wilson but rather that the lack of continuing opposition after the Pearl Harbor attack presented far fewer potential targets for prosecution under the law. Associate Justice Frank Murphy noted in 1944 in Hartzel v. United States: "For the first time during the course of the present war, we are confronted with a prosecution under the Espionage Act of 1917." Hartzel, a World War I veteran, had distributed anti-war pamphlets to associations and business groups. The court's majority found that his materials, though comprising "vicious and unreasoning attacks on one of our military allies, flagrant appeals to false and sinister racial theories, and gross libels of the President", did not urge mutiny or any of the other specific actions detailed in the Act, and that he had targeted molders of public opinion, not members of the armed forces or potential military recruits. The court overturned his conviction in a 5–4 decision. The four dissenting justices declined to "intrude on the historic function of the jury" and would have upheld the conviction.[42] Earlier, in Gorin v. United States (decided in early 1941), the Supreme Court had ruled on many constitutional questions surrounding the Act.[43]
The Act was used in 1942 to deny a mailing permit to Father Charles Coughlin's weekly Social Justice, effectively ending its distribution to subscribers. This was part of Attorney General Francis Biddle's attempt to close down what he called "vermin publications". Coughlin had been criticized for virulently anti-Semitic writings.[44][45][46] Later, Biddle supported use of the Act to deny mailing permits to both The Militant, published by the Socialist Workers Party, and the Boise Valley Herald of Middleton, Idaho, an anti-New Deal and anti-war weekly. The latter paper had also criticized wartime racism against African Americans and Japanese internment.[47]
The same year, a June front-page story by Stanley Johnston in the Chicago Tribune, headlined "Navy Had Word of Jap Plan to Strike at Sea", implied that the Americans had broken the Japanese codes before the Battle of Midway. Before submitting the story, Johnston asked the managing editor, Loy "Pat" Maloney, and Washington bureau chief Arthur Sears Henning whether the content violated the Code of Wartime Practices. They concluded that it was in compliance because the code said nothing about reporting the movement of enemy ships in enemy waters.[48]
The story resulted in the Japanese changing their codebooks and call-sign systems. The newspaper's publishers were brought before a grand jury for possible indictment, but proceedings were halted because of government reluctance to present a jury with the highly secret information necessary to prosecute them.[49][50] In addition, the Navy had failed to provide promised evidence that the story had revealed "confidential information concerning the Battle of Midway". Attorney General Biddle confessed years later that the final result of the case made him feel "like a fool".[48]
In 1945, six associates of Amerasia magazine, a journal of Far Eastern affairs, came under suspicion after publishing articles that bore similarities to Office of Strategic Services reports. The government proposed using the Espionage Act against them but later softened its approach, changing the charge to embezzlement of government property (now 18 U.S.C. § 641). A grand jury cleared three of the associates, two paid small fines, and charges against the sixth were dropped. Senator Joseph McCarthy claimed the failure to aggressively prosecute the defendants was a communist conspiracy; according to Klehr and Radosh, the case helped build his later notoriety.[51]
Navy employee Hafis Salich sold Soviet agent Mihail Gorin information regarding Japanese activities in the late 1930s. Gorin v. United States (1941) was cited in many later espionage cases for its discussion of the charge of "vagueness", an argument made against the terminology used in certain portions of the law, such as what constitutes "national defense" information.
Later in the 1940s, several incidents prompted the government to increase its investigations into Soviet espionage. These included the Venona project decryptions, the Elizabeth Bentley case, the atomic spies cases, the First Lightning Soviet nuclear test, and others. Many suspects were surveilled but never prosecuted, and some investigations were dropped, as seen in the FBI Silvermaster files. There were also many successful prosecutions and convictions under the Act.
In August 1950, Julius and Ethel Rosenberg were indicted under Title 50, sections 32a and 34, in connection with giving nuclear secrets to the Soviet Union. Anatoli Yakovlev was indicted as well. In 1951, Morton Sobell and David Greenglass were indicted. After a controversial trial in 1951, the Rosenbergs were sentenced to death. They were executed in 1953.[52][53][54] In the late 1950s, several members of the Soble spy ring, including Robert Soblen and Jack and Myra Soble, were prosecuted for espionage. In the mid-1960s, the Act was used against James Mintkenbaugh and Robert Lee Johnson, who sold information to the Soviets while working for the U.S. Army in Berlin.[55][56]
In 1948, some portions of the United States Code were reorganized. Much of Title 50 (War and National Defense) was moved to Title 18 (Crimes and Criminal Procedure). Thus Title 50, Chapter 4 (Espionage, Sections 31–39) became Title 18, § 794 and following. As a result, certain older cases, such as the Rosenberg case, are now listed under Title 50, while newer cases are often listed under Title 18.[52][57]
In 1950, during the McCarthy period, Congress passed the McCarran Internal Security Act over President Harry S. Truman's veto. It modified a large body of law, including espionage law. One addition was 793(e), which had almost exactly the same language as 793(d). According to Edgar and Schmidt, the added section potentially removes the requirement of "intent" to harm or aid, and may make "mere retention" of information a crime no matter the intent, covering even former government officials writing their memoirs. They also describe McCarran as saying that this portion was intended as a direct response to the case of Alger Hiss and the "Pumpkin Papers".[18][58][59]
Court decisions of this era changed the standard for enforcing some provisions of the Espionage Act. Though not a case involving charges under the Act, Brandenburg v. Ohio (1969) replaced the "clear and present danger" test derived from Schenck with the "imminent lawless action" test, a considerably stricter standard for the inflammatory nature of speech.[60]
In June 1971, Daniel Ellsberg and Anthony Russo were charged with a felony under the Espionage Act of 1917 because they lacked legal authority to publish the classified documents that came to be known as the Pentagon Papers.[61] The Supreme Court in New York Times Co. v. United States found that the government had not made a successful case for prior restraint of the press, but a majority of the justices ruled that the government could still prosecute the Times and the Post for violating the Espionage Act by publishing the documents. Ellsberg and Russo were not acquitted of violating the Espionage Act; they were freed because of a mistrial arising from irregularities in the government's case.[62]
The divided Supreme Court had denied the government's request to restrain the press. In their opinions, the justices expressed varying degrees of support for the First Amendment claims of the press against the government's "heavy burden of proof" in establishing that the publisher "has reason to believe" the material published "could be used to the injury of the United States or to the advantage of any foreign nation".[63]
The case prompted Harold Edgar and Benno C. Schmidt Jr. to write an article on espionage law in the 1973 Columbia Law Review. Their article, entitled "The Espionage Statutes and Publication of Defense Information", found the law poorly written and vague, with parts of it probably unconstitutional. It became widely cited in books and in later court arguments in espionage cases.[63]
United States v. Dedeyan in 1978 was the first prosecution under 793(f)(2) (Dedeyan had "failed to report" that information had been disclosed). The courts relied on Gorin v. United States (1941) for precedent. The ruling touched on several constitutional questions, including the vagueness of the law and whether the information was "related to national defense". The defendant received a three-year sentence.[64][65]
In 1979–80, Truong Dinh Hung (aka David Truong) and Ronald Louis Humphrey were convicted under 793(a), (c), and (e), as well as several other laws. The ruling discussed several constitutional questions regarding espionage law, including "vagueness", the difference between classified information and "national defense information", wiretapping, and the Fourth Amendment. It also commented on the notion of bad faith (scienter) being a requirement for conviction even under 793(e); an "honest mistake" was said not to be a violation.[65][66]
Alfred Zehe, a scientist from East Germany, was arrested in Boston in 1983 after being caught in a government-run sting operation in which he had reviewed classified U.S. government documents in Mexico and East Germany. His attorneys contended without success that the indictment was invalid, arguing that the Espionage Act does not cover the activities of a foreign citizen outside the United States.[67][68] Zehe then pleaded guilty and was sentenced to eight years in prison. He was released in June 1985 as part of an exchange of four East Europeans held by the U.S. for 25 people held in Poland and East Germany, none of them American.[69]
One of Zehe's defense attorneys claimed his client was prosecuted as part of "the perpetuation of the 'national-security state' by over-classifying documents that there is no reason to keep secret, other than devotion to the cult of secrecy for its own sake".[70]
The media dubbed 1985 the "Year of the Spy". U.S. Navy civilian Jonathan Pollard was charged with violating 18 U.S.C. § 794(c) for selling classified information to Israel. His 1986 plea bargain did not spare him a life sentence, imposed after a "victim impact statement" that included a statement by Caspar Weinberger.[71] Larry Wu-Tai Chin of the CIA was also charged with violating 18 U.S.C. § 794(c) for selling information to China.[72] Ronald Pelton was prosecuted for violating 18 U.S.C. § 794(a), 794(c), and 798(a) for selling information to the Soviets and interfering with Operation Ivy Bells.[73] Edward Lee Howard was an ex-Peace Corps and ex-CIA agent charged under 18 U.S.C. § 794(c) for allegedly dealing with the Soviets. The FBI's website says the 1980s was the "decade of the spy", with dozens of arrests.[74]
Seymour Hersh wrote an article entitled "The Traitor" arguing against Pollard's release.[75]
Samuel Loring Morison was a government security analyst who worked on the side for Jane's, a British military and defense publisher. He was arrested on October 1, 1984,[76] though investigators never demonstrated any intent to provide information to a hostile intelligence service. Morison told investigators that he sent classified satellite photographs to Jane's because the "public should be aware of what was going on on the other side", meaning that the Soviets' new nuclear-powered aircraft carrier would transform the USSR's military capabilities. He said that "if the American people knew what the Soviets were doing, they would increase the defense budget". British intelligence sources thought his motives were patriotic; American prosecutors emphasized his economic gain and complaints about his government job.[77]
Morison's prosecution was used in a broader campaign against leaks of information as a "test case" for applying the Act to the disclosure of information to the press. A March 1984 government report had noted that "the unauthorized publication of classified information is a routine daily occurrence in the U.S." but that the applicability of the Espionage Act to such disclosures "is not entirely clear".[78] Time said that the administration, if it failed to convict Morison, would seek additional legislation, and described the ongoing conflict: "The Government does need to protect military secrets, the public does need information to judge defense policies, and the line between the two is surpassingly difficult to draw."[78]
On October 17, 1985, Morison was convicted in federal court on two counts of espionage and two counts of theft of government property.[78] He was sentenced to two years in prison on December 4, 1985.[79] The Supreme Court declined to hear his appeal in 1988.[80] Morison became "the only [American] government official ever convicted for giving classified information to the press" up to that time.[81] Following Senator Daniel Patrick Moynihan's 1998 appeal for a pardon for Morison, President Bill Clinton pardoned him on January 20, 2001, the last day of his presidency,[81] despite the CIA's opposition to the pardon.[80]
The successful prosecution of Morison was used to warn against the publication of leaked information. In May 1986, CIA Director William J. Casey, without citing specific violations of law, threatened to prosecute five news organizations: The Washington Post, The Washington Times, The New York Times, Time, and Newsweek.[82]
Christopher John Boyce of TRW and his accomplice Andrew Daulton Lee sold secrets to the Soviets and went to prison in the 1970s. Their activities were the subject of the film The Falcon and the Snowman.
In the 1980s, several members of the Walker spy ring were prosecuted and convicted of espionage for the Soviets.
In 1980, David Henry Barnett was the first active CIA officer to be convicted under the Act.
In 1994, CIA officer Aldrich Ames was convicted under 18 U.S.C. § 794(c) of spying for the Soviets; Ames had revealed the identities of several U.S. sources in the USSR to the KGB, who were then executed.[83]
FBI agent Earl Edwin Pitts was arrested in 1996 under 18 U.S.C. § 794(a) and 18 U.S.C. § 794(c) for spying for the Soviet Union and later for the Russian Federation.[84][85][86][87]
In 1997, senior CIA officer Harold James Nicholson was convicted of espionage for the Russians.
In 1998, NSA contractor David Sheldon Boone was charged with having handed over a 600-page technical manual to the Soviets c. 1988–1991 (18 U.S.C. § 794(a)).
In 2001, FBI agent Robert Hanssen was convicted under the Act of spying for the Soviets in the 1980s and for Russia in the 1990s.
In the 1990s, Senator Daniel Patrick Moynihan deplored the "culture of secrecy" made possible by the Espionage Act, noting the tendency of bureaucracies to enlarge their powers by increasing the scope of what is held "secret".[88]
In the late 1990s, Wen Ho Lee of Los Alamos National Laboratory (LANL) was indicted under the Act. He and other national security professionals later said he was a "scapegoat" in the government's quest to determine whether information about the W88 nuclear warhead had been transferred to China.[89] Lee had made backup copies at LANL of his nuclear weapons simulation code to protect it in case of a system crash. The code was marked PARD: sensitive but not classified. As part of a plea bargain, he pleaded guilty to one count under the Espionage Act, and the judge apologized to him for having believed the government.[90] Lee later won more than a million dollars in a lawsuit against the government and several newspapers for their mistreatment of him.[89]
In 2001, retired Army Reserve Colonel George Trofimoff, the most senior U.S. military officer to be indicted under the Act, was convicted of conducting espionage for the Soviets from the 1970s to the 1990s.[91]
Kenneth Wayne Ford Jr. was indicted under 18 U.S.C. § 793(e) for allegedly having a box of documents in his house after he left NSA employment around 2004. He was sentenced to six years in prison in 2006.[92]
In 2005, Pentagon Iran expert Lawrence Franklin and AIPAC lobbyists Steve Rosen and Keith Weissman were indicted under the Act. Franklin pleaded guilty to conspiracy to disclose national defense information to the lobbyists and an Israeli government official.[93] Franklin was sentenced to more than 12 years in prison, but the sentence was later reduced to 10 months of home confinement.[94]
Under the Obama and first Trump administrations, at least eight Espionage Act prosecutions were related not to traditional espionage but to either withholding information or communicating with members of the press. Out of a total of eleven prosecutions under the Espionage Act against government officials accused of providing classified information to the press, seven have occurred since Obama took office.[95] "Leaks related to national security can put people at risk", he said at a news conference in 2013. "They can put men and women in uniform that I've sent into the battlefield at risk. I don't think the American people would expect me, as commander in chief, not to be concerned about information that might compromise their missions or might get them killed."[96]
Some have criticized the use of the Espionage Act against national security leakers. A 2015 study by the PEN American Center found that almost all of the non-government representatives it interviewed, including activists, lawyers, journalists, and whistleblowers, "thought the Espionage Act had been used inappropriately in leak cases that have a public interest component". PEN wrote: "Experts described it as 'too blunt an instrument,' 'aggressive, broad and suppressive,' a 'tool of intimidation,' 'chilling of free speech,' and a 'poor vehicle for prosecuting leakers and whistleblowers.'"[150]
Pentagon Papers whistleblower Daniel Ellsberg said that "the current state of whistleblowing prosecutions under the Espionage Act makes a truly fair trial wholly unavailable to an American who has exposed classified wrongdoing" and that "legal scholars have strongly argued that the US Supreme Court – which has never yet addressed the constitutionality of applying the Espionage Act to leaks to the American public – should find the use of it overbroad and unconstitutional in the absence of a public interest defense".[151] Stephen Vladeck, a professor at American University's Washington College of Law and an expert in national security law, has said that the law "lacks the hallmarks of a carefully and precisely defined statutory restriction on speech".[150] Trevor Timm, executive director of the Freedom of the Press Foundation, said: "Basically any information the whistleblower or source would want to bring up at trial to show that they are not guilty of violating the Espionage Act the jury would never hear. It's almost a certainty that because the law is so broadly written that they would be convicted no matter what."[150] Attorney Jesselyn Radack, who has represented four whistleblowers charged under the Espionage Act, notes that the law was enacted "35 years before the word 'classification' entered the government's lexicon" and believes that "under the Espionage Act, no prosecution of a non-spy can be fair or just".[152] She added that mounting a legal defense against an Espionage Act charge is estimated to "cost $1 million to $3 million".[152] In May 2019, the Pittsburgh Post-Gazette editorial board published an opinion piece making the case for an amendment allowing a public-interest defense, arguing that "the act has since become a tool of suppression, used to punish whistleblowers who expose governmental wrongdoing and criminality".[153]
In an interview with Fairness & Accuracy in Reporting, journalist Chip Gibbons said that it was "almost impossible, if not impossible, to mount a defense" against charges under the Espionage Act. Gibbons said defendants are not allowed to use the term "whistleblower", mention the First Amendment, raise the issue of over-classification of documents, or explain the reasons for their actions.[137]
|
https://en.wikipedia.org/wiki/Espionage_Act_of_1917
|
In the philosophy of science and some other branches of philosophy, a "natural kind" is an intellectual grouping, or categorizing of things, that reflects the actual world and not just human interests.[1] Some treat it as a classification identifying some structure of truth and reality that exists whether or not humans recognize it. Others treat it as intrinsically useful to the human mind but not necessarily reflective of something more objective. Candidate examples of natural kinds are found in all the sciences, but the field of chemistry provides the paradigm example of elements. Alexander Bird and Emma Tobin see natural kinds as relevant to metaphysics, epistemology, and the philosophy of language, as well as the philosophy of science.[1]
John Dewey held the view that belief in unconditional natural kinds is a mistake, a relic of obsolete scientific practices.[2]: 419–24 Hilary Putnam rejects descriptivist approaches to natural kinds with semantic reasoning. Hasok Chang and Rasmus Winther hold the emerging view that natural kinds are useful and evolving scientific facts.
In 1938, John Dewey published Logic: The Theory of Inquiry, in which he explained how modern scientists create kinds through induction and deduction, and why they have no use for natural kinds.
Dewey argued that modern scientists do not follow Aristotle in treating inductive and deductive propositions as facts already known about nature's stable structure. Today, scientific propositions are intermediate steps in inquiry, hypotheses about processes displaying stable patterns. Aristotle's generic and universal propositions have become conceptual tools of inquiry warranted by inductive inclusion and exclusion of traits. They are provisional means rather than results of inquiry revealing the structure of reality.
Modern induction starts with a question to be answered or a problem to be solved. It identifies problematic subject-matter and seeks potentially relevant traits and conditions. Generic existential data thus identified are reformulated—stated abstractly as if-then universal relations capable of serving as answers or solutions: if H2O, then water. For Dewey, induction creates warranted kinds by observing the constant conjunction of relevant traits.
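As a loose illustration only (not Dewey's own formalism), the if-then character of such a warranted kind can be sketched as a predicate over observed traits; the trait names and sample values below are hypothetical assumptions chosen for the sketch.

# A minimal sketch of an "if-then" warranted kind: a sample counts as "water"
# when the traits singled out by inquiry are jointly present. The specific
# traits and values are illustrative assumptions, not Dewey's examples.

def is_water(sample: dict) -> bool:
    """Classify a sample as the kind 'water' from its observed traits."""
    return (
        sample.get("composition") == "H2O"
        and sample.get("liquid_at_room_temp", False)
    )

observations = [
    {"composition": "H2O", "liquid_at_room_temp": True},    # classified as water
    {"composition": "NaCl", "liquid_at_room_temp": False},  # not water
]

for obs in observations:
    print(obs, "->", "water" if is_water(obs) else "not water")

The point of the toy rule is only that the kind is stated as a conditional relation among traits singled out by inquiry, rather than as a pre-given object.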
Dewey used the example of "morning dew" to describe these abstract steps in creating scientific kinds. From antiquity, the common-sense belief had been that all dew is a kind of rain, meaning that dew drops fall. By the early 1800s, the curious absence of rain before dew and the growth of understanding led scientists to examine new traits. Functional processes changing bodies [kinds] from solid to liquid to gas at different temperatures, and operational constants of conduction and radiation, led to new inductive hypotheses "directly suggested by this subject-matter, not by any data [kinds] previously observable. ... There were certain [existential] conditions postulated in the content of the new [non-existential] conception about dew, and it had to be determined whether these conditions were satisfied in the observable facts of the case."[2]: 430
Once it was demonstrated that dew could be formed by these generic existential phenomena, and not by others, the universal hypothesis arose that dew forms following established laws of temperature and pressure. "The outstanding conclusion is that inductive procedures are those which prepare existential material so that it has convincing evidential weight with respect to an inferred generalization."[2]: 432 Existential data are not pre-known natural kinds, but become conceptual statements of "natural" processes.
Dewey concluded that nature is not a collection of natural kinds, but rather of reliable processes discoverable by competent induction and deduction. He replaced the ambiguous label "natural kind" with "warranted assertion" to emphasize the conditional nature of all human knowings. Assuming kinds to be given unconditional knowings leads to the error of assuming that conceptual universal propositions can serve as evidence for generic propositions; observed consequences affirm unobservable imagined causes. "For an 'inference' that is not grounded in the evidential nature of the material from which it is drawn is not an inference. It is a more or less wild guess."[2]: 428 Modern induction is not a guess about natural kinds, but a means to create instrumental understanding.
In 1969, Willard Van Orman Quine brought the term "natural kind" into contemporary analytic philosophy with an essay bearing that title.[3]: 1 His opening paragraph laid out his approach in three parts. First, it questioned the logical and scientific legitimacy of reasoning inductively by counting a few examples and imputing their traits to all members of a kind: "What tends to confirm an induction?" For Quine, induction reveals warranted kinds by repeated observation of visible similarities.
Second, it assumed that color can be a characteristic trait of natural kinds, despite some logical puzzles: hypothetical colored kinds such as non-black non-ravens and green-blue emeralds. Finally, it suggested that human psychological structure can explain the illogical success of induction: "an innate flair that we have for natural kinds".[4]: 41
He started with the logical hypothesis that, if all ravens are black—an observable natural kind—then non-black non-ravens are equally a natural kind: "... each [observed] black raven tends to confirm the law [universal proposition] that all ravens are black ..." Observing shared generic traits warrants the inductive universal prediction that future experience will confirm the sharing: "And every reasonable [universal] expectation depends on resemblance of [generic] circumstances, together with our tendency to expect similar causes to have similar effects." "The notion of a kind and the notion of similarity or resemblance seem to be variants or adaptations of a single [universal] notion. Similarity is immediately definable in terms of kind; for things are similar when they are two of a kind."[4]: 42
Quine posited an intuitive human capacity to recognize criteria for judging degrees of similarity among objects, an "innate flair for natural kinds". These criteria work instrumentally when applied inductively: "... why does our innate subjective spacing [classification] of [existential] qualities accord so well with the functionally relevant [universal] groupings in nature as to make our inductions tend to come out right?"
He admitted that generalizing after observing a few similarities is scientifically and logically unjustified. The numbers and degrees of similarities and differences humans experience are infinite. But the method is justified by its instrumental success in revealing natural kinds. The "problem of induction" is how humans "should stand better than random or coin-tossing chances of coming out right when we predict by inductions which are based on our innate, scientifically unjustified similarity standards."[4]: 48–9
Quine credited human ability to recognize colors as natural kinds to the evolutionary function of color in human survival—distinguishing safe from poisonous kinds of food. He recognized that modern science often judges color similarities to be superficial, but denied that equating existential similarities with abstract universal similarities makes natural kinds any less permanent and important. The human brain's capacity to recognize abstract kinds joins the brain's capacity to recognize existential similarities.
Quine argued that the success of innate and learned criteria for classifying kinds on the basis of similarities observed in small samples constitutes evidence of the existence of natural kinds; observed consequences affirm imagined causes. His reasoning continues to provoke philosophical debate.
In 1975, Hilary Putnam rejected descriptivist ideas about natural kinds by elaborating on semantic concepts in language.[5][6] Putnam explains his rejection of descriptivist and traditionalist approaches to natural kinds with semantic reasoning, and insists that natural kinds cannot be thought of via descriptive processes or endless lists of properties.
In Putnam's Twin Earth thought experiment, one is asked to consider the extension of "water" when confronted with an alternate version of "water" on an imagined "Twin Earth". This "water" is composed of the chemical XYZ rather than H2O; in all other describable respects, however, it is the same as Earth's "water". Putnam argues that the mere description of an object such as "water" is insufficient to define a natural kind. There are underlying aspects, such as chemical composition, that may go unaccounted for unless experts are consulted. This information provided by experts is what Putnam argues will ultimately define natural kinds.[6]
Putnam calls the essential information used to define a natural kind "core facts". This discussion arises in part in response to what he refers to as "Quine's pessimism" about the theory of meaning. Putnam claims that a natural kind can be referred to via its associated stereotype. This stereotype must be a normal member of the category and is itself defined by core facts as determined by experts. By conveying these core facts, the essential and appropriate use of natural kind terms can be conveyed.[7]
The process of conveying core facts to communicate the essence and appropriate use of a natural kind term is shown in Putnam's examples of a lemon and a tiger. With a lemon, it is possible to communicate the stimulus-meaning of what a lemon is simply by showing someone a lemon. In the case of a tiger, on the other hand, it is considerably more complicated to show someone a tiger, but a speaker can just as readily explain what a tiger is by communicating its core facts. By conveying the core facts of a tiger (e.g. big cat, four legs, orange with black stripes), the listener can, in theory, go on to use the word "tiger" correctly and refer to its extension accurately.[7]
In 1993, Hilary Kornblith published a review of debates about natural kinds since Quine had launched that epistemological project a quarter-century earlier. He evaluated Quine's "picture of natural knowledge" as natural kinds, along with subsequent refinements.[3]: 1
He found still acceptable Quine's original assumption that discovering knowledge of mind-independent reality depends on inductive generalisations based on limited observations, despite its being illogical. Equally acceptable was Quine's further assumption that instrumental success of inductive reasoning confirms both the existence of natural kinds and the legitimacy of the method.
Quine's assumption of an innate human psychological process—"standard of similarity," "subjective spacing of qualities"—also remained unquestioned. Kornblith strengthened this assumption with new labels for the necessary cognitive qualities: "native processes of belief acquisition," "the structure of human conceptual representation," "native inferential processes," "reasonably accurate detectors of covariation."[4]: 3, 9, 95 "To my mind, the primary case to be made for the view that our [universal] psychological processes dovetail with the [generic] causal structure of the world comes ... from the success of science."[4]: 3
Kornblith denied that this logic makes human classifications the same as mind-independent classifications: "The categories of modern science, of course, are not innate."[4]: 81 But he offered no explanation of how kinds that work conditionally can be distinguished from mind-independent, unchanging kinds.
Kornblith did not explain how tedious modern induction accurately generalizes from a few generic traits to all members of some universal kind. He attributed such success to an individual's sensitivity to whether a single case is representative of a whole kind.
Accepting intuition as a legitimate ground for inductive inferences from small samples, Kornblith criticized popular arguments by Amos Tversky and Daniel Kahneman that intuition is irrational. He continued to argue that traditional induction explains the success of modern science.
Hasok Chang and Rasmus Winther contributed essays to a collection entitled Natural Kinds and Classification in Scientific Practice, published in 2016. The editor of the collection, Catherine Kendig, argued for a modern meaning of natural kinds, rejecting Aristotelian classifications of objects according to their "essences, laws, sameness relations, fundamental properties ... and how these map out the ontological space of the world." She thus dropped the traditional supposition that natural kinds exist permanently and independently of human reasoning. She collected original works examining the results of discipline-specific classifications of kinds: "the empirical use of natural kinds and what I dub 'activities of natural kinding' and 'natural kinding practices'."[8]: 1–3 Her natural kinds include scientific disciplines themselves, each with its own methods of inquiry and classifications or taxonomies.
Chang's contribution displayed Kendig's "natural kinding activities" or "practice turn" by reporting classifications in the mature discipline of chemistry—a field renowned for examples of timeless natural kinds: "All water is H2O;" "All gold has atomic number 79."
He explicitly rejected Quine's basic assumption that natural kinds are real generic objects. "When I speak of a (natural) kind in this chapter, I am referring to a [universal] classificatory concept, rather than a collection of objects." His kinds result from humanity's continuous knowledge-seeking activities called science and philosophy. "Putting these notions more unambiguously in terms of concepts rather than objects, I maintain: if we hit upon some stable and effective classificatory concepts in our inquiry, we should cherish them (calling them 'natural kinds' would be one clear way of doing so), but without presuming that we have thereby found some eternal essences."[8]: 33–4
He also rejected the position taken by Bird and Tobin quoted above: "Alexander Bird and Emma Tobin's succinct characterization of natural kinds is helpful here, as a foil: 'to say that a kind is natural is to say that it corresponds to a grouping or ordering that does not depend on humans'. My view is precisely the opposite, to the extent that scientific inquiry does depend on humans."[8]: 42–3
For Chang, induction creates conditionally warranted kinds by "epistemic iteration"—refining classifications developmentally to reveal how constant conjunctions of relevant traits work: "fundamental classificatory concepts become refined and corrected through our practical scientific engagement with nature. Any considerable and lasting [instrumental] success of such engagement generates confidence in the classificatory concepts used in it, and invites us to consider them as 'natural'."[8]: 34
Among other examples, Chang reported the inductive iterative process by which chemists gradually redefined the kind "element". The original hypothesis was that anything that cannot be decomposed by fire or acids is an element. Learning that some chemical reactions are reversible led to the discovery of weight as a constant through reactions. It was then discovered that some reactions involve definite and invariable weight ratios, refining the understanding of constant traits. "Attempts to establish and explain the combining-weight regularities led to the development of the chemical atomic theory by John Dalton and others. ... Chemical elements were later redefined in terms of atomic number (the number of protons in the nucleus)."[8]: 38–9
Chang claimed that his examples of classification practices in chemistry confirmed the fallacy of the traditional assumption that natural kinds exist as mind-independent reality. He attributed this belief more to imagining supernatural intervention in the world than to illogical induction. He did not consider the popular belief that innate psychological capacities enable traditional induction to work. "Much natural-kind talk has been driven by an intuitive metaphysical essentialism that concerns itself with an objective [generic] order of nature whose [universal] knowledge could, ironically, only be obtained by a supernatural being. Let us renounce such an unnatural notion of natural kinds. Instead, natural kinds should be conceived as something we humans may succeed in inventing and improving through scientific practice."[8]: 44
Rasmus Winther's contribution to Natural Kinds and Classification in Scientific Practice gave new meaning to natural objects and qualities in the nascent discipline of Geographic Information Science (GIS). This "inter-discipline" engages in discovering patterns in—and displaying spatial kinds of—data, using methods that make its results unique natural kinds. But it still creates kinds using induction to identify instrumental traits.
"Collecting and collating geographical data, building geographical data-bases, and engaging in spatial analysis, visualization, and map-making all require organizing, typologizing, and classifying geographic space, objects, relations, and processes. I focus on the use of natural kinds ..., showing how practices of making and using kinds are contextual, fallible, plural, and purposive. The rich family of kinds involved in these activities are here baptized mapping kinds."[8]: 197
He later identified sub-kinds of mapping kinds as "calibrating kinds," "feature kinds," and "object kinds" of "data model types."[8]: 202–3
Winther identified "inferential processes of abstraction and generalization" as methods used by GIS, and explained how they generate digital maps. He illustrated two kinds of inquiry procedures, with sub-procedures to organize data. They are reminiscent of Dewey's multiple steps in modern inductive and deductive inference.[8]: 205Methods for transforming generic phenomena into kinds involve reducing complexity, amplifying, joining, and separating. Methods for selecting among generic kinds involves elimination, classification, and collapse of data. He argued that these methods for mapping kinds can be practiced in other disciplines, and briefly considered how they might harmonize three conflicting philosophical perspectives on natural kinds.
Some philosophers believe there can be a "pluralism" of kinds and classifications. They prefer to speak of "relevant" and "interesting" kinds rather than eternal "natural" kinds. They may be called social constructivists whose kinds are human products. Chang's conclusions that natural kinds are human-created and instrumentally useful would appear to put him in this group.
Other philosophers, including Quine, examine the role of kinds in scientific inference. Winther does not examine Quine's commitment to traditional induction generalizing from small samples of similar objects. But he does accept Quine's willingness to call human-identified kinds that work natural.
"Quine holds that kinds are "functionally relevant groupings in nature" whose recognition permits our inductions to "tend to come out right." That is, kinds ground fallible inductive inferences and predictions, so essential to scientific projects including those of GIS and cartography."[8]: 207
Finally, Winther identified a philosophical perspective seeking to reconstruct rather than reject belief in natural kinds. He placed Dewey in this group, ignoring Dewey's rejection of the traditional label in favor of "warranted assertions".
"Dewey resisted the standard view of natural kinds, inherited from the Greeks ... Instead, Dewey presents an analysis of kinds (and classes and universals) as fallible and context-specific hypotheses permitting us to address problematic situations effectively."[8]: 208Winther concludes that classification practices used in Geographic Information Science are able to harmonize these conflicting philosophical perspectives on natural kinds.
"GIS and cartography suggest that kinds are simultaneously discovered [as pre-existing structures] and constructed [as human classifications]. Geographic features, processes, and objects are of course real. Yet we must structure them in our data models and, subsequently, select and transform them in our maps. Realism and (social) constructivism are hence not exclusive in this field."[8]: 209
|
https://en.wikipedia.org/wiki/Natural_kind
|