In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms for the rooted spanning trees of the Markov chain, with a positive combination for each tree. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived.[1] It was first stated by Hill (1966), for certain Markov chains arising in thermodynamics,[1][2] and proved in full generality by Leighton & Rivest (1986), motivated by an application in limited-memory estimation of the probability of a biased coin.[1][3]

A finite Markov chain consists of a finite set of states and a transition probability p_{i,j} for changing from state i to state j, such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being in a given state after many steps, regardless of the initial choice of state.[1]

The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain.
If a transition from state i to state j has transition probability p_{i,j}, then a tree T with edge set E(T) is defined to have weight equal to the product of its transition probabilities:

    w(T) = ∏_{(i,j) ∈ E(T)} p_{i,j}.

Let 𝒯_i denote the set of all spanning trees having state i at their root. Then, according to the Markov chain tree theorem, the stationary probability π_i for state i is proportional to the sum of the weights of the trees rooted at i. That is,

    π_i = (1/Z) ∑_{T ∈ 𝒯_i} w(T),

where the normalizing constant Z is the sum of w(T) over all spanning trees.[1]
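The theorem can be checked directly on a small chain by brute-force enumeration. The following Python sketch is illustrative only: the 3-state transition matrix is an assumed example, and the tree enumeration simply tries every assignment of a parent to each non-root state, keeping those assignments that form a tree directed toward the root.

```python
import itertools

# An assumed 3-state irreducible, aperiodic chain (rows sum to 1).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4]]
n = len(P)

def tree_weights(root):
    """Sum of w(T) over all spanning trees directed toward `root`.

    Each non-root state picks one outgoing edge (its parent); the
    choice forms a spanning tree iff every state reaches the root."""
    total = 0.0
    others = [s for s in range(n) if s != root]
    for parents in itertools.product(range(n), repeat=len(others)):
        parent = dict(zip(others, parents))
        if any(parent[s] == s for s in others):
            continue  # no self-loops in a tree
        ok = True
        for s in others:
            seen = set()
            while s != root:          # follow parent pointers to the root
                if s in seen:
                    ok = False        # found a cycle: not a tree
                    break
                seen.add(s)
                s = parent[s]
            if not ok:
                break
        if ok:
            w = 1.0
            for child, par in parent.items():
                w *= P[child][par]    # product of edge probabilities
            total += w
    return total

weights = [tree_weights(r) for r in range(n)]
Z = sum(weights)                      # normalizing constant
pi = [w / Z for w in weights]         # stationary distribution per the theorem

# Cross-check against long-run behavior: iterate mu' = mu P.
mu = [1.0 / n] * n
for _ in range(1000):
    mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

assert all(abs(pi[j] - mu[j]) < 1e-6 for j in range(n))
```

The brute force is exponential in the number of states, so it is only a sanity check of the theorem, not a practical algorithm; the Markov chain tree theorem itself is usually applied analytically or via Kirchhoff-style determinant formulas.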
https://en.wikipedia.org/wiki/Markov_chain_tree_theorem
A code cave is a series of unused bytes in a process's memory. The code cave inside a process's memory is often a reference to a section that has capacity for injecting custom instructions. The concept of a code cave is often employed by hackers and reverse engineers to execute arbitrary code in a compiled program. It can be a helpful method for modifying a compiled program, for example to add dialog boxes, modify variables, or even remove software key validation checks. Often using a call instruction, commonly found on many CPU architectures, the code jumps to the new subroutine and pushes the next address onto the stack. After execution of the subroutine, a return instruction can be used to pop the previous location off of the stack into the program counter. This allows the existing program to jump to the newly added code without making significant changes to the program flow itself.
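Finding a cave is the first step of the technique described above. The following Python sketch is a toy scanner, not a real patching tool: it looks for a run of padding bytes in a byte blob, where the blob contents and the minimum cave size are made-up values for demonstration.

```python
def find_code_cave(data, min_size, fill=0x00):
    """Return the offset of the first run of at least `min_size`
    consecutive `fill` bytes (a candidate code cave), or -1."""
    run = 0
    for i, b in enumerate(data):
        run = run + 1 if b == fill else 0
        if run >= min_size:
            return i - min_size + 1   # start of the qualifying run
    return -1

# A made-up binary blob: three instruction bytes, sixteen zero bytes
# of padding (the "cave"), then a return byte.
blob = bytes([0x90, 0x55, 0x8B] + [0x00] * 16 + [0xC3])
print(find_code_cave(blob, 12))  # → 3
```

A real injector would then write its new instructions at that offset and redirect a call or jump in the original code into the cave, ending the injected routine with a return so control flows back as described above.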
https://en.wikipedia.org/wiki/Code_cave
Below are abbreviations used in aviation, avionics, aerospace, and aeronautics.

CAT I Enhanced: Allows for lower minimums than CAT I in some cases, down to CAT II minimums
CAT II: Operational performance Category II
CAT IIIa: Operational performance Category IIIa
CAT IIIb: Operational performance Category IIIb
CAT IIIc: Operational performance Category IIIc
https://en.wikipedia.org/wiki/Acronyms_and_abbreviations_in_avionics
In coding theory, the Lee distance is a distance between two strings x₁x₂…xₙ and y₁y₂…yₙ of equal length n over the q-ary alphabet {0, 1, …, q − 1} of size q ≥ 2. It is a metric[1] defined as

    ∑_{i=1}^{n} min(|x_i − y_i|, q − |x_i − y_i|).

If q = 2 or q = 3 the Lee distance coincides with the Hamming distance, because both distances are 0 for two single equal symbols and 1 for two single non-equal symbols. For q > 3 this is not the case anymore; the Lee distance between single letters can become bigger than 1. However, there exists a Gray isometry (weight-preserving bijection) between Z_4 with the Lee weight and Z_2² with the Hamming weight.[2]

Considering the alphabet as the additive group Z_q, the Lee distance between two single letters x and y is the length of the shortest path in the Cayley graph (which is circular, since the group is cyclic) between them.[3] More generally, the Lee distance between two strings of length n is the length of the shortest path between them in the Cayley graph of Z_q^n. This can also be thought of as the quotient metric resulting from reducing Z^n with the Manhattan distance modulo the lattice qZ^n. The analogous quotient metric on a quotient of Z^n modulo an arbitrary lattice is known as a Mannheim metric or Mannheim distance.[4][5]

The metric space induced by the Lee distance is a discrete analog of the elliptic space.[1]

If q = 6, then the Lee distance between 3140 and 2543 is 1 + 2 + 0 + 3 = 6.

The Lee distance is named after William Chi Yuan Lee (李始元). It is applied for phase modulation, while the Hamming distance is used in case of orthogonal modulation.
The Berlekamp code is an example of a code in the Lee metric.[6] Other significant examples are the Preparata code and Kerdock code; these codes are non-linear when considered over a field, but are linear over a ring.[2]
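The definition above translates directly into code. This Python sketch computes the Lee distance for strings of digit characters (which limits it to q ≤ 10, enough for the article's example):

```python
def lee_distance(x, y, q):
    """Lee distance between equal-length strings over {0, ..., q-1},
    with symbols written as digit characters for simplicity."""
    assert len(x) == len(y) and q >= 2
    total = 0
    for a, b in zip(x, y):
        d = abs(int(a) - int(b))
        total += min(d, q - d)   # shortest way around the cycle Z_q
    return total

# The worked example from the article: q = 6, strings 3140 and 2543.
print(lee_distance("3140", "2543", 6))  # → 6
```

For q = 2 the `min` term always equals the plain difference, which is why the Lee distance reduces to the Hamming distance there.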
https://en.wikipedia.org/wiki/Lee_distance
This article details the various tables referenced in the Data Encryption Standard (DES) block cipher. All bits and bytes are arranged in big-endian order in this document. That is, bit number 1 is always the most significant bit.

This table specifies the input permutation on a 64-bit block. The meaning is as follows: the first bit of the output is taken from the 58th bit of the input; the second bit from the 50th bit, and so on, with the last bit of the output taken from the 7th bit of the input. This information is presented as a table for ease of presentation; it is a vector, not a matrix.

The final permutation is the inverse of the initial permutation; the table is interpreted similarly.

The expansion function is interpreted as for the initial and final permutations. Note that some bits from the input are duplicated at the output; e.g. the fifth bit of the input is duplicated in both the sixth and eighth bits of the output. Thus, the 32-bit half-block is expanded to 48 bits.

The P permutation shuffles the bits of a 32-bit half-block.

The "Left" and "Right" halves of the table show which bits from the input key form the left and right sections of the key-schedule state. Note that only 56 of the 64 input bits are selected; the remaining eight (8, 16, 24, 32, 40, 48, 56, 64) were specified for use as parity bits.

This permutation selects the 48-bit subkey for each round from the 56-bit key-schedule state. Permuted Choice 2 ("PC-2") ignores 8 bits: 9, 18, 22, 25, 35, 38, 43, and 54.

This table lists the eight S-boxes used in DES. Each S-box replaces a 6-bit input with a 4-bit output. Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits, and the column using the inner four bits.
For example, an input "011011" has outer bits "01" and inner bits "1101"; noting that the first row is "00" and the first column is "0000", the corresponding output for S-box S5 would be "1001" (= 9), the value in the second row, 14th column. (See S-box.)

The main key supplied by the user is 64 bits long. The following operations are performed on it. First, the bits in the grey positions (8 of them) are dropped, leaving the 56 bits used in the key schedule. The remaining bits are then permuted according to the following table. The table is read in row-major order: the output bit at position row × 8 + column is taken from the input bit given by the table entry.

Before the round subkey is selected, each half of the key-schedule state is rotated left by a number of places. This table specifies the number of places rotated. In summary:

• The key is divided into two 28-bit parts.
• Each part is shifted left (circularly) by one or two bits.
• After shifting, the two parts are combined to form a 56-bit temp-key again.
• The compression P-box reduces the 56-bit temp-key to a 48-bit key, which is used as the key for the corresponding round.

The compression table is again read in row-major order. After this, the 48-bit round key is returned to the calling function, i.e. the round.
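The row/column selection above can be sketched in Python. The S5 table below is transcribed from the published DES standard (FIPS 46), and the helper reproduces the article's worked example:

```python
# S-box S5 from the DES standard: four rows of sixteen 4-bit values.
S5 = [
    [ 2, 12,  4,  1,  7, 10, 11,  6,  8,  5,  3, 15, 13,  0, 14,  9],
    [14, 11,  2, 12,  4,  7, 13,  1,  5,  0, 15, 10,  3,  9,  8,  6],
    [ 4,  2,  1, 11, 10, 13,  7,  8, 15,  9, 12,  5,  6,  3,  0, 14],
    [11,  8, 12,  7,  1, 14,  2, 13,  6, 15,  0,  9, 10,  4,  5,  3],
]

def sbox_lookup(sbox, bits):
    """Apply a DES S-box to a 6-bit input: the outer two bits
    (first and last) select the row, the inner four the column."""
    row = (((bits >> 5) & 1) << 1) | (bits & 1)
    col = (bits >> 1) & 0b1111
    return sbox[row][col]

# The article's example: input 011011 -> row 01, column 1101 -> 9 ("1001").
print(sbox_lookup(S5, 0b011011))  # → 9
```

Row "01" is the second row and column "1101" (13) is the 14th column, matching the prose above.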
https://en.wikipedia.org/wiki/DES_supplementary_material
William H. Inmon (born 1945) is an American computer scientist, recognized by many as the father of the data warehouse.[1][2] Inmon wrote the first book, held the first conference (with Arnie Barnett), wrote the first column in a magazine, and was the first to offer classes in data warehousing. Inmon created the accepted definition of what a data warehouse is: a subject-oriented, non-volatile, integrated, time-variant collection of data in support of management's decisions. Compared with the approach of the other pioneering architect of data warehousing, Ralph Kimball, Inmon's approach is often characterized as a top-down approach.

William H. Inmon was born July 20, 1945, in San Diego, California. He received his Bachelor of Science degree in mathematics from Yale University in 1967, and his Master of Science degree in computer science from New Mexico State University. He worked for American Management Systems and Coopers & Lybrand before 1991, when he founded the company Prism Solutions, which he took public. In 1995 he founded Pine Cone Systems, which was later renamed Ambeo. In 1999, he created a corporate information factory web site for his consulting business.[3]

Inmon coined terms such as the government information factory, as well as data warehousing 2.0. Inmon promotes the building, usage, and maintenance of data warehouses and related topics. His books include "Building the Data Warehouse" (1992, with later editions) and "DW 2.0: The Architecture for the Next Generation of Data Warehousing" (2008).
In July 2007, Inmon was named by Computerworld as one of the ten people who most influenced the first 40 years of the computer industry.[4] Inmon's association with data warehousing stems from the fact that he wrote the first[5] book on data warehousing, held the first conference on data warehousing (with Arnie Barnett), wrote the first column in a magazine on data warehousing, has written over 1,000 articles on data warehousing in journals and newsletters, created the first fold-out wall chart for data warehousing, and conducted the first classes on data warehousing.

In 2012, Inmon developed and made public a technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard database format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of Textual ETL. Inmon owns and operates Forest Rim Technology, a company that applies and implements data warehousing solutions executed through textual disambiguation and Textual ETL.[6]

Bill Inmon has published more than 60 books in nine languages and 2,000 articles on data warehousing and data management.
https://en.wikipedia.org/wiki/Bill_Inmon
The following is a list of notable unsolved problems grouped into broad areas of physics.[1]

Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.

There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy.[2][3] Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon).[4]

Origin of Cosmic Magnetic Fields

Observations reveal that magnetic fields are present throughout the universe, from galaxies to galaxy clusters. However, the mechanisms that generated these large-scale cosmic magnetic fields remain unclear. Understanding their origin is a significant unsolved problem in astrophysics.[65]
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_physics
Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks (concurrently performed by processes or threads) across different processors. In contrast to data parallelism, which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.[1] A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.

In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a workflow.[2]

As a simple example, if a system is running code on a 2-processor system (CPUs "a" & "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements as described below.

Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.[3]

Thread-level parallelism (TLP) is the parallelism inherent in an application that runs multiple threads at once. This type of parallelism is found largely in applications written for commercial servers such as databases.
By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory-system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work.

The exploitation of thread-level parallelism has also begun to make inroads into the desktop market with the advent of multi-core microprocessors. This has occurred because, for various reasons, it has become increasingly impractical to increase either the clock speed or the instructions per clock of a single core. If this trend continues, new applications will have to be designed to utilize multiple threads in order to benefit from the increase in potential computing power. This contrasts with previous microprocessor innovations, in which existing code was automatically sped up by running it on a newer/faster computer.

The pseudocode below illustrates task parallelism:

    program:
    ...
    if CPU = "a" then
        do task "A"
    else if CPU = "b" then
        do task "B"
    end if
    ...
    end program

The goal of the program is to do some net total task ("A+B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows.

Code executed by CPU "a":

    program:
    ...
    do task "A"
    ...
    end program

Code executed by CPU "b":

    program:
    ...
    do task "B"
    ...
    end program

This concept can now be generalized to any number of processors. Task parallelism can be supported in general-purpose languages by either built-in facilities or libraries. Examples of fine-grained task-parallel languages can be found in the realm of Hardware Description Languages like Verilog and VHDL.
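The two-task scenario above can be sketched concretely with Python's threading module; the particular tasks "A" and "B" here (a sum and a string reversal) are arbitrary stand-ins chosen for illustration:

```python
import threading

results = {}

def task_a():
    # Task "A": sum a range of numbers.
    results["A"] = sum(range(1000))

def task_b():
    # Task "B": build a reversed string.
    results["B"] = "".join(reversed("parallel"))

# Run the two distinct tasks concurrently, one per thread, mirroring
# CPU "a" doing task "A" while CPU "b" does task "B".
ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start()
tb.start()
ta.join()   # wait for both tasks ("A+B") to complete
tb.join()

print(results["A"], results["B"])  # → 499500 lellarap
```

Note that in CPython the global interpreter lock limits the speedup for CPU-bound tasks like these; threads shine for the I/O- and latency-tolerant workloads described above, while CPU-bound task parallelism typically uses processes instead.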
https://en.wikipedia.org/wiki/Task_parallelism
Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups can act non-trivially (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces).

Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric.

Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics.

In the introduction to his book Topics in Geometric Group Theory, Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see.
In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practiced on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend".[1]: 3

Geometric group theory grew out of combinatorial group theory, which largely studied properties of discrete groups via analyzing group presentations, which describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s,[2] while an early form is found in the 1856 icosian calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal.

In the first half of the 20th century, pioneering work of Max Dehn, Jakob Nielsen, Kurt Reidemeister and Otto Schreier, J. H. C. Whitehead, Egbert van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups.[3] Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Greendlinger in the 1960s[4][5] and further developed by Roger Lyndon and Paul Schupp.[6] It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions, and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre,[7] derives structural algebraic information about groups by studying group actions on simplicial trees.
External precursors of geometric group theory include the study of lattices in Lie groups, especially Mostow's rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by William Thurston's geometrization program.

The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by the 1987 monograph of Mikhail Gromov "Hyperbolic groups"[8] that introduced the notion of a hyperbolic group (also known as a word-hyperbolic, Gromov-hyperbolic or negatively curved group), which captures the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph Asymptotic Invariants of Infinite Groups,[9] that outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups,[10][11][12] and the phrase "geometric group theory" started appearing soon afterwards (see e.g.[13]).

Notable themes and developments in geometric group theory in the 1990s and 2000s include: The following examples are often studied in geometric group theory: These texts cover geometric group theory and related topics.
https://en.wikipedia.org/wiki/Geometric_group_theory
The Shannon number, named after the American mathematician Claude Shannon, is a conservative lower bound of the game-tree complexity of chess of 10^120, based on an average of about 10^3 possibilities for a pair of moves consisting of a move for White followed by a move for Black, and a typical game lasting about 40 such pairs of moves.

Shannon showed a calculation for the lower bound of the game-tree complexity of chess, resulting in about 10^120 possible games, to demonstrate the impracticality of solving chess by brute force, in his 1950 paper "Programming a Computer for Playing Chess".[1] (This influential paper introduced the field of computer chess.) Shannon also estimated the number of possible positions to be of the general order of 63!/(32!·(8!)²), or roughly 3.7×10^43. This includes some illegal positions (e.g., pawns on the first rank, both kings in check) and excludes legal positions following captures and promotions.

After each player has moved a piece five times (10 plies), there are 69,352,859,712,417 possible games that could have been played.

Taking Shannon's numbers into account, Victor Allis calculated an upper bound of 5×10^52 for the number of positions, and estimated the true number to be about 10^50.[4] Later work proved an upper bound of 8.7×10^45,[5] and showed an upper bound of 4×10^37 in the absence of promotions.[6][7] Allis also estimated the game-tree complexity to be at least 10^123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is roughly estimated to be 10^80.
John Tromp and Peter Österlund estimated the number of legal chess positions with a 95% confidence level at (4.822 ± 0.028)×10^44, based on an efficiently computable bijection between integers and chess positions.[5]

As a comparison to the Shannon number, if chess is analyzed for the number of "sensible" games that can be played (not counting ridiculous or obvious game-losing moves such as moving a queen to be immediately captured by a pawn without compensation), then the result is closer to around 10^40 games. This is based on having a choice of about three sensible moves at each ply (half-move), and a game length of 80 plies (or, equivalently, 40 moves).[8]
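Shannon's headline bound is simple enough to verify directly: roughly 10^3 choices per White/Black move pair, compounded over about 40 pairs. A minimal Python check:

```python
# Shannon's estimate: ~10^3 possibilities per pair of moves
# (one White move followed by one Black move), over ~40 pairs.
choices_per_pair = 10**3
move_pairs = 40
games = choices_per_pair ** move_pairs

print(games == 10**120)  # → True
```

The same arithmetic shows why the bound is so coarse: changing the branching estimate to 35 per ply over 80 plies, as Allis did, already moves the result to roughly 35^80 ≈ 10^123.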
https://en.wikipedia.org/wiki/Shannon_number
Speculative Store Bypass (SSB) (CVE-2018-3639) is the name given to a hardware security vulnerability and its exploitation that takes advantage of speculative execution in a similar way to the Meltdown and Spectre security vulnerabilities.[1] It affects the ARM, AMD and Intel families of processors. It was discovered by researchers at the Microsoft Security Response Center and Google Project Zero (GPZ).[2] After being leaked on 3 May 2018 as part of a group of eight additional Spectre-class flaws provisionally named Spectre-NG,[3][4][5][6] it was first disclosed to the public as "Variant 4" on 21 May 2018, alongside a related speculative execution vulnerability designated "Variant 3a".[7][1]

Speculative execution exploit Variant 4[8] is referred to as Speculative Store Bypass (SSB)[1][9] and has been assigned CVE-2018-3639.[7] SSB is named Variant 4, but it is the fifth variant in the Spectre-Meltdown class of vulnerabilities.[7]

Steps involved in the exploit:[1]

Intel claims that web browsers that have already been patched to mitigate Spectre Variants 1 and 2 are partially protected against Variant 4.[7] Intel said in a statement that the likelihood of end users being affected was "low" and that not all protections would be on by default due to some impact on performance.[10] The Chrome JavaScript team confirmed that effective mitigation of Variant 4 in software is infeasible, in part due to the performance impact.[11]

Intel is planning to address Variant 4 by releasing a microcode patch that creates a new hardware flag named Speculative Store Bypass Disable (SSBD).[7][2][12] A stable microcode patch is yet to be delivered, with Intel suggesting that the patch will be ready "in the coming weeks".[needs update][7] Many operating system vendors will be releasing software updates to assist with mitigating Variant 4;[13][2][14] however, microcode/firmware updates are required for the software updates to have an effect.[13]
https://en.wikipedia.org/wiki/Speculative_Store_Bypass
Irregular warfare (IW) is defined in United States joint doctrine as "a violent struggle among state and non-state actors for legitimacy and influence over the relevant populations" and in U.S. law as "Department of Defense activities not involving armed conflict that support predetermined United States policy and military objectives conducted by, with, and through regular forces, irregular forces, groups, and individuals."[1][2] In practice, control of institutions and infrastructure is also important. Concepts associated with irregular warfare are older than the term itself.[3]

Irregular warfare favors indirect warfare and asymmetric warfare approaches, though it may employ the full range of military and other capabilities in order to erode the adversary's power, influence, and will. It is inherently a protracted struggle that will test the resolve of a state and its strategic partners.[4][5][6][7][8]

The term "irregular warfare" in joint doctrine was settled upon in distinction from "traditional warfare" and "unconventional warfare", and to differentiate it as such; it is unrelated to the distinction between "regular" and "irregular forces".[9]

One of the earliest known uses of the term irregular warfare is Charles Edward Callwell's classic 1896 publication for the United Kingdom War Office, Small Wars: Their Principles and Practices, where he noted in defining 'small wars': "Small wars include the partisan warfare which usually arises when trained soldiers are employed in the quelling of sedition and of insurrections in civilised countries; they include campaigns of conquest when a Great Power adds the territory of barbarous races to its possessions; and they include punitive expeditions against tribes bordering upon distant colonies....Whenever a regular army finds itself engaged upon hostilities against irregular forces, or forces which in their armament, their organization, and their discipline are palpably inferior to it, the conditions of the campaign become distinct
from the conditions of modern regular warfare, and it is with hostilities of this nature that this volume proposes to deal. Upon the organization of armies for irregular warfare valuable information is to be found in many instructive military works, official and non-official."[10]

A similar usage appears in the 1986 English edition of Modern Irregular Warfare in Defense Policy and as a Military Phenomenon by former Nazi officer Friedrich August Freiherr von der Heydte. The original 1972 German edition of the book is titled Der moderne Kleinkrieg als wehrpolitisches und militärisches Phänomen. The German word "Kleinkrieg" literally translates as "small war".[11] The word "irregular", used in the title of the English translation of the book, seems to be a reference to non-"regular armed forces" as per the Third Geneva Convention.

Another early use of the term is in a 1996 Central Intelligence Agency (CIA) document by Jeffrey B. White.[12] Major military doctrine developments related to IW were made between 2004 and 2007[13] as a result of the September 11 attacks on the United States.[14][15][unreliable source?] A key proponent of IW within the US Department of Defense (DoD) is Michael G. Vickers, a former paramilitary officer in the CIA.[16] The CIA's Special Activities Center (SAC) is the premier American paramilitary clandestine unit for creating and combating irregular warfare units.[17][18][19] For example, SAC paramilitary officers created and led successful irregular units from the Hmong tribe during the war in Laos in the 1960s,[20] from the Northern Alliance against the Taliban during the war in Afghanistan in 2001,[21] and from the Kurdish Peshmerga against Ansar al-Islam and the forces of Saddam Hussein during the war in Iraq in 2003.[22][23][24]

Nearly all modern wars include at least some element of irregular warfare. Since the time of Napoleon, approximately 80% of conflict has been irregular in nature.
However, the following conflicts may be considered to exemplify irregular warfare:[3][12]

Activities and types of conflict included in IW are:

According to the DoD, there are five core activities of IW:

As a result of DoD Directive 3000.07,[6] United States armed forces are studying[when?] irregular warfare concepts using modeling and simulation.[29][30][31]

There have been several military wargames and military exercises associated with IW, including:

Individuals:
https://en.wikipedia.org/wiki/Irregular_warfare
A heterogram (from hetero-, meaning 'different', + -gram, meaning 'written') is a word, phrase, or sentence in which no letter of the alphabet occurs more than once. The terms isogram and nonpattern word have also been used to mean the same thing.[1][2][3]

It is not clear who coined or popularized the term "heterogram". The concept appears in Dmitri Borgmann's 1965 book Language on Vacation: An Olio of Orthographical Oddities, but he uses the term isogram.[4] In a 1985 article, Borgmann claims to have "launched" the term isogram then.[5] He also suggests an alternative term, asogram, to avoid confusion with lines of constant value such as contour lines, but uses isogram in the article itself. Isogram has also been used to mean a string where each letter present is used the same number of times.[6][2][7] Multiple terms have been used to describe words where each letter used appears a certain number of times. For example, a word where every featured letter appears twice, like "noon", might be called a pair isogram,[8] a second-order isogram,[2] or a 2-isogram.[3]

A perfect pangram is an example of a heterogram, with the added restriction that it uses all the letters of the alphabet.

A ten-letter heterogram can be used as the key to a substitution cipher for numbers, with the heterogram encoding the string 1234567890 or 0123456789. This is used in businesses where salespeople and customers traditionally haggle over sale prices, such as used-car lots and pawn shops. The nominal value or minimum sale price for an item can be listed on a tag for the salesperson's reference while being visible but meaningless to the customer.[9][10] A twelve-letter cipher could be used to indicate months of the year.

In the book Language on Vacation: An Olio of Orthographical Oddities, Dmitri Borgmann tries to find the longest such word. The longest one he found was "dermatoglyphics", at 15 letters.
He coins several longer hypothetical words, such as "thumbscrew-japingly" (18 letters, defined as "as if mocking a thumbscrew") and, with the "uttermost limit in the way of verbal creativeness", "pubvexingfjord-schmaltzy" (23 letters, defined as "as if in the manner of the extreme sentimentalism generated in some individuals by the sight of a majestic fjord, which sentimentalism is annoying to the clientele of an English inn").[4] The word "subdermatoglyphic" was constructed by Edward R. Wolpow.[11] Later, in the book Making the Alphabet Dance,[12] Ross Eckler reports the word "subdermatoglyphic" (17 letters) can be found in an article by Lowell Goldsmith called Chaos: To See a World in a Grain of Sand and a Heaven in a Wild Flower.[13] He also found the name "Melvin Schwarzkopf" (17 letters), a man living in Alton, Illinois, and proposed the name "Emily Jung Schwartzkopf" (21 letters). In an elaborate story, Eckler talked about a group of scientists who name the unavoidable urge to speak in pangrams the "Hjelmqvist-Gryb-Zock-Pfund-Wax syndrome". The longest German heterogram is "Heizölrückstoßabdämpfung" (heating oil recoil dampening), which uses 24 of the 30 letters in the German alphabet, as ä, ö, ü, and ß are considered distinct letters from a, o, u, and s in German.[citation needed] It is closely followed by "Boxkampfjuryschützlinge" (boxing-match jury protégés) and "Zwölftonmusikbücherjagd" (twelve-tone music book chase) with 23 letters.[citation needed] There are hundreds of eleven-letter isograms, over one thousand ten-letter isograms, and thousands of such nine-letter words.[14]
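The heterogram property and the ten-letter price cipher described above are easy to make concrete. The following Python sketch is illustrative only: the helper names and the choice of "pathfinder" as a ten-letter heterogram key are this example's own, not from the article.

```python
def is_heterogram(text: str) -> bool:
    """True if no letter occurs more than once (non-letters are ignored)."""
    letters = [c.lower() for c in text if c.isalpha()]
    return len(letters) == len(set(letters))

def encode_price(price: int, key: str) -> str:
    """Encode a price with a ten-letter heterogram keyed to digits 0-9."""
    assert len(key) == 10 and is_heterogram(key)
    return "".join(key[int(d)] for d in str(price))

print(is_heterogram("dermatoglyphics"))   # True: all 15 letters distinct
print(is_heterogram("noon"))              # False: letters repeat

# "pathfinder" is a ten-letter heterogram; digit d maps to its (d+1)th letter
# under the 0123456789 convention, so 1250 encodes as "atip".
print(encode_price(1250, "pathfinder"))   # atip
```

A salesperson who knows the key can read "atip" off a tag as 1250, while the customer sees only letters.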
https://en.wikipedia.org/wiki/Isogram
A best current practice, abbreviated as BCP,[1] is a de facto level of performance in engineering and information technology. It is more flexible than a standard, since techniques and tools are continually evolving. The Internet Engineering Task Force publishes Best Current Practice documents in a numbered document series. Each document in this series is paired with the currently valid Request for Comments (RFC) document. BCP was introduced in RFC 1818.[2] BCPs document guidelines, processes, methods, and other matters not suitable for standardization. The Internet standards process itself is defined in a series of BCPs, as is the formal organizational structure of the IETF, Internet Engineering Steering Group, Internet Architecture Board, and other groups involved in that process. IETF's separate Standards Track (STD) document series defines the fully standardized network protocols of the Internet, such as the Internet Protocol, the Transmission Control Protocol, and the Domain Name System. Each RFC number refers to a specific version of a document on the Standards Track, but the BCP number refers to the most recent revision of the document. Thus, citations often reference both the BCP number and the RFC number. Example citations for BCPs are: BCP 38, RFC 2827.
https://en.wikipedia.org/wiki/Best_current_practice
The International Software Testing Qualifications Board (ISTQB) is a software testing certification board that operates internationally.[1] Founded in Edinburgh in November 2002, the ISTQB is a non-profit association legally registered in Belgium. ISTQB Certified Tester is a standardized qualification for software testers, and the certification is offered by the ISTQB. The qualifications are based on a syllabus, and there is a hierarchy of qualifications and guidelines for accreditation and examination. More than 1 million ISTQB exams have been delivered and over 721,000 certifications issued; the ISTQB consists of 67 member boards worldwide representing more than 100 countries as of April 2021.[2] The current ISTQB product portfolio follows a matrix approach[3] characterized by ISTQB streams, which focus on: Pre-conditions relate to certification exams[4] and provide a natural progression through the ISTQB scheme, which helps people pick the right certificate and informs them about what they need to know. The ISTQB Core Foundation is a pre-condition for any other certification. Additional rules for ISTQB pre-conditions are summarized in the following: Such rules are depicted graphically in the ISTQB Product Portfolio map. ISTQB provides a list of referenced books from some previous syllabi online.[5] The Foundation and Advanced exams consist of multiple-choice tests.[6] Certification is valid for life (Foundation Level and Advanced Level), and there is no requirement for recertification. ISTQB member boards are responsible for the quality and the auditing of the examination. Worldwide there are testing boards in 67 countries (as of April 2021). Authorized exam providers are also able to offer exams, including e-exams. The current list of exam providers can be found on the dedicated page.[7] The current ISTQB Foundation Level certification is based on the 2023 syllabus.
The Foundation Level qualification is suitable for anyone who needs to demonstrate practical knowledge of the fundamental concepts of software testing, including people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers.[8] It is also appropriate for individuals who need a basic understanding of software testing, including project managers, quality managers, software development managers, business analysts, IT directors and management consultants.[8] The different Advanced Level exams are more practical and require deeper knowledge in special areas. Test Manager deals with planning and control of the test process. Test Analyst concerns, among other things, reviews and black-box testing methods. Technical Test Analyst includes component tests (also called unit tests), requiring knowledge of white-box testing and non-functional testing methods – this section also includes test tools.
https://en.wikipedia.org/wiki/International_Software_Testing_Qualifications_Board
Code Access Security (CAS), in the Microsoft .NET framework, is Microsoft's solution to prevent untrusted code from performing privileged actions. When the CLR loads an assembly it will obtain evidence for the assembly and use this to identify the code group that the assembly belongs to. A code group contains a permission set (one or more permissions). Code that performs a privileged action will perform a code access demand, which will cause the CLR to walk up the call stack and examine the permission set granted to the assembly of each method in the call stack. The code groups and permission sets are determined by the administrator of the machine, who defines the security policy. Microsoft considers CAS obsolete and discourages its use.[1] It is also not available in .NET Core and .NET. Evidence can be any information associated with an assembly. The default evidence used by .NET code access security includes: A developer can use custom evidence (so-called assembly evidence), but this requires writing a security assembly, and in version 1.1[clarification needed] of .NET this facility does not work. Evidence based on a hash of the assembly is easily obtained in code, for example via a short C# code clause. A policy is a set of expressions that uses evidence to determine a code group membership. A code group gives a permission set for the assemblies within that group. There are four policies in .NET: The first three policies are stored in XML files and are administered through the .NET Configuration Tool 1.1 (mscorcfg.msc). The final policy is administered through code for the current application domain. Code access security will present an assembly's evidence to each policy and will then take the intersection (that is, the permissions common to all the generated permission sets) as the permissions granted to the assembly.
By default, the Enterprise, User, and AppDomain policies give full trust (that is, they allow all assemblies to have all permissions) and the Machine policy is more restrictive. Since the intersection is taken, this means that the final permission set is determined by the Machine policy. Note that the policy system was eliminated in .NET Framework 4.0.[2] Code groups associate a piece of evidence with a named permission set. The administrator uses the .NET Configuration Tool to specify a particular type of evidence (for example, Site) and a particular value for that evidence (for example, www.mysite.com), and then identifies the permission set that the code group will be granted. Code that performs some privileged action will make a demand for one or more permissions. The demand makes the CLR walk the call stack, and for each method the CLR will ensure that the demanded permissions are in the method's assembly's granted permissions. If a permission is not granted, then a security exception is thrown. This prevents downloaded code from performing privileged actions. For example, if an assembly is downloaded from an untrusted site, the assembly will not have any file IO permissions, and so if this assembly attempts to access a file, an exception is thrown, preventing the call.
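The stack-walk behaviour of a demand can be sketched as follows. This is a minimal Python simulation of the idea, not the .NET API: the names `demand`, `SecurityException`, and the frame dictionaries are hypothetical, and a real CLR walk checks assemblies rather than plain dictionaries.

```python
# Hypothetical sketch of a CAS demand: every assembly on the call stack
# must hold the demanded permission, otherwise a security exception is
# raised and the privileged action is prevented.

class SecurityException(Exception):
    pass

def demand(permission, call_stack):
    """Walk the stack; each frame records its assembly's granted set."""
    for frame in call_stack:
        if permission not in frame["granted"]:
            raise SecurityException(
                f"{frame['assembly']} lacks permission {permission!r}")

trusted = {"assembly": "LocalApp", "granted": {"FileIO", "UI"}}
downloaded = {"assembly": "WebCode", "granted": {"UI"}}

demand("FileIO", [trusted])                   # succeeds: full grant
try:
    demand("FileIO", [trusted, downloaded])   # untrusted code on the stack
except SecurityException as e:
    print("blocked:", e)
```

Note how the presence of a single low-trust assembly anywhere on the stack blocks the action, which is exactly what stops downloaded code from laundering a file access through trusted callers.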
https://en.wikipedia.org/wiki/Code_Access_Security
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).[1] A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold.[2] In statistics, dummy variables represent a similar technique for representing categorical data. One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder, as the state machine is in the nth state if, and only if, the nth bit is high. A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state. An address decoder converts from binary to one-hot representation. A priority encoder converts from one-hot representation to binary. In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary.[5] The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important.
For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'. In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing part.[6] Categorical data can be either nominal or ordinal.[7] Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding.[8] An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data.[9] Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm. For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE).[citation needed] Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.[citation needed] Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.[10] In practical usage, this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables.
An example would be the dummyVars function of the caret library in R.[11]
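The one-new-column-per-category transformation described above needs no library at all. The following is a minimal sketch in plain Python (the function name `one_hot` is this example's own):

```python
# One-hot encode a nominal variable: one 0/1 dummy column is created for
# each unique category value; exactly one entry per row is 1.

def one_hot(values):
    categories = sorted(set(values))
    rows = [[1 if v == c else 0 for c in categories] for v in values]
    return rows, categories

rows, cols = one_hot(["red", "green", "red", "blue"])
print(cols)   # ['blue', 'green', 'red']
print(rows)   # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

With three unique colours the single column becomes three dummy columns, which illustrates the 'big p' growth the text warns about when a column has many unique values.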
https://en.wikipedia.org/wiki/One-hot_code
Metaknowledge or meta-knowledge is knowledge about knowledge.[1] Some authors divide meta-knowledge into orders: Other authors call zero-order meta-knowledge first-order knowledge, and call first-order meta-knowledge second-order knowledge; meta-knowledge is also known as higher-order knowledge.[3] Meta-knowledge is a fundamental conceptual instrument in research and scientific domains such as knowledge engineering, knowledge management, and others dealing with the study of and operations on knowledge, seen as a unified object/entity, abstracted from local conceptualizations and terminologies. Examples of first-level individual meta-knowledge are methods of planning, modeling, tagging, learning, and every modification of a domain knowledge. Indeed, universal meta-knowledge frameworks have to be valid for the organization of meta-levels of individual meta-knowledge. Meta-knowledge may be automatically harvested from electronic publication archives to reveal patterns in research and relationships between researchers and institutions, and to identify contradictory results.[1]
https://en.wikipedia.org/wiki/Meta-knowledge
Description is any type of communication that aims to make vivid a place, object, person, group, or other physical entity.[1] It is one of four rhetorical modes (also known as modes of discourse), along with exposition, argumentation, and narration.[2] Fiction writing specifically has modes such as action, exposition, description, dialogue, summary, and transition.[3] Author Peter Selgin refers to methods, including action, dialogue, thoughts, summary, scenes, and description.[4] Description is the mode for transmitting a mental image of the particulars of a story. Together with dialogue, narration, exposition, and summarization, it is one of the most widely recognized of the fiction-writing modes. As stated in Writing from A to Z, edited by Kirk Polking, description is more than the amassing of details; it is bringing something to life by carefully choosing and arranging words and phrases to produce the desired effect.[5] A purple patch is an over-written passage in which the writer has strained too hard to achieve an impressive effect, by elaborate figures or other means. The phrase (Latin: purpureus pannus) was first used by the Roman poet Horace in his Ars Poetica (c. 20 BC) to denote an irrelevant and excessively ornate passage; the sense of irrelevance is normally absent in modern usage, although such passages are usually incongruous. By extension, purple prose is lavishly figurative, rhythmic, or otherwise overwrought.[6] In philosophy, the nature of description has been an important question since Bertrand Russell's classical texts.[7]
https://en.wikipedia.org/wiki/Description
Tags is a Unicode block containing formatting tag characters. The block is designed to mirror ASCII. It was originally intended for language tags, but has now been repurposed as emoji modifiers, specifically for region flags. U+E0001 and U+E0020–U+E007F were originally intended for invisibly tagging texts by language,[3] but that use is no longer recommended.[4] All of those characters were deprecated in Unicode 5.1. With the release of Unicode 8.0, U+E0020–U+E007E are no longer deprecated characters. The change was made "to clear the way for the potential future use of tag characters for a purpose other than to represent language tags".[5] Unicode states that "the use of tag characters to represent language tags in a plain text stream is still a deprecated mechanism for conveying language information about text".[5] With the release of Unicode 9.0, U+E007F is no longer a deprecated character. (U+E0001 LANGUAGE TAG remains deprecated.) The release of Emoji 5.0 in May 2017[6] considers these characters to be emoji for use as modifiers in special sequences. The only usage specified is for representing the flags of regions, alongside the use of Regional Indicator Symbols for national flags.[7] These sequences consist of U+1F3F4 🏴 WAVING BLACK FLAG followed by a sequence of tags corresponding to the region as coded in the CLDR, then U+E007F CANCEL TAG. For example, using the tags for "gbeng" (🏴󠁧󠁢󠁥󠁮󠁧󠁿) will cause some systems to display the flag of England, those for "gbsct" (🏴󠁧󠁢󠁳󠁣󠁴󠁿) the flag of Scotland, and those for "gbwls" (🏴󠁧󠁢󠁷󠁬󠁳󠁿) the flag of Wales.[7] The tag sequences are derived from ISO 3166-2, but sequences representing other subnational flags (for example, US states) are also possible using this mechanism.
However, as of Unicode version 12.0, only the three flag sequences listed above are "Recommended for General Interchange" by the Unicode Consortium, meaning they are "most likely to be widely supported across multiple platforms".[8] Tags have been used to create invisible prompt injections on LLMs.[9] The following Unicode-related documents record the purpose and process of defining specific characters in the Tags block:
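Because the Tags block mirrors ASCII at U+E0000, a subnational flag sequence can be built mechanically: shift each ASCII letter of the CLDR region code into the tag range, prefix the waving black flag, and append CANCEL TAG. A small sketch (the helper name `subregion_flag` is this example's own):

```python
# Build an emoji tag sequence for a subnational flag: U+1F3F4 WAVING
# BLACK FLAG, the region code spelled in tag characters (ASCII + 0xE0000),
# then U+E007F CANCEL TAG.

TAG_BASE = 0xE0000  # the Tags block mirrors ASCII at this offset

def subregion_flag(cldr_code: str) -> str:
    tags = "".join(chr(TAG_BASE + ord(c)) for c in cldr_code)
    return "\U0001F3F4" + tags + "\U000E007F"

england = subregion_flag("gbeng")   # flag of England on supporting systems
print([hex(ord(c)) for c in england])
# ['0x1f3f4', '0xe0067', '0xe0062', '0xe0065', '0xe006e', '0xe0067', '0xe007f']
```

The tag characters themselves render as invisible, which is also why this mechanism has been abused for the invisible prompt injections mentioned above.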
https://en.wikipedia.org/wiki/Tags_(Unicode_block)
In computer science, a parallel random-access machine (parallel RAM or PRAM) is a shared-memory abstract machine. As its name indicates, the PRAM is intended as the parallel-computing analogy to the random-access machine (RAM) (not to be confused with random-access memory).[1] In the same way that the RAM is used by sequential-algorithm designers to model algorithmic performance (such as time complexity), the PRAM is used by parallel-algorithm designers to model parallel algorithmic performance (such as time complexity, where the number of processors assumed is typically also stated). Similar to the way in which the RAM model neglects practical issues, such as access time to cache memory versus main memory, the PRAM model neglects such issues as synchronization and communication, but provides any (problem-size-dependent) number of processors. Algorithm cost, for instance, is estimated using two parameters: O(time) and O(time × processor_number). Read/write conflicts, commonly termed interlocking, in accessing the same shared memory location simultaneously are resolved by one of the following strategies: Here, E and C stand for 'exclusive' and 'concurrent' respectively. The read causes no discrepancies, while the concurrent write is further defined as: Several simplifying assumptions are made while considering the development of algorithms for PRAM. They are: These kinds of algorithms are useful for understanding the exploitation of concurrency, dividing the original problem into similar sub-problems and solving them in parallel. The introduction of the formal 'P-RAM' model in Wyllie's 1979 thesis[4] had the aim of quantifying analysis of parallel algorithms in a way analogous to the Turing machine. The analysis focused on a MIMD model of programming using a CREW model, but showed that many variants, including implementing a CRCW model and implementing on an SIMD machine, were possible with only constant overhead.
PRAM algorithms cannot be parallelized with the combination of CPU and dynamic random-access memory (DRAM), because DRAM does not allow concurrent access to a single bank (not even to different addresses in the bank); but they can be implemented in hardware, or with reads and writes to the internal static random-access memory (SRAM) blocks of a field-programmable gate array (FPGA), using a CRCW algorithm. However, the test for practical relevance of PRAM (or RAM) algorithms depends on whether their cost model provides an effective abstraction of some computer; the structure of that computer can be quite different from the abstract model. The knowledge of the layers of software and hardware that need to be inserted is beyond the scope of this article. But articles such as Vishkin (2011) demonstrate how a PRAM-like abstraction can be supported by the explicit multi-threading (XMT) paradigm, and articles such as Caragea & Vishkin (2011) demonstrate that a PRAM algorithm for the maximum flow problem can provide strong speedups relative to the fastest serial program for the same problem. The article Ghanim, Vishkin & Barua (2018) demonstrated that PRAM algorithms as-is can achieve competitive performance even without any additional effort to cast them as multi-threaded programs on XMT. A SystemVerilog implementation can find the maximum value in an array in only 2 clock cycles: it compares all combinations of the elements in the array on the first clock, and merges the result on the second clock. It uses CRCW memory; m[i] <= 1 and maxNo <= data[i] are written concurrently. The concurrency causes no conflicts because the algorithm guarantees that the same value is written to the same memory. Such code can be run on FPGA hardware.
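The two-clock CRCW maximum-finding scheme described above can be sketched as a sequential simulation in Python (this is not the SystemVerilog original; the name `crcw_max` is this example's own, and the nested loops stand in for comparisons that a PRAM performs simultaneously):

```python
# Sketch of CRCW constant-time maximum finding. "Clock 1": all n*n
# comparisons happen conceptually in parallel; an element that loses any
# comparison clears its candidate bit m[i]. "Clock 2": every processor
# whose candidate bit survived writes its value concurrently; all such
# writes carry the same (maximum) value, so the common-CRCW model
# resolves them without conflict.

def crcw_max(data):
    n = len(data)
    m = [1] * n                    # candidate bits, one per element
    for i in range(n):             # clock 1 (parallel in the PRAM model)
        for j in range(n):
            if data[i] < data[j]:
                m[i] = 0           # i cannot be the maximum
    # clock 2: concurrent writes of the same maximum value
    return next(data[i] for i in range(n) if m[i])

print(crcw_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

If the maximum occurs more than once, several candidate bits survive, but every surviving processor writes the identical value, which is exactly the condition the common-CRCW write rule requires.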
https://en.wikipedia.org/wiki/Parallel_random_access_machine
Graphics are two-dimensional images. Graphic(s) or The Graphic may also refer to:
https://en.wikipedia.org/wiki/Graphic_(disambiguation)
JPL sequences or JPL codes consist of two linear feedback shift registers (LFSRs) whose code sequence lengths La and Lb must be relatively prime (coprime).[1] In this case the code sequence length Lc of the generated overall sequence is equal to: Lc = La · Lb. It is also possible for more than two LFSRs to be interconnected through multiple XORs at the output, for as long as all code sequence lengths of the individual LFSRs are relatively prime to one another. JPL sequences were originally developed in the Jet Propulsion Labs, from which the name for these code sequences is derived. Areas of application include distance measurements utilizing spread spectrum signals for satellites and in space technology. They are also utilized in the more precise military P/Y code used in the Global Positioning System (GPS).[2] However, they are currently being replaced by the new M-code. Due to the relatively long spreading sequences, they can be used to measure relatively long ranges without ambiguities, as required for deep space missions. With a rough synchronization between receiver and transmitter, this can be achieved with shorter sequences as well. Their major advantage is that they produce relatively long sequences with only two LFSRs, which makes them energy efficient and very hard to detect due to the huge spreading factor. The same structure can be used to realize a dither generator, used as an additive noise source to remove a numerical bias in digital computations (due to fixed-point arithmetic, which has one more negative than positive number, i.e. the mean value is slightly negative).
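The construction can be sketched with two small registers. In this illustrative Python example the register sizes and tap positions are this sketch's own maximal-length choices (periods 2³−1 = 7 and 2⁴−1 = 15, which are coprime); real JPL ranging codes use far longer registers.

```python
# Sketch of a JPL-style sequence: XOR the outputs of two LFSRs whose
# periods (7 and 15) are coprime, giving an overall period of 7 * 15 = 105.

def lfsr(state, taps, nbits):
    """Fibonacci LFSR: yields one output bit per step, forever."""
    while True:
        yield state & 1
        fb = 0
        for t in taps:                       # feedback = XOR of tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))

def jpl_sequence(length):
    a = lfsr(0b111, taps=(0, 1), nbits=3)    # maximal length: period 7
    b = lfsr(0b1111, taps=(0, 1), nbits=4)   # maximal length: period 15
    return [next(a) ^ next(b) for _ in range(length)]

seq = jpl_sequence(210)
assert seq[:105] == seq[105:]    # the combined sequence repeats every 105 bits
print("combined period:", 105)
```

Two short registers thus yield a sequence 105 bits long, which is the energy-efficiency argument made above: the combined period grows multiplicatively while the hardware grows only additively.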
https://en.wikipedia.org/wiki/JPL_sequence
Digital architecture refers to aspects of architecture that feature digital technologies or considers digital platforms as online spaces. The emerging field of digital architectures therefore applies both to classic architecture and to the emerging study of social media technologies. Within classic architectural studies, the terminology is applied to digital skins that can have images streamed to them and their appearance altered. A headquarters building design for Boston television and radio station WGBH by Polshek Partnership has been discussed as an example of digital architecture and includes a digital skin.[1] Within social media research, digital architecture refers to the technical protocols that enable, constrain, and shape user behavior in a virtual space.[2] Features of social media platforms, such as how they facilitate user connections, enable functionality, and generate data, are considered key properties that distinguish one digital architecture from another. Architecture created digitally might not involve the use of actual materials (brick, stone, glass, steel, wood).[3] It relies on "sets of numbers stored in electromagnetic format" used to create representations and simulations that correspond to material performance and to map out built artifacts.[3] It thus can involve digital twinning for planned construction or for maintenance management.
Digital architecture does not just represent "ideated space"; it also creates places for human interaction that do not resemble physical architectural spaces.[3] Examples of these places in the "Internet Universe" and cyberspace include websites, multi-user dungeons, MOOs, and web chat rooms.[3] Digital architecture allows complex calculations that delimit architects and allow a diverse range of complex forms to be created with great ease using computer algorithms.[4] The new genre of "scripted, iterative, and indexical architecture" produces a proliferation of formal outcomes, leaving the designer the role of selection and increasing the possibilities in architectural design.[4] This has "re-initiated a debate regarding curvilinearity, expressionism and role of technology in society", leading to new forms of non-standard architecture by architects such as Zaha Hadid, Kas Oosterhuis and UN Studio.[4] A conference held in London in 2009, named "Digital Architecture London", introduced the latest developments in digital design practice. The Far Eastern International Digital Design Award (The Feidad Award) has been in existence since 2000 and honours "innovative design created with the aid of digital media." In 2005 a jury with members including a representative from Quantum Film, Greg Lynn from Greg Lynn FORM, Jacob van Rijs of MVRDV, Gerhard Schmitt, and Birger Sevaldson (Ocean North) chose among submissions "exploring digital concepts such as computing, information, electronic media, hyper-, virtual-, and cyberspace in order to help define and discuss future space and architecture in the digital age."[5] The concept of digital architectures has a long history in Internet scholarship.
Prior to social media, scholars focused on how the structure of an online space – such as a forum, website, or blog – shaped the formation of publics and political discourses.[6][7] With the rapid rise of social media, scholars have turned their attention to how the architectural design of social media platforms affects the behavior of influential users, such as political campaigns.[8] This line of research differs from the affordances approach,[9] which focuses on the relationships between users and technology, rather than the digital architecture of the platform.
https://en.wikipedia.org/wiki/Digital_architecture
The Data Analysis and Real World Interrogation Network (DARWIN EU) is a European Union (EU) initiative coordinated by the European Medicines Agency (EMA) to generate and utilize real-world evidence (RWE) to support the evaluation and supervision of medicines across the EU. The project aims to enhance decision-making in regulatory processes by drawing on anonymized data from routine healthcare settings.[1][2][3] DARWIN EU was officially launched in 2022 as part of the EMA's broader strategy to harness big data for public health benefits. The network facilitates access to real-world data from a wide array of sources, including electronic health records, disease registries, hospital databases, and biobanks. These data are standardized using the OMOP (Observational Medical Outcomes Partnership) common data model to ensure interoperability and comparability across datasets.[4][5][6] The key goals of DARWIN EU include: DARWIN EU is managed by a coordination center based at Erasmus University Medical Center in Rotterdam, Netherlands. The center is responsible for expanding the network of data partners, managing study requests, and ensuring the scientific quality of outputs.[1] As of early 2024, DARWIN EU had completed 14 studies and had 11 more underway. The EMA plans to scale up DARWIN EU's capacity to deliver over 140 studies annually by 2025.[1][4] As part of the DARWIN EU project, scientists at Honeywell's Brno branch have developed an AI-powered monitoring system designed to detect early signs of pilot fatigue, inattention, or health issues. Using a camera equipped with artificial intelligence, the system continuously observes the pilot's condition and responds with alerts or wake-up calls if necessary.
Although designed for aviation safety, these technologies could in the future contribute valuable physiological data to the DARWIN EU network, supporting proactive health interventions and contributing to the long-term goals of the European Health Data Space.[7][8] DARWIN EU plays a crucial role in the EU's regulatory ecosystem by integrating real-world data into evidence-based healthcare policymaking. It is instrumental in advancing personalized medicine, pharmacovigilance, and pandemic preparedness through timely, data-driven insights.[1]
https://en.wikipedia.org/wiki/DARWIN_EU
Hummingbird is the codename given to a significant algorithm change in Google Search in 2013. Its name was derived from the speed and accuracy of the hummingbird. The change was announced on September 26, 2013, having already been in use for a month. "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords. It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage. The upgrade marked the most significant change to Google search in years, with more "human" search interactions and a much heavier focus on conversation and meaning.[1] Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation. Google announced "Hummingbird", a new search algorithm, at a September 2013 press event,[2] having already used the algorithm for approximately one month prior to announcement.[3] The "Hummingbird" update was the first major update to Google's search algorithm since the 2010 "Caffeine" search architecture upgrade, but even that was limited primarily to improving the indexing of information rather than sorting through information.[3] Amit Singhal, then-search chief at Google, told Search Engine Land that "Hummingbird" was the most dramatic change of the algorithm since 2001, when he first joined Google.[3][4] Unlike previous search algorithms, which would focus on each individual word in the search query, "Hummingbird" considers the context of the different words together, with the goal that pages matching the meaning do better, rather than pages matching just a few words.[5] The name is derived from the speed and accuracy of the hummingbird.[5] "Hummingbird" is aimed at making interactions more human, in the sense that the search engine is capable of understanding the concepts and
relationships between keywords.[6] It places greater emphasis on page content, making search results more relevant, and looks at the authority of a page, and in some cases the page author, to determine the importance of a website. It uses this information to better lead users to a specific page on a website rather than the standard website homepage.[7] Search engine optimization changed with the addition of "Hummingbird", with web developers and writers encouraged to use natural language when writing on their websites rather than using forced keywords. They were also advised to make effective use of technical website features, such as page linking, on-page elements including title tags, URL addresses and HTML tags, as well as writing high-quality, relevant content without duplication.[8] While keywords within the query still continue to be important, "Hummingbird" adds more strength to long-tailed keywords, effectively catering to the optimization of content rather than just keywords.[7] The use of synonyms has also been optimized; instead of listing results with exact phrases or keywords, Google shows more theme-related results.[9]
https://en.wikipedia.org/wiki/Google_Hummingbird
This is a list of notablesocial software: selected examples ofsocial softwareproducts and services that facilitate a variety of forms of social human contact.
https://en.wikipedia.org/wiki/List_of_social_software
A managed service company (MSC) is a form of company structure in the United Kingdom designed to reduce the individual tax liabilities of the directors and shareholders. This structure was largely born from the IR35 legislation of 1999, which came into force in 2000. In an MSC, workers are appointed as shareholders and may also be directors. As shareholders, they can then receive minimum salary payments and the balance of income as dividends. Usually, the service provider would perform administrative and company secretary duties and offer basic taxation advice. This structure became popular with independent contractors and was used as a way of earning high net returns (up to 85% of gross)[citation needed] compared to PAYE, with few corporate responsibilities. In return, the providers charged a fee for delivering the service. To work within this form, workers must usually pass IR35 tests to ensure they can make dividend payments. In December 2006 the UK Treasury/HMRC introduced draft legislation, "Tackling Managed Service Legislation", which sought to address the use of "composite" structures to avoid income tax and National Insurance on forms of trading that the Treasury deemed as being akin to "employed". After a period of consultation and redrafting, the new legislation became law in April 2007, with additional aspects coming into force in August 2007 and fully in January 2008. A PAYE umbrella company is effectively exempted from the legislation, which also seeks to pass the possible burden of unpaid debt (should a provider "collapse" a structure) to interested parties, e.g. a recruitment agency that has been deemed to encourage or facilitate the scheme. Several MSC providers have since withdrawn from the market and have either converted to PAYE operations or sought to become seen as true accountants rather than scheme promoters. Managed service companies (MSCs) differ from personal service companies (PSCs) in that the MSC manages and controls the affairs of the business, not the contractor.
The 2007 Budget legislated against Managed Service Companies (MSCs) by removing the associated tax advantages for contractors working through them. Prior to this government action, there were several types of MSCs. One of the most common forms was the composite company, where typically up to 20 contractors became non-director shareholders. These contractors received a low salary and expenses, with the remainder paid as dividends. This method of remuneration offered significant financial benefits, as it avoided the payment of national insurance contributions and income tax that would otherwise have been due if the contractor was paid entirely under PAYE (salary). HMRC became increasingly frustrated with the use of MSCs. When investigated, these companies could quickly liquidate (as they held no assets) and begin trading under a new company name the very next day. Following the MSC legislation, it is now the responsibility of an MSC provider to correctly operate PAYE and deduct the necessary tax and national insurance contributions on all income paid to a subcontractor. To strengthen this law, the government has allowed the recovery of any underpaid taxes from relevant third parties, primarily those behind the MSC as well as connected or controlling parties. Some companies still offer variations on these schemes, so it can be confusing for a contractor to know what is legal and what is not. The simplest way[according to whom?] to operate compliantly is to work for one's own PSC, and not delegate control or key decisions to a third-party supplier.
https://en.wikipedia.org/wiki/Managed_service_company
Data retrieval means obtaining data from a database management system (DBMS), such as an object-oriented database (ODBMS). In this case, it is assumed that the data is represented in a structured way, with no ambiguity. To retrieve the desired data, the user presents a set of criteria in a query; the database management system then selects the matching data from the database. The retrieved data may be stored in a file, printed, or viewed on the screen. A query language, such as Structured Query Language (SQL), is used to prepare the queries. SQL is an American National Standards Institute (ANSI) standardized query language developed specifically to write database queries. Each database management system may have its own language, but most are relational.[clarification needed] Reports and queries are the two primary forms in which data is retrieved from a database. There is some overlap between them, but queries generally select a relatively small portion of the database, while reports show larger amounts of data. Queries also present the data in a standard format and usually display it on the monitor, whereas reports allow the output to be formatted as desired and are normally printed. Reports are designed using a report generator built into the database management system.
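As a sketch of this query-based retrieval, the following uses Python's built-in sqlite3 module; the table, column names, and rows are hypothetical examples, not part of any real schema.

```python
# Sketch: retrieving data from a relational database with SQL.
# The user states criteria in the query; the DBMS selects the rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Engineering", 95000.0),
     ("Grace", "Engineering", 105000.0),
     ("Alan", "Research", 99000.0)],
)

# A query selecting a small portion of the database by criteria.
rows = conn.execute(
    "SELECT name, salary FROM employees WHERE dept = ? ORDER BY salary",
    ("Engineering",),
).fetchall()
print(rows)   # [('Ada', 95000.0), ('Grace', 105000.0)]
```

The retrieved rows could equally be written to a file or fed to a report generator, as the article describes.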
https://en.wikipedia.org/wiki/Data_retrieval
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc.[1] Most simulators use discrete-event simulation, in which state variables change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network/protocols would behave under different conditions. A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today, such as 5G, Internet of Things (IoT), wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, and LTE. Most commercial simulators are GUI-driven, while some network simulators are CLI-driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results would include network-level metrics, link metrics, device metrics, etc. Further, drill-down in the form of simulation trace files would also be available; trace files log every packet and every event that occurred in the simulation and are used for analysis.
Most network simulators use discrete-event simulation, in which a list of pending "events" is stored and processed in order, with some events triggering future events, such as the arrival of a packet at one node triggering the arrival of that packet at a downstream node. Network emulation allows users to introduce real devices and applications into a simulated test network that alters packet flow in such a way as to mimic the behavior of a live network. Live traffic can pass through the simulator and be affected by objects within the simulation. The typical methodology is that real packets from a live application are sent to the emulation server (where the virtual network is simulated). The real packet gets "modulated" into a simulation packet, which is demodulated back into a real packet after experiencing effects of loss, errors, delay, jitter, etc., thereby transferring these network effects into the real packet. Thus it is as if the real packet flowed through a real network, when in reality it flowed through the simulated network. Emulation is widely used in the design stage for validating communication networks prior to deployment. There are both free/open-source and proprietary network simulators available, including a number of notable open-source simulators and emulators as well as commercial products. Network simulators provide a cost-effective method for designing and testing networks before deployment. There is a wide variety of network simulators, ranging from the very simple to the very complex; minimally, a network simulator must enable a user to model a network topology and the traffic on it.
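The pending-event list described above can be sketched in a few lines; the topology, link delays, and single-packet workload here are illustrative assumptions, not any particular simulator's model.

```python
# Minimal discrete-event simulation sketch: a priority queue keyed
# on time holds pending events, and processing one event (a packet
# arriving at a node) may schedule future events (its arrival at
# each downstream node after a link delay).
import heapq

def simulate(links, start, t_end=100.0):
    """links: {node: [(neighbor, delay_seconds), ...]}"""
    events = [(0.0, start)]        # pending-event list: (time, node)
    arrivals = []
    seen = set()
    while events:
        t, node = heapq.heappop(events)   # next event in time order
        if t > t_end or node in seen:
            continue
        seen.add(node)
        arrivals.append((t, node))
        for neighbor, delay in links.get(node, []):
            # processing this event triggers a future event
            heapq.heappush(events, (t + delay, neighbor))
    return arrivals

links = {"A": [("B", 1.5)], "B": [("C", 0.5)]}
print(simulate(links, "A"))   # [(0.0, 'A'), (1.5, 'B'), (2.0, 'C')]
```

Real simulators add queuing, packet loss, and protocol state, but the event-queue core is the same.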
https://en.wikipedia.org/wiki/Network_simulation
Adecision cycleordecision loop[1]is a sequence of steps used by an entity on a repeated basis toreach and implement decisionsand to learn from the results. The "decision cycle" phrase has a history of use to broadly categorize various methods of making decisions, going upstream to the need, downstream to the outcomes, and cycling around to connect the outcomes to the needs. A decision cycle is said to occur when an explicitly specifieddecision modelis used to guide adecisionand then the outcomes of that decision are assessed against the need for the decision. This cycle includes specification of desired results (the decision need), tracking of outcomes, and assessment of outcomes against the desired results.
https://en.wikipedia.org/wiki/Decision_cycle
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Perhaps most familiar as a property of arithmetic, e.g. "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, "3 − 5 ≠ 5 − 3"); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many centuries implicitly assumed. Thus, this property was not named until the 19th century, when new algebraic structures started to be studied.[1] A binary operation ∗ on a set S is commutative if x ∗ y = y ∗ x for all x, y ∈ S.[2] An operation that is not commutative is said to be noncommutative.[3] One says that x commutes with y, or that x and y commute under ∗, if x ∗ y = y ∗ x.[4] So, an operation is commutative if every two elements commute.[4] An operation is noncommutative if there are two elements such that x ∗ y ≠ y ∗ x; this does not exclude the possibility that some pairs of elements commute.[3] Some types of algebraic structures involve an operation that does not require commutativity. If this operation is commutative for a specific structure, the structure is often said to be commutative. However, in the case of algebras, the phrase "commutative algebra" refers only to associative algebras that have a commutative multiplication.[18] Records of the implicit use of the commutative property go back to ancient times.
The Egyptians used the commutative property of multiplication to simplify computing products.[19] Euclid is known to have assumed the commutative property of multiplication in his book Elements.[20] Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Nowadays, the commutative property is a well-known and basic property used in most branches of mathematics.[2] The first recorded use of the term commutative was in a memoir by François Servois in 1814, which used the word commutatives when describing functions that have what is now called the commutative property.[21] Commutative is the feminine form of the French adjective commutatif, which is derived from the French noun commutation and the French verb commuter, meaning "to exchange" or "to switch", a cognate of to commute. The term then appeared in English in 1838, in Duncan Gregory's article entitled "On the real nature of symbolical algebra", published in 1840 in the Transactions of the Royal Society of Edinburgh.[22]
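The definition of commuting elements can be illustrated computationally; this is a small sketch in which the two matrices are an arbitrary standard example of a noncommuting pair.

```python
# Sketch: testing whether two elements commute under an operation,
# i.e. whether op(x, y) == op(y, x).
def commute(op, x, y):
    return op(x, y) == op(y, x)

# Addition of numbers is commutative:
assert commute(lambda a, b: a + b, 3, 4)

# 2x2 matrix multiplication is a standard noncommutative example.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(commute(matmul, A, B))   # False: A*B != B*A
```

That the check returns False for one pair is enough to make matrix multiplication noncommutative, even though many other pairs of matrices do commute.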
https://en.wikipedia.org/wiki/Commutative_operation
In computer science, a fingerprinting algorithm is a procedure that maps an arbitrarily large data item (such as a computer file) to a much shorter bit string, its fingerprint, that uniquely identifies the original data for all practical purposes, just as human fingerprints uniquely identify people for practical purposes. This fingerprint may be used for data deduplication purposes. This is also referred to as file fingerprinting, data fingerprinting, or structured data fingerprinting. Fingerprints are typically used to avoid the comparison and transmission of bulky data. For instance, a web browser or proxy server can efficiently check whether a remote file has been modified, by fetching only its fingerprint and comparing it with that of the previously fetched copy. Fingerprint functions may be seen as high-performance hash functions used to uniquely identify substantial blocks of data where cryptographic hash functions may be unnecessary. Special algorithms exist for audio and video fingerprinting. To serve its intended purposes, a fingerprinting algorithm must be able to capture the identity of a file with virtual certainty. In other words, the probability of a collision (two files yielding the same fingerprint) must be negligible compared to the probability of other unavoidable causes of fatal errors (such as the system being destroyed by war or by a meteorite): say, 10^−20 or less. This requirement is somewhat similar to that of a checksum function, but is much more stringent. To detect accidental data corruption or transmission errors, it is sufficient that the checksums of the original file and any corrupted version differ with near certainty, given some statistical model for the errors. In typical situations, this goal is easily achieved with 16- or 32-bit checksums. In contrast, file fingerprints need to be at least 64 bits long to guarantee virtual uniqueness in large file systems (see birthday attack).
When proving the above requirement, one must take into account that files are generated by highly non-random processes that create complicated dependencies among files. For instance, in a typical business network, one usually finds many pairs or clusters of documents that differ only by minor edits or other slight modifications. A good fingerprinting algorithm must ensure that such "natural" processes generate distinct fingerprints, with the desired level of certainty. Computer files are often combined in various ways, such as concatenation (as in archive files) or symbolic inclusion (as with the C preprocessor's #include directive). Some fingerprinting algorithms allow the fingerprint of a composite file to be computed from the fingerprints of its constituent parts. This "compounding" property may be useful in some applications, such as detecting when a program needs to be recompiled. Rabin's fingerprinting algorithm is the prototype of the class.[1] It is fast and easy to implement, allows compounding, and comes with a mathematically precise analysis of the probability of collision. Namely, the probability of two strings r and s yielding the same w-bit fingerprint does not exceed max(|r|, |s|)/2^(w−1), where |r| denotes the length of r in bits. The algorithm requires the previous choice of a w-bit internal "key", and this guarantee holds as long as the strings r and s are chosen without knowledge of the key. Rabin's method is not secure against malicious attacks: an adversarial agent can easily discover the key and use it to modify files without changing their fingerprint. Mainstream cryptographic-grade hash functions generally can serve as high-quality fingerprint functions, are subject to intense scrutiny from cryptanalysts, and have the advantage that they are believed to be safe against malicious attacks. A drawback of cryptographic hash algorithms such as MD5 and SHA is that they take considerably longer to execute than Rabin's fingerprint algorithm.
They also lack proven guarantees on the collision probability. Some of these algorithms, notablyMD5, are no longer recommended for secure fingerprinting. They are still useful for error checking, where purposeful data tampering is not a primary concern. NISTdistributes a software reference library, the AmericanNational Software Reference Library, that uses cryptographic hash functions to fingerprint files and map them to software products. TheHashKeeperdatabase, maintained by theNational Drug Intelligence Center, is a repository of fingerprints of "known to be good" and "known to be bad" computer files, for use in law enforcement applications (e.g. analyzing the contents of seized disk drives). Fingerprinting is currently the most widely applied approach to content similarity detection. This method forms representative digests of documents by selecting a set of multiple substrings (n-grams) from them. The sets represent the fingerprints and their elements are called minutiae.[4][5]
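The "compounding" property described above can be sketched with a polynomial fingerprint. Note this is a simplification: a true Rabin fingerprint works with polynomials over GF(2) modulo a random irreducible polynomial (the "key"), whereas this sketch uses ordinary arithmetic modulo a fixed prime.

```python
# Simplified sketch in the spirit of Rabin's scheme: a string's
# fingerprint is its byte sequence read as a polynomial, evaluated
# at base B modulo Q.  (Real Rabin fingerprints use GF(2) and a
# random irreducible modulus; Q here is fixed for illustration.)
Q = (1 << 61) - 1      # modulus
B = 257                # base, larger than the byte alphabet

def fingerprint(data: bytes) -> int:
    f = 0
    for byte in data:
        f = (f * B + byte) % Q     # Horner's rule
    return f

def compound(f1: int, f2: int, len2: int) -> int:
    # Compounding: the fingerprint of a concatenation, computed from
    # the parts' fingerprints and the second part's length alone.
    return (f1 * pow(B, len2, Q) + f2) % Q

r, s = b"hello ", b"world"
assert compound(fingerprint(r), fingerprint(s), len(s)) == fingerprint(r + s)
```

As the article notes for Rabin's method proper, such a scheme is not secure against an adversary who learns the key (here, B and Q); cryptographic hashes trade speed for that resistance.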
https://en.wikipedia.org/wiki/Fingerprint_(computing)
Titaniumis a very advancedbackdoormalwareAPT, developed byPLATINUM, acybercrimecollective. The malware was uncovered byKaspersky Laband reported on 8 November 2019.[1][2][3][4][5][6][7]According toGlobal Security Mag, "Titanium APT includes a complex sequence of dropping, downloading and installing stages, with deployment of a Trojan-backdoor at the final stage."[2]Much of the sequence is hidden from detection in a sophisticated manner, including hiding datasteganographicallyin aPNG image.[3]In their announcement report, Kaspersky Lab concluded: "The Titanium APT has a very complicated infiltration scheme. It involves numerous steps and requires good coordination between all of them. In addition, none of the files in the file system can be detected as malicious due to the use of encryption andfilelesstechnologies. One other feature that makes detection harder is the mimicking of well-known software. Regarding campaign activity, we have not detected any current activity [as of 8 November 2019] related to the Titanium APT."[1]
https://en.wikipedia.org/wiki/Titanium_(malware)
A web container (also known as a servlet container;[1] compare "webcontainer"[2]) is the component of a web server that interacts with Jakarta Servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet, and ensuring that the URL requester has the correct access rights. A web container handles requests to servlets, Jakarta Server Pages (JSP) files, and other types of files that include server-side code. The web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks. A web container implements the web component contract of the Jakarta EE architecture. This architecture specifies a runtime environment for additional web components, including security, concurrency, lifecycle management, transaction, deployment, and other services. The following is a list of notable applications which implement the Jakarta Servlet specification from the Eclipse Foundation, divided depending on whether they are directly sold or not.
https://en.wikipedia.org/wiki/Web_container
Instatistics,shrinkageis the reduction in the effects of sampling variation. Inregression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting.[1]In particular the value of thecoefficient of determination'shrinks'. This idea is complementary tooverfittingand, separately, to the standard adjustment made in the coefficient of determination to compensate for the subjective effects of further sampling, like controlling for the potential of new explanatory terms improving the model by chance: that is, the adjustment formula itself provides "shrinkage." But the adjustment formula yields an artificial shrinkage. Ashrinkage estimatoris anestimatorthat, either explicitly or implicitly, incorporates the effects of shrinkage. In loose terms this means that a naive or raw estimate is improved by combining it with other information. The term relates to the notion that the improved estimate is made closer to the value supplied by the 'other information' than the raw estimate. In this sense, shrinkage is used toregularizeill-posedinferenceproblems. Shrinkage is implicit inBayesian inferenceand penalized likelihood inference, and explicit inJames–Stein-type inference. In contrast, simple types ofmaximum-likelihoodandleast-squares estimationprocedures do not include shrinkage effects, although they can be used within shrinkage estimation schemes. Many standard estimators can beimproved, in terms ofmean squared error(MSE), by shrinking them towards zero (or any other finite constant value). In other words, the improvement in the estimate from the corresponding reduction in the width of the confidence interval can outweigh the worsening of the estimate introduced by biasing the estimate towards zero (seebias-variance tradeoff). Assume that the expected value of the raw estimate is not zero and consider other estimators obtained by multiplying the raw estimate by a certain parameter. 
A value for this parameter can be specified so as to minimize the MSE of the new estimate. For this value of the parameter, the new estimate will have a smaller MSE than the raw one, and thus it has been improved. An effect here may be to convert anunbiasedraw estimate to an improved biased one. An example arises in the estimation of the populationvariancebysample variance. For a sample size ofn, the use of a divisorn−1 in the usual formula (Bessel's correction) gives an unbiased estimator, while other divisors have lower MSE, at the expense of bias. The optimal choice of divisor (weighting of shrinkage) depends on theexcess kurtosisof the population, as discussed atmean squared error: variance, but one can always do better (in terms of MSE) than the unbiased estimator; for the normal distribution a divisor ofn+1 gives one which has the minimum mean squared error. Types ofregressionthat involve shrinkage estimates includeridge regression, where coefficients derived from a regular least squares regression are brought closer to zero by multiplying by a constant (theshrinkage factor), andlasso regression, where coefficients are brought closer to zero by adding or subtracting a constant. The use of shrinkage estimators in the context of regression analysis, where there may be a large number of explanatory variables, has been described by Copas.[2]Here the values of the estimated regression coefficients are shrunk towards zero with the effect of reducing the mean square error of predicted values from the model when applied to new data. A later paper by Copas[3]applies shrinkage in a context where the problem is to predict a binary response on the basis of binary explanatory variables. Hausser and Strimmer "develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. 
Despite its simplicity, it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. ... [The] method is fully analytic and hence computationally inexpensive. Moreover, [the] procedure simultaneously provides estimates of the entropy and of the cell frequencies. The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). A corresponding R package 'entropy' was deposited in the R archive CRAN under the GNU General Public License."[4][5]
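The variance example above can be made concrete. For a normal sample of size n with σ = 1, the sum of squared deviations S follows a chi-squared distribution with n − 1 degrees of freedom, so E[S] = n − 1 and Var[S] = 2(n − 1), giving a closed-form MSE for the estimator S/d; the sketch below evaluates it.

```python
# Sketch: MSE of the variance estimator S/d for a normal sample
# (sigma = 1), where S = sum of squared deviations from the sample
# mean.  Since S ~ chi-squared(n-1):
#   MSE(d) = E[(S/d - 1)^2] = (2(n-1) + (n-1-d)**2) / d**2.
def mse(n: int, d: int) -> float:
    return (2 * (n - 1) + (n - 1 - d) ** 2) / d ** 2

n = 10
unbiased = mse(n, n - 1)   # Bessel's correction, divisor n-1
shrunk   = mse(n, n + 1)   # divisor n+1, optimal for the normal
print(unbiased, shrunk)    # 0.2222...  0.1818...
```

The divisor n + 1 shrinks the estimate toward zero, introducing bias, yet its MSE is lower than the unbiased estimator's, exactly the trade-off the article describes.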
https://en.wikipedia.org/wiki/Shrinkage_estimator
Ernest Vincent Wright (1872 – October 7, 1939)[1] was an American writer known for his book Gadsby, a 50,000-word novel which (except for four unintentional instances) does not use the letter E. The biographical details of his life are unclear. A 2002 article in the Village Voice by Ed Park said he might have been English by birth but was more probably American. The article said he might have served in the navy and that he has been incorrectly called a graduate of MIT. The article says that he attended a vocational high school attached to MIT in 1888, but there is no record that he graduated. Park said rumors that Wright died within hours of Gadsby being published are untrue.[2] In October 1930, Wright approached the Evening Independent newspaper and proposed it sponsor a lipogram writing competition, with $250 for the winner. In the letter, he boasted of the quality of Gadsby. The newspaper declined his offer.[3] A 2007 post on the Bookride blog about rare books says Wright spent five and a half months writing Gadsby on a typewriter with the "e" key tied down. According to the unsigned entry at Bookride, a warehouse holding copies of Gadsby burned down shortly after the book was printed, destroying "most copies of the ill-fated novel". The blog post says the book was never reviewed "and only kept alive by the efforts of a few avant-garde French intellos and assorted connoisseurs of the odd, weird and zany". The book's scarcity and oddness have seen copies priced at $4,000 by book dealers.[4] Wright completed a draft of Gadsby in 1936, during a nearly six-month stint at the National Military Home in California. He failed to find a publisher and used a self-publishing press to bring out the book.[4] Wright previously authored three other books: The Wonderful Fairies of the Sun (1896), The Fairies That Run the World and How They Do It (1903), and Thoughts and Reveries of an American Bluejacket (1918). His humorous poem, "When Father Carves the Duck", can be found in some anthologies.[5]
https://en.wikipedia.org/wiki/Ernest_Vincent_Wright
In mathematics, a subset R of the integers is called a reduced residue system modulo n if: every element of R is relatively prime to n; R contains φ(n) elements; and no two elements of R are congruent modulo n. Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are {13, 17, 19, 23} and {−11, −7, −5, −1}.
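The construction described above, filtering a complete residue system for totatives, can be sketched directly:

```python
# Sketch: build the canonical reduced residue system modulo n by
# removing from the complete residue system {0, ..., n-1} every
# integer not relatively prime to n.
from math import gcd

def reduced_residues(n):
    return [r for r in range(n) if gcd(r, n) == 1]

rrs = reduced_residues(12)
print(rrs, len(rrs))   # [1, 5, 7, 11] 4  -- and phi(12) = 4
```

The length of the resulting list equals φ(n), matching the cardinality claim in the article.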
https://en.wikipedia.org/wiki/Reduced_residue_system
In Internet culture, the 1% rule is a general rule of thumb pertaining to participation in an Internet community, stating that only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content. Similar rules are known in information science; for instance, the 80/20 rule known as the Pareto principle states that 20 percent of a group will produce 80 percent of the activity, regardless of how the activity is defined. According to the 1% rule, about 1% of Internet users create content, while 99% are just consumers of that content. For example, for every person who posts on a forum, generally about 99 other people view that forum but do not post. The term was coined by authors and bloggers Ben McConnell and Jackie Huba,[2] although there were earlier references to this concept[3] that did not use the name. The terms lurk and lurking, in reference to online activity, are used to refer to online observation without engaging others in the Internet community.[4] A 2007 study of radical jihadist Internet forums found 87% of users had never posted on the forums, 13% had posted at least once, 5% had posted 50 or more times, and only 1% had posted 500 or more times.[5] A 2014 peer-reviewed paper entitled "The 1% Rule in Four Digital Health Social Networks: An Observational Study" empirically examined the 1% rule in health-oriented online forums.
The paper concluded that the 1% rule was consistent across the four support groups, with a handful of "superusers" generating the vast majority of content.[6] A study later that year, from a separate group of researchers, replicated the 2014 van Mierlo study in an online forum for depression.[7] Results indicated that the distribution frequency of the 1% rule followed Zipf's law, which is a specific type of power law. The "90–9–1" version of this rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. However, the actual percentage is likely to vary depending upon the subject. For example, if a forum requires content submissions as a condition of entry, the percentage of people who participate will probably be significantly higher than 1%, but the content producers will still be a minority of users. This is validated in a study conducted by Michael Wu, who uses economics techniques to analyze participation inequality across hundreds of communities segmented by industry, audience type, and community focus.[8] The 1% rule is often misunderstood to apply to the Internet in general, but it applies more specifically to any given Internet community. It is for this reason that one can see evidence for the 1% principle on many websites, but aggregated together one can see a different distribution. This latter distribution is still unknown and likely to shift, but various researchers and pundits have speculated on how to characterize the sum total of participation.
Research in late 2012 suggested that only 23% of the population (rather than 90%) could properly be classified as lurkers, while 17% of the population could be classified as intense contributors of content.[9]Several years prior, results were reported on a sample of students from Chicago where 60% of the sample created content in some form.[10] A similar concept was introduced by Will Hill ofAT&T Laboratories[11]and later cited byJakob Nielsen; this was the earliest known reference to the term "participation inequality" in an online context.[12]The term regained public attention in 2006 when it was used in a strictly quantitative context within a blog entry on the topic of marketing.[2]
https://en.wikipedia.org/wiki/1%25_rule
Sturgeon's law(orSturgeon's revelation) is anadagestating "ninety percent of everything is crap". It was coined byTheodore Sturgeon, an Americanscience fiction authorand critic, and was inspired by his observation that, whilescience fictionwas often derided for its low quality by critics, most work in other fields was low-quality too, and so science fiction was no different.[1] Sturgeon deemed Sturgeon's law to mean "nothing is always absolutely so".[2]By this, he meant his observation (building on "Sturgeon's Revelation" that the majority of everything is of low quality) that the existence of a majority of low-quality content in every genre disproves the idea that any single genre is inherently low-quality. This adage previously appeared in his story "The Claustrophile" in a 1956 issue ofGalaxy.[3] The second adage, variously rendered as "ninety percent of everything is crud" or "ninety percent of everything is crap", was published as "Sturgeon's Revelation" in his book review column forVenture[4]in 1957. However, almost all modern uses of the term Sturgeon's law refer to the second,[citation needed]including the definition listed in theOxford English Dictionary.[5] According to science fiction authorWilliam Tenn, Sturgeon first expressed his law circa 1951, at a talk atNew York Universityattended by Tenn.[6]The statement was subsequently included in a talk Sturgeon gave at a 1953Labor Dayweekend session of theWorld Science Fiction ConventioninPhiladelphia.[7] The first written reference to the adage is in the September 1957 issue ofVenture: And on that hangs Sturgeon’s revelation. It came to him that [science fiction] is indeed ninety-percent crud, but that also – Eureka! –ninety-percent ofeverythingis crud. 
All things – cars, books, cheeses, hairstyles, people, and pins – are, to the expert and discerning eye, crud, except for the acceptable tithe which we each happen to like.[4] The adage appears again in the March 1958 issue of Venture, where Sturgeon wrote: It is in this vein that I repeat Sturgeon's Revelation, which was wrung out of me after twenty years of wearying defense of science fiction against attacks of people who used the worst examples of the field for ammunition, and whose conclusion was that ninety percent of S.F. is crud. In the 1870 novel Lothair, Benjamin Disraeli asserted: Nine-tenths of existing books are nonsense, and the clever books are the refutation of that nonsense.[9] A similar adage appears in Rudyard Kipling's The Light That Failed, published in 1890: Four-fifths of everybody's work must be bad. But the remnant is worth the trouble for its own sake.[10] George Orwell's 1946 essay Confessions of a Book Reviewer asserts about books: In much more than nine cases out of ten the only objectively truthful criticism would be "This book is worthless ..."[11] In 2009, a paper published in The Lancet estimated that over 85% of health and medical research is wasted.[12] In 2013, philosopher Daniel Dennett championed Sturgeon's law as one of his seven tools for critical thinking:[13] 90% of everything is crap. That is true, whether you are talking about physics, chemistry, evolutionary psychology, sociology, medicine – you name it – rock music, country western. 90% of everything is crap.[14]
https://en.wikipedia.org/wiki/Sturgeon%27s_law
The following tables provide a comparison of numerical analysis software. The operating systems the software can run on natively (without emulation) are indicated.
https://en.wikipedia.org/wiki/Comparison_of_numerical-analysis_software
A punched card sorter is a machine for sorting decks of punched cards. Sorting was a major activity in most facilities that processed data on punched cards using unit record equipment. The work flow of many processes required decks of cards to be put into some specific order as determined by the data punched in the cards. The same deck might be sorted differently for different processing steps. A popular family of sorters, the IBM 80 series sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings. The basic operation of a card sorter is to take a punched card, examine a single column, and place the card into a selected pocket. There are twelve rows on a punched card, and thirteen pockets in the sorter; one pocket is for blanks, rejects, and errors. (IBM 1962) Cards are normally passed through the sorter face down with the bottom edge ("9-edge") first. A small metal brush or optical sensor is positioned so that, as each card goes through the sorter, one column passes under the brush or optical sensor. The holes sensed in that column, together with the settings of the sorter controls, determine which pocket the card is to be directed to. This directing is done by slipping the card into a stack of metal strips (or chute blades) that run the length of the sorter feed mechanism. Each blade ends above one of the output pockets, and the card is thus routed to the designated pocket.[1] Multiple-column sorting was commonly done by first sorting on the least significant column, then proceeding, column by column, to the most significant column. This is called a least significant digit radix sort. Numeric columns have one punch in rows 0-9, possibly a sign overpunch in rows 11-12, and can be sorted in a single pass through the sorter.
Alphabetic columns have a zone punch in rows 12, 11, or 0 and a digit punch in one of the rows 1-9, and can be sorted by passing some or all of the cards through the sorter twice on that column. For more details of punched card codes, see punched card#IBM 80-column format and character codes. Several methods were used for alphabetical sorting, depending on the features provided by the particular sorter and the characteristics of the data to be sorted. A commonly used method on the 082 and earlier sorters was to sort the cards twice on the same column, first on digit rows 1-9, and then (after re-stacking) on the zone rows 12, 11, and 0. Operator switches allow zone-sorting by "switching off" rows 1-9 for the second pass of the card for each column. Other special characters and punctuation marks were added to the card code, involving as many as three punches per column (and, with the introduction of EBCDIC in 1964, as many as six punches per column). The 083 and 084 sorters recognized these multiple-digit or multiple-zone punches, sorting them to the error pocket. Original census sorting box, 1890, manual.[3] Sorting cards became an issue during the 1900 agricultural census, so Herman Hollerith's company developed the 1901 Hollerith Automatic Horizontal Sorter,[4] a sorter with horizontal pockets.[5] In 1908, he designed the more compact Hollerith 070 Vertical Sorting Machine,[6] which sorted 250 cards per minute.[3][5] The Type 71 Vertical Sorter came out in 1928. It had 12 pockets that could hold 80 cards. It could sort 150 cards per minute.[7] The Type 75, Model 1, 19??, 400 cards per minute.[3] The Type 75, Model 2, 19??, 250 cards per minute.[3] Card sorters in the IBM 80 series[8] included: In August 1957, a basic 082 rented for $55 per month; an 083 for twice that.
(IBM 1957) By 1969, only the 82, 83, and 84 were made available for rental by IBM.[10] In the early 2020s, TCG Machines introduced a card sorting machine to process trading card game cards.[11] The punched cards and brushes in these modern sorters have been replaced with image sensors (cameras) and computer vision technology, but their form and operation remain essentially identical to those of their historical predecessors.
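The least significant digit radix sort described above can be sketched in software, with each pass through the sorter modeled as a stable distribution into ten digit pockets followed by re-stacking (a minimal sketch; the blank/reject pocket and zone rows are omitted):

```python
def sorter_pass(cards, column):
    """One pass through the sorter: distribute cards by the digit in `column`."""
    pockets = [[] for _ in range(10)]          # pockets 0-9; order within a pocket is preserved
    for card in cards:
        pockets[int(card[column])].append(card)
    # re-stack the pockets in order: pocket 0 first, then 1, and so on
    return [card for pocket in pockets for card in pocket]

def radix_sort(cards, width):
    # least significant column first, proceeding to the most significant
    for column in range(width - 1, -1, -1):
        cards = sorter_pass(cards, column)
    return cards

deck = ["170", "045", "075", "090", "002", "802", "024", "066"]
print(radix_sort(deck, 3))   # ['002', '024', '045', '066', '075', '090', '170', '802']
```

Because each pass is stable, sorting the columns from least to most significant yields a fully sorted deck, exactly as with the physical machine.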
https://en.wikipedia.org/wiki/IBM_80_series_Card_Sorters
Agent-based computational economics (ACE) is the area of computational economics that studies economic processes, including whole economies, as dynamic systems of interacting agents. As such, it falls within the paradigm of complex adaptive systems.[1] In corresponding agent-based models, the "agents" are "computational objects modeled as interacting according to rules" over space and time, not real people. The rules are formulated to model behavior and social interactions based on incentives and information.[2] Such rules could also be the result of optimization, realized through use of AI methods (such as Q-learning and other reinforcement learning techniques).[3] As part of non-equilibrium economics,[4] the theoretical assumption of mathematical optimization by agents in equilibrium is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces.[5] ACE models apply numerical methods of analysis to computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use.[6] Starting from initial conditions specified by the modeler, the computational economy evolves over time as its constituent agents repeatedly interact with each other, including learning from interactions. In these respects, ACE has been characterized as a bottom-up, culture-dish approach to the study of economic systems.[7] ACE has a similarity to, and overlap with, game theory as an agent-based method for modeling social interactions.[8] But practitioners have also noted differences from standard methods: for example, events modeled in ACE are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable, and ACE facilitates the modeling of agent autonomy and learning.[9] The method has benefited from continuing improvements in modeling techniques of computer science and increased computer capabilities.
The ultimate scientific objective of the method is to "test theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before."[10] The subject has been applied to research areas like asset pricing,[11] energy systems,[12] competition and collaboration,[13] transaction costs,[14] market structure and industrial organization and dynamics,[15] welfare economics,[16] mechanism design,[17] information and uncertainty,[18] macroeconomics,[19] and Marxist economics.[20][21] The "agents" in ACE models can represent individuals (e.g. people), social groupings (e.g. firms), biological entities (e.g. growing crops), and/or physical systems (e.g. transport systems). The ACE modeler provides the initial configuration of a computational economic system comprising multiple interacting agents. The modeler then steps back to observe the development of the system over time without further intervention. In particular, system events should be driven by agent interactions without external imposition of equilibrium conditions.[22] Issues include those common to experimental economics in general,[23] the development of a common framework for empirical validation,[24] and the resolution of open questions in agent-based modeling.[25] ACE is an officially designated special interest group (SIG) of the Society for Computational Economics.[26] Researchers at the Santa Fe Institute have contributed to the development of ACE. One area where ACE methodology has frequently been applied is asset pricing. W. Brian Arthur, Eric Baum, William Brock, Cars Hommes, and Blake LeBaron, among others, have developed computational models in which many agents choose from a set of possible forecasting strategies in order to predict stock prices, which affects their asset demands and thus affects stock prices.
These models assume that agents are more likely to choose forecasting strategies which have recently been successful. The success of any strategy will depend on market conditions and also on the set of strategies that are currently being used. These models frequently find that large booms and busts in asset prices may occur as agents switch across forecasting strategies.[11][27][28] More recently, Brock, Hommes, and Wagener (2009) have used a model of this type to argue that the introduction of new hedging instruments may destabilize the market,[29] and some papers have suggested that ACE might be a useful methodology for understanding the 2008 financial crisis.[30][31][32] See also the discussion under Financial economics § Financial markets and § Departures from rationality.
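The feedback loop described above can be illustrated with a toy sketch. This is not a reproduction of the Brock–Hommes or Santa Fe models: agents merely pick between two forecasting rules, favouring whichever has recently predicted the price direction correctly, and their aggregate demand in turn moves the price. All parameter values are arbitrary choices for illustration.

```python
import random

FUNDAMENTAL = 100.0   # assumed fundamental value of the asset

def forecast(strategy, price):
    if strategy == "fundamentalist":   # expects reversion toward the fundamental value
        return price + 0.5 * (FUNDAMENTAL - price)
    return price + 0.5 * (price - FUNDAMENTAL)  # "chartist": expects the deviation to grow

def simulate(n_agents=100, steps=50, seed=1):
    random.seed(seed)
    strategies = ["fundamentalist"] * (n_agents // 2) + ["chartist"] * (n_agents - n_agents // 2)
    scores = {"fundamentalist": 0.0, "chartist": 0.0}
    price, path = 105.0, []
    for _ in range(steps):
        # each agent demands +1 if its rule forecasts a rise, -1 if a fall
        demand = sum(1 if forecast(s, price) > price else -1 for s in strategies)
        new_price = price + 0.01 * demand
        # score each rule by whether it called the direction of the move
        for s in scores:
            right = (forecast(s, price) > price) == (new_price > price)
            scores[s] = 0.9 * scores[s] + (1.0 if right else -1.0)
        # agents are more likely to adopt the recently more successful rule
        best = max(scores, key=scores.get)
        strategies = [best if random.random() < 0.8 else s for s in strategies]
        price = new_price
        path.append(price)
    return path

print(round(simulate()[-1], 2))
```

Even this stripped-down version shows the key property: the success of a rule depends on which rules the other agents are using, since the agents' own demand generates the price path they are trying to forecast.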
https://en.wikipedia.org/wiki/Agent-based_computational_economics
In computerized business management, single version of the truth (SVOT) is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form. This contrasts with the related concept of single source of truth (SSOT), which refers to the data storage principle of always sourcing a particular piece of information from one place.[citation needed] In some systems, and in the context of message processing systems (often real-time systems), this term also refers to the goal of establishing a single agreed sequence of messages within a database formed by a particular but arbitrary sequencing of records. The key concept is that data combined in a certain sequence is a "truth" which may be analyzed and processed giving particular results. Although the sequence is arbitrary (another correct but equally arbitrary sequencing would ultimately provide different results in any analysis), it is desirable to agree that the sequence enshrined in the "single version of the truth" is the version that will be considered "the truth", that any conclusions drawn from analysis of the database are valid and unarguable, and (in a technical context) that the database may be duplicated to a backup environment to ensure a persistent record is kept of the "single version of the truth". The key point is that when the database is created using an external data source (such as a sequence of trading messages from a stock exchange), an arbitrary selection is made of one possibility from two or more equally valid representations of the input data, but henceforth the decision sets "in stone" one and only one version of the truth. Critics of SVOT as applied to message sequencing argue that this concept is not scalable.
As the world moves towards systems spread over many processing nodes, the effort involved in negotiating a single agreed-upon sequence becomes prohibitive. But as pointed out by Owen Rubel in his APIWorld talk 'The New API Pattern', the SVOT is always going to be the service layer in a distributed architecture where input/output (I/O) meet; this is also where the endpoint binding belongs, to allow for modularization and better abstraction of the I/O data across the architecture and to avoid an architectural cross-cutting concern.[1]
https://en.wikipedia.org/wiki/Single_version_of_the_truth
In project management, scope is the defined features and functions of a product, or the scope of work needed to finish a project.[1] Scope involves getting the information required to start a project, including the features the product needs to meet its stakeholders' requirements.[2][3]: 116 Project scope is oriented towards the work required and methods needed, while product scope is more oriented toward functional requirements. If requirements are not completely defined and described, and if there is no effective change control in a project, scope or requirement creep may ensue.[4][5]: 434[3]: 13 Scope management is the process of defining[3]: 481–483 and managing the scope of a project to ensure that it stays on track, within budget, and meets the expectations of stakeholders.
https://en.wikipedia.org/wiki/Scope_(project_management)
In propositional logic, conjunction elimination (also called and elimination, ∧ elimination,[1] or simplification)[2][3][4] is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English: It's raining and it's cold outside. Therefore, it's raining. The rule consists of two separate sub-rules, which can be expressed in formal language as: (P ∧ Q) / ∴ P and (P ∧ Q) / ∴ Q. The two sub-rules together mean that, whenever an instance of "P ∧ Q" appears on a line of a proof, either "P" or "Q" can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule. The conjunction elimination sub-rules may be written in sequent notation: (P ∧ Q) ⊢ P and (P ∧ Q) ⊢ Q, where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∧ Q, and Q is also a syntactic consequence of P ∧ Q, in some logical system; and expressed as truth-functional tautologies or theorems of propositional logic: (P ∧ Q) → P and (P ∧ Q) → Q, where P and Q are propositions expressed in some formal system.
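The two sub-rules can be checked mechanically in a proof assistant; for instance, in Lean 4 they are exactly the two projections of a conjunction (a minimal sketch in Lean 4 syntax):

```lean
-- first sub-rule: from P ∧ Q infer P
example (P Q : Prop) (h : P ∧ Q) : P := h.left

-- second sub-rule: from P ∧ Q infer Q
example (P Q : Prop) (h : P ∧ Q) : Q := h.right

-- the same inferences via the explicit eliminators
example (P Q : Prop) (h : P ∧ Q) : P := And.left h
example (P Q : Prop) (h : P ∧ Q) : Q := And.right h
```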
https://en.wikipedia.org/wiki/Conjunction_elimination
No true Scotsman, or appeal to purity, is an informal fallacy in which one modifies a prior claim in response to a counterexample by asserting that the counterexample is excluded by definition.[1][2][3] Rather than admitting error or providing evidence to disprove the counterexample, the original claim is changed by using a non-substantive modifier such as "true", "pure", "genuine", "authentic", "real", or other similar terms.[4][2] Philosopher Bradley Dowden explains the fallacy as an "ad hoc rescue" of a refuted generalization attempt.[1] The following is a simplified rendition of the fallacy:[5] Person A: "No Scotsman puts sugar on his porridge." Person B: "But my uncle Angus is a Scotsman and he puts sugar on his porridge." Person A: "But no true Scotsman puts sugar on his porridge." The "no true Scotsman" fallacy is committed when the arguer satisfies the following conditions:[3][4][6] An appeal to purity is commonly associated with protecting a preferred group. Scottish national pride may be at stake if someone regularly considered to be Scottish commits a heinous crime. To protect people of Scottish heritage from a possible accusation of guilt by association, one may use this fallacy to deny that the group is associated with this undesirable member or action. "No true Scotsman would do something so undesirable"; i.e., the people who would do such a thing are tautologically (definitionally) excluded from being part of our group, such that they cannot serve as a counterexample to the group's good nature.[4] The description of the fallacy in this form is attributed to the British philosopher Antony Flew, who wrote in his 1966 book God & Philosophy: In this ungracious move a brash generalization, such as No Scotsmen put sugar on their porridge, when faced with falsifying facts, is transformed while you wait into an impotent tautology: if ostensible Scotsmen put sugar on their porridge, then this is by itself sufficient to prove them not true Scotsmen.
In his 1975 book Thinking About Thinking, Flew wrote:[4] Imagine some Scottish chauvinist settled down one Sunday morning with his customary copy of The News of the World. He reads the story under the headline, "Sidcup Sex Maniac Strikes Again". Our reader is, as he confidently expected, agreeably shocked: "No Scot would do such a thing!" Yet the very next Sunday he finds in that same favourite source a report of the even more scandalous on-goings of Mr Angus McSporran in Aberdeen. This clearly constitutes a counter example, which definitively falsifies the universal proposition originally put forward. ('Falsifies' here is, of course, simply the opposite of 'verifies'; and it therefore means 'shows to be false'.) Allowing that this is indeed such a counter example, he ought to withdraw; retreating perhaps to a rather weaker claim about most or some. But even an imaginary Scot is, like the rest of us, human; and none of us always does what we ought to do. So what he is in fact saying is: "No true Scotsman would do such a thing!" David P. Goldman, writing under his pseudonym "Spengler", compared distinguishing between "mature" democracies, which never start wars, and "emerging democracies", which may start them, with the "no true Scotsman" fallacy. Spengler alleges that political scientists have attempted to save the "US academic dogma" that democracies never start wars against other democracies from counterexamples by declaring any democracy which does indeed start a war against another democracy to be flawed, thus maintaining that no true and mature democracy starts a war against a fellow democracy.[5] Cognitive psychologist Steven Pinker has suggested that phrases like "no true Christian ever kills, no true communist state is repressive and no true Trump supporter endorses violence" exemplify the fallacy.[7]
https://en.wikipedia.org/wiki/No_true_Scotsman
The WDR paper computer or Know-how Computer is an educational model of a computer consisting, in the simplest case, only of a pen, a sheet of paper, and individual matches.[1] This allows anyone interested to learn how to program without having an electronic computer at their disposal. The paper computer was created in the early 1980s, when computer access was not yet widespread in Germany, to allow people to familiarize themselves with basic computer operation and assembly-like programming languages. It was distributed in over 400,000 copies and in its time was among the most widely circulated computers. The Know-how Computer was developed by Wolfgang Back and Ulrich Rohde and was first presented in the television program WDR Computerclub (broadcast by Westdeutscher Rundfunk) in 1983. It was also published in the German computer magazines mc and PC Magazin.[2] The original printed version of the paper computer has up to 21 lines of code on the left and eight registers on the right, which are represented as boxes that contain as many matches as the value in the corresponding register.[3] A pen is used to indicate the line of code which is about to be executed. The user steps through the program, adding and subtracting matches from the appropriate registers and following program flow until the stop instruction is encountered. The instruction set of five commands is small but Turing complete and therefore enough to represent all computable functions. In the original newspaper article about this computer, the instruction set was written slightly differently.[4] An emulator for Windows is available on Wolfgang Back's website,[5] and a JavaScript emulator also exists.[6] Emulators place fewer restrictions on line count or the number of registers, allowing longer and more complex programs. The paper computer's method of operation is nominally based on a register machine by Elmar Cohors-Fresenborg,[2][7] but follows more closely the approach of John Cedric Shepherdson and Howard E.
Sturgis in their Shepherdson–Sturgis register machine model.[8] A derived version of the paper computer is used as a "Know-How Computer" in Namibian school education.[9]
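The machine described above can be sketched as a small emulator. The mnemonics (inc, dec, jmp, tst, stp) follow common descriptions of the Know-how Computer, but the exact tst semantics are an assumption here: the sketch assumes tst skips the next line when the tested register is zero, and it numbers program lines from 0 rather than 1. Check against the original WDR instruction card before relying on details.

```python
def run(program, registers, max_steps=10_000):
    pc = 0                                   # "pen position": index into the program
    for _ in range(max_steps):
        op, *arg = program[pc]
        if op == "stp":
            return registers
        if op == "inc":
            registers[arg[0]] += 1           # add a match to the register's box
            pc += 1
        elif op == "dec":
            registers[arg[0]] -= 1           # remove a match from the box
            pc += 1
        elif op == "jmp":
            pc = arg[0]                      # move the pen to another line
        elif op == "tst":
            # ASSUMED semantics: skip the next line if the register is zero
            pc += 2 if registers[arg[0]] == 0 else 1
    raise RuntimeError("step limit exceeded")

# Addition: repeatedly move matches from register 1 into register 0.
add = [
    ("tst", 1),     # 0: register 1 empty? then skip the jump and stop
    ("jmp", 3),     # 1: not empty: continue the loop
    ("stp",),       # 2: done
    ("dec", 1),     # 3: take a match out of register 1 ...
    ("inc", 0),     # 4: ... and put one into register 0
    ("jmp", 0),     # 5: back to the test
]
print(run(add, [3, 4]))   # → [7, 0]
```

The addition program is the classic introductory exercise for register machines of this kind: it is a loop built entirely from the five paper-computer instructions.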
https://en.wikipedia.org/wiki/WDR_paper_computer
This list of stage names lists names used by those in the entertainment industry, alphabetically by the stage name's surname, followed by the birth name. Individuals who dropped their last name and substituted their middle name as their last name are listed. Those with a one-word stage name are listed in a separate article. In many cases, performers have legally changed their name to their stage name.[1] Note: Many cultures have their own naming customs and systems, some rather intricate. Minor changes or alterations, including reversing Eastern-style formats, do not in and of themselves qualify as stage names and should not normally be included. For example, Björk is not a stage name; it is part of her full Icelandic name, Björk Guðmundsdóttir. Her second name is a patronymic rather than a family name, following Icelandic naming conventions. People are not listed here if they fall into one or more of the following categories: Note: Elton John is listed here because he used the name professionally before he legally adopted it in 1972. Includes stage names that contain numbers or other non-alphabetic characters.
https://en.wikipedia.org/wiki/List_of_stage_names
nice is a program found on Unix and Unix-like operating systems such as Linux. It directly maps to a kernel call of the same name. nice is used to invoke a utility or shell script with a particular CPU priority, thus giving the process more or less CPU time than other processes. A niceness of −20 is the lowest niceness, or highest priority. The default niceness for processes is inherited from the parent process and is usually 0. Systems have diverged on what priority is the lowest. Linux systems document a niceness of 19 as the lowest priority;[1] BSD systems document 20 as the lowest priority.[2] In both cases, the "lowest" priority is documented as running only when nothing else wants to. The niceness value is a number attached to processes in *nix systems that is used, along with other data (such as the amount of I/O done by each process), by the kernel process scheduler to calculate a process's "true priority", which is used to decide how much CPU time is allocated to it. The program's name, nice, is an allusion to its task of modifying a process's niceness value. The term niceness itself originates from the idea that a process with a higher niceness value is nicer to other processes in the system and to users by virtue of demanding less CPU power, freeing up processing time and power for the more demanding programs, which would in this case be less nice to the system from a CPU usage perspective.[3] nice becomes useful when several processes are demanding more resources than the CPU can provide. In this state, a higher-priority process will get a larger chunk of the CPU time than a lower-priority process. Only the superuser (root) may set the niceness to a lower value (i.e. a higher priority).
On Linux it is possible to change /etc/security/limits.conf to allow other users or groups to set low nice values.[4] If a user wanted to compress a large file without slowing down other processes, they might run a command such as `nice -n 19 gzip bigfile`. The exact mathematical effect of setting a particular niceness value for a process depends on the details of how the scheduler is designed on that implementation of Unix. A particular operating system's scheduler will also have various heuristics built into it (e.g. to favor processes that are mostly I/O-bound over processes that are CPU-bound). As a simple example, when two otherwise identical CPU-bound processes are running simultaneously on a single-CPU Linux system, each one's share of the CPU time will be proportional to 20 − p, where p is the process's priority. Thus a process run with nice +15 will receive 25% of the CPU time allocated to a normal-priority process: (20 − 15)/(20 − 0) = 0.25.[5] On the BSD 4.x scheduler, on the other hand, the ratio in the same example is about ten to one.[citation needed] The related renice program can be used to change the priority of a process that is already running.[1] Linux also has an ionice program, which affects scheduling of I/O rather than CPU time.[6]
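The same niceness adjustment can be made from inside a running process: Python's os.nice() wraps the underlying nice(2) kernel call (Unix only), so the rules above can be observed directly:

```python
import os

# os.nice() ADDS its argument to the current niceness and returns the new
# value. As described above, an unprivileged process may only raise its
# niceness (i.e. lower its own priority); lowering it requires privileges.
before = os.nice(0)   # an increment of 0 just reads the current niceness
after = os.nice(5)    # become "nicer" by 5
print(before, after)
```

Starting from the usual default of 0, this prints `0 5`; the process and any children it subsequently spawns then run at the reduced priority.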
https://en.wikipedia.org/wiki/Nice_(Unix)
Interchange File Format (IFF) is a generic digital container file format originally introduced by Electronic Arts (in cooperation with Commodore) in 1985 to facilitate transfer of data between software produced by different companies. IFF files do not have any standard filename extension. On many systems that generate IFF files, file extensions are not important, because the operating system stores file format metadata separately from the file name. The .iff filename extension is commonly used for the ILBM image file format, which uses the IFF container format. Resource Interchange File Format is a format developed by Microsoft and IBM in 1991 that is based on IFF, except that the byte order has been changed to little-endian to match the x86 microprocessor architecture. Apple's Audio Interchange File Format (AIFF) is a big-endian audio file format developed from IFF. The TIFF image file format is not related to IFF. An IFF file is built up from chunks. Each chunk begins with what the specification calls a "Type ID" (what the Macintosh called an OSType, and Windows developers might call a FourCC). This is followed by a 32-bit signed integer (all integers in the IFF file structure are big-endian) specifying the size of the following data (the chunk content) in bytes.[1] Because the specification includes explicit lengths for each chunk, it is possible for a parser to skip over chunks that it either can't or doesn't care to process. This structure is closely related to the type–length–value (TLV) representation. There are predefined group chunks, with type IDs FORM, LIST and CAT.[NB 1] A FORM chunk is like a record structure, containing a type ID (indicating the record type) followed by nested chunks specifying the record fields. A LIST is a factoring structure containing a series of PROP (property) chunks plus nested group chunks to which those properties apply. A CAT is just a collection of nested chunks with no special semantics. Group chunks can contain other group chunks, depending on the needs of the application.
Group chunks, like their simpler counterparts, contain a length element. Skipping over a group can thus be done with a simple relative seek operation. Chunks must begin on even file offsets, as befits the origins of IFF on the Motorola 68000 processor, which couldn't address quantities larger than a byte at odd addresses. Thus chunks with odd lengths are "padded" to an even byte boundary by adding a so-called "pad byte" after their regular end. The top-level structure of an IFF file consists of exactly one of the group chunks FORM, LIST or CAT, where FORM is by far the most common. Each type of chunk typically has a different internal structure, which could be numerical data, text, or raw data. It is also possible to include other IFF files as if they were chunks (note that they have the same structure: four letters followed by a length), and some formats use this. There are standard chunks that could be present in any IFF file, such as AUTH (containing text with information about the author of the file), ANNO (containing text with an annotation, usually the name of the program that created the file), NAME (containing text with the name of the work in the file), VERS (containing the file version), and (c) (containing text with copyright information). There are also chunks that are common among a number of formats, such as CMAP, which holds the color palette in ILBM, ANIM and DR2D files (pictures, animations and vector pictures). There are chunks that have a common name but hold different data, such as BODY, which could store an image in an ILBM file and sound in an 8SVX file. And finally, there are chunks unique to their file type. Some programs that create IFF files add chunks to them with their internal data; these same files can later be read by other programs without any disruption (because their parsers can skip uninteresting chunks), which is a great advantage of IFF and similar formats.
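The chunk layout described above (four-byte type ID, big-endian 32-bit signed length, content, pad byte after odd lengths) is simple enough to read and write with a few lines of code. A minimal sketch, using a made-up "TEST" form type and hypothetical NAME/BODY contents for illustration:

```python
import struct

def make_chunk(type_id, content):
    """Assemble one chunk: type ID + big-endian signed length + content (+ pad)."""
    assert len(type_id) == 4
    pad = b"\x00" if len(content) % 2 else b""   # odd-sized chunks get a pad byte
    return type_id.encode("ascii") + struct.pack(">i", len(content)) + content + pad

def parse_chunks(data, offset=0, end=None):
    """Yield (type_id, content) pairs from a flat run of chunks."""
    end = len(data) if end is None else end
    while offset < end:
        type_id = data[offset:offset + 4].decode("ascii")
        (size,) = struct.unpack(">i", data[offset + 4:offset + 8])
        yield type_id, data[offset + 8:offset + 8 + size]
        offset += 8 + size + (size & 1)          # skip the pad byte after odd sizes

# Build a tiny FORM container holding two nested chunks.
body = b"TEST" + make_chunk("NAME", b"demo") + make_chunk("BODY", b"\x01\x02\x03")
form = make_chunk("FORM", body)

tid, content = next(parse_chunks(form))
print(tid, content[:4])                          # FORM b'TEST'
for cid, c in parse_chunks(content, offset=4):   # skip the 4-byte form type
    print(cid, c)
```

Because every chunk carries its own length, the parser can skip any chunk it does not recognize, which is exactly the extensibility property the format was designed around.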
https://en.wikipedia.org/wiki/Interchange_File_Format
The term "need to know" (alternatively spelled need-to-know), when used by governments and other organizations (particularly those related to the military[1] or intelligence[2]), describes the restriction of data which is considered very confidential and sensitive. Under need-to-know restrictions, even if one has all the necessary official approvals (such as a security clearance) to access certain information, one would not be given access to such information, or read into a clandestine operation, unless one has a specific need to know; that is, access to the information must be necessary for one to conduct one's official duties. The term also covers anyone whom the people holding the knowledge deem it necessary to share it with. As with most security mechanisms, the aim is to make unauthorized access difficult without inconveniencing legitimate access. Need-to-know also aims to discourage "browsing" of sensitive material by limiting access to the smallest possible number of people. The Battle of Normandy in 1944 is an example of a need-to-know restriction. Though thousands of military personnel were involved in planning the invasion, only a small number of them knew the entire scope of the operation; the rest were only informed of the data needed to complete a small part of the plan. The same is true of the Trinity project, the first test of a nuclear weapon in 1945. Like other security measures, need to know can be misused by persons who wish to refuse others access to information they hold in an attempt to increase their personal power, prevent unwelcome review of their work, or prevent embarrassment resulting from actions or thoughts. Need to know can also be invoked to hide illegal activities. This may be considered a necessary use, or a detrimental abuse, of such a policy, depending on one's perspective. Need to know can be detrimental to workers' efficiency.
Even when done in good faith, one might not be fully aware of who actually needs to know the information, resulting in inefficiencies as some people may inevitably withhold information that others require to perform their duty. The speed of computations with IBM mechanical calculators at Los Alamos dramatically increased after the calculators' operators were told what the numbers meant:[3] What they had to do was work on IBM machines – punching holes, numbers that they didn't understand. Nobody told them what it was. The thing was going very slowly. I said that the first thing there has to be is that these technical guys know what we're doing. Oppenheimer went and talked to the security and got special permission so I could give a nice lecture about what we were doing, and they were all excited: "We're fighting a war! We see what it is!" They knew what the numbers meant. If the pressure came out higher, that meant there was more energy released, and so on and so on. They knew what they were doing. Complete transformation! They began to invent ways of doing it better. They improved the scheme. They worked at night. They didn't need supervising in the night; they didn't need anything. They understood everything; they invented several of the programs that we used. The discretionary access control mechanisms of some operating systems can be used to enforce need to know.[4] In this case, the owner of a file determines whether another person should have access. Need to know is often applied concurrently with mandatory access control schemes,[5] in which the lack of an official approval (such as a clearance) may absolutely prohibit a person from accessing the information. This is because need to know can be a subjective assessment. Mandatory access control schemes can also audit accesses, in order to determine if need to know has been violated. The term is also used in the concept of graphical user interface design where computers are controlling complex equipment such as airplanes.
In this usage, when many different pieces of data are dynamically competing for finite user interface space, safety-related messages are given priority.
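The interplay between clearance checks and need-to-know lists described above can be sketched as a toy model. This is purely illustrative (the user names, level names, and function are invented, and real systems are far more involved): access is granted only when both the mandatory clearance check and the owner-maintained need-to-know list allow it.

```python
# Toy model of need-to-know layered on clearance levels (illustrative only).
CLEARANCE = {"public": 0, "secret": 1, "top-secret": 2}

def may_access(user, clearance, doc_level, doc_readers):
    """Grant access only if the clearance suffices (the mandatory check)
    AND the user is on the document's need-to-know list (set by its owner)."""
    cleared = CLEARANCE[clearance] >= CLEARANCE[doc_level]
    return cleared and user in doc_readers

# A fully cleared analyst is still denied without a specific need to know:
print(may_access("alice", "top-secret", "secret", {"bob"}))  # False
print(may_access("bob", "secret", "secret", {"bob"}))        # True
```

The two conditions are independent, mirroring the article's point that a clearance alone does not confer access.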
https://en.wikipedia.org/wiki/Need_to_know
The export of cryptography is the transfer from one country to another of devices and technology related to cryptography. In the early days of the Cold War, the United States and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly the Eastern bloc. All export of technology classed as 'critical' required a license. CoCom was organized to coordinate Western export controls. Many countries, notably those participating in the Wassenaar Arrangement, introduced restrictions. The Wassenaar restrictions were largely loosened in the late 2010s.[1][2]
https://en.wikipedia.org/wiki/Export_of_cryptography
In algebraic geometry, a universal homeomorphism is a morphism of schemes f : X → Y such that, for each morphism Y′ → Y, the base change X ×_Y Y′ → Y′ is a homeomorphism of topological spaces. A morphism of schemes is a universal homeomorphism if and only if it is integral, radicial and surjective.[1] In particular, a morphism locally of finite type is a universal homeomorphism if and only if it is finite, radicial and surjective. For example, an absolute Frobenius morphism is a universal homeomorphism.
https://en.wikipedia.org/wiki/Universal_homeomorphism
The following outline is provided as an overview of and topical guide to natural-language processing: natural-language processing – computer activity in which computers are entailed to analyze, understand, alter, or generate natural language. This includes the automation of any or all linguistic forms, activities, or methods of communication, such as conversation, correspondence, reading, written composition, dictation, publishing, translation, lip reading, and so on. Natural-language processing is also the name of the branch of computer science, artificial intelligence, and linguistics concerned with enabling computers to engage in communication using natural language(s) in all forms, including but not limited to speech, print, writing, and signing. Natural-language processing can be described as all of the following: The following technologies make natural-language processing possible: Natural-language processing contributes to, and makes use of (the theories, tools, and methodologies from), the following fields: Natural-language generation – the task of converting information from computer databases into readable human language. History of natural-language processing. The following natural-language processing toolkits are notable collections of natural-language processing software. They are suites of libraries, frameworks, and applications for symbolic, statistical natural-language and speech processing. Chatterbot – a text-based conversation agent that can interact with human users through some medium, such as an instant message service. Some chatterbots are designed for specific purposes, while others converse with human users on a wide range of topics.
https://en.wikipedia.org/wiki/Outline_of_natural_language_processing
This is an index to notable programming languages, in current or historical use. Dialects of BASIC (which have their own page), esoteric programming languages, and markup languages are not included. A programming language does not need to be imperative or Turing-complete, but must be executable; the list therefore does not include markup languages such as HTML or XML, but does include domain-specific languages such as SQL and its dialects.
https://en.wikipedia.org/wiki/List_of_programming_languages
Human physical appearance is the outward phenotype or look of human beings. There are functionally infinite variations in human phenotypes, though society reduces the variability to distinct categories. The physical appearance of humans, in particular those attributes which are regarded as important for physical attractiveness, is believed by anthropologists to significantly affect the development of personality and social relations. Many humans are acutely sensitive to their physical appearance.[1] Some differences in human appearance are genetic, others are the result of age, lifestyle or disease, and many are the result of personal adornment. Some people have linked some differences with ethnicity, such as skeletal shape, prognathism or elongated stride. Different cultures place different degrees of emphasis on physical appearance and its importance to social status and other phenomena. Various aspects are considered relevant to the physical appearance of humans. Humans are distributed across the globe except for Antarctica and form a variable species. In adults, the average weight varies from around 40 kg (88 pounds) for the smallest and most lightly built tropical people to around 80 kg (176 pounds) for the heavier northern peoples.[2] Size also varies between the sexes, with the sexual dimorphism in humans being more pronounced than that of chimpanzees, but less than the dimorphism found in gorillas.[3] The colouration of skin, hair and eyes also varies considerably, with darker pigmentation dominating in tropical climates and lighter in polar regions. The following are non-exhaustive lists of causes and kinds of variations which are completely or partially unintentional. Examples of unintentional causes of variation in body appearance: Examples of general anatomical or anthropometric variations: Examples of variations of specific body parts: There are also unconventional body and skin variations such as amputations or scars.
https://en.wikipedia.org/wiki/Human_appearance
An Irish bull is a ludicrous, incongruent or logically absurd statement, generally unrecognized as such by its author. The inclusion of the epithet Irish is a late addition.[1] John Pentland Mahaffy, Provost of Trinity College, Dublin, observed, "an Irish bull is always pregnant", i.e. with truthful meaning.[2] The "father" of the Irish bull is often said to be Sir Boyle Roche,[3] who once asked "Why should we put ourselves out of our way to do anything for posterity, for what has posterity ever done for us?"[4] Roche may have been Sheridan's model for Mrs Malaprop.[5] The derivation of "bull" in this sense is unclear. It may be related to Old French boul "fraud, deceit, trickery", Icelandic bull "nonsense", Middle English bull "falsehood", or the verb bull "befool, mock, cheat".[6] As the Oxford English Dictionary points out, the epithet "Irish" is a more recent addition, the original word bull for such nonsense having been traced back at least to the early 17th century.[1] By the late 19th century the expression Irish bull was well known, but writers were expressing reservations such as: "But it is a cruel injustice to poor Paddy to speak of the genuine 'bull' as something distinctly Irish, when countless examples of the same kind of blunder, not a whit less startling, are to be found elsewhere." The passage continues, presenting Scottish, English and French specimens in support.[7]
https://en.wikipedia.org/wiki/Irish_bull
Efferent coupling is a coupling metric in software development. It measures the number of data types a class knows about. This includes inheritance, interface implementation, parameter types, variable types, and exceptions. Robert C. Martin refers to this as fan-out; in his book Clean Architecture he describes it as outgoing dependencies: the number of classes inside a component that depend on classes outside the component.[1] This metric is often used to calculate the instability of a component in software architecture as I = fan-out / (fan-in + fan-out). Instability has the range [0, 1]: I = 0 indicates a maximally stable component, while I = 1 indicates a maximally unstable one.
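The instability formula above is simple enough to state directly in code. A minimal sketch (the function name is ours) that evaluates I = fan-out / (fan-in + fan-out):

```python
def instability(fan_in, fan_out):
    """I = fan_out / (fan_in + fan_out); 0 is maximally stable, 1 maximally unstable."""
    total = fan_in + fan_out
    # Zero-dependency edge case is our own convention; the ratio is
    # undefined when a component has no dependencies at all.
    return fan_out / total if total else 0.0

# A component with 3 outgoing and 1 incoming dependency:
print(instability(fan_in=1, fan_out=3))  # 0.75
```

A component depended on by many others but depending on little (high fan-in, low fan-out) scores near 0, matching the intuition that such components are costly to change and therefore "stable".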
https://en.wikipedia.org/wiki/Efferent_coupling
This is a list of transforms in mathematics. These transforms have a continuous frequency domain:
https://en.wikipedia.org/wiki/List_of_transforms
The timeline of historic inventions is a chronological list of particularly significant technological inventions and their inventors, where known.[a] This page lists nonincremental inventions that are widely recognized by reliable sources as having had a direct impact on the course of history that was profound, global, and enduring. The dates in this article make frequent use of the units mya and kya, which refer to millions and thousands of years ago, respectively. The dates listed in this section refer to the earliest evidence of an invention found and dated by archaeologists (or, in a few cases, suggested by indirect evidence). Dates are often approximate and change as more research is done, reported and seen. Older examples of any given technology are often found. The locations listed are for the site where the earliest solid evidence has been found, but especially for the earlier inventions, there is little certainty how close that may be to where the invention took place. The Lower Paleolithic period lasted over 3 million years, during which many human-like species evolved, including, toward the end of this period, Homo sapiens. The original divergence between humans and chimpanzees occurred about 13 mya; however, interbreeding continued until as recently as 4 mya, with the first species clearly belonging to the human (and not chimpanzee) lineage being Australopithecus anamensis. Some species are controversial among paleoanthropologists, who disagree whether they are species in their own right or not. Here Homo ergaster is included under Homo erectus, while Homo rhodesiensis is included under Homo heidelbergensis. During this period the Quaternary glaciation began (about 2.58 million years ago), and it continues to today. It has been an ice age, with cycles of 40,000–100,000 years alternating between long, cold, more glaciated periods and shorter, warmer interglacial episodes. The evolution of early modern humans around 300 kya coincides with the start of the Middle Paleolithic period.
During this 250,000-year period, our related archaic humans such as Neanderthals and Denisovans began to spread out of Africa, joined later by Homo sapiens. Over the course of the period we see evidence of increasingly long-distance trade, religious rites, and other behavior associated with behavioral modernity. 50 kya was long regarded as the beginning of behavioral modernity, which defined the Upper Paleolithic period. The Upper Paleolithic lasted nearly 40,000 years, and research continues to push the beginnings of behavioral modernity earlier into the Middle Paleolithic. Behavioral modernity is characterized by the widespread observation of religious rites, artistic expression and the appearance of tools made for purely intellectual or artistic pursuits. The end of the Last Glacial Period ("ice age") and the beginning of the Holocene around 11.7 kya coincide with the Agricultural Revolution, marking the beginning of the agricultural era, which persisted until the Industrial Revolution.[94] During the Neolithic period, lasting 8,400 years, stone began to be used for construction and remained a predominant hard material for toolmaking. Copper and arsenic bronze were developed towards the end of this period, and of course the use of many softer materials such as wood, bone, and fibers continued. Domestication spread both in the sense of how many species were domesticated and of how widespread the practice became. The beginning of bronze-smelting coincides with the emergence of the first cities and of writing in the Ancient Near East and the Indus Valley. The Bronze Age started in Eurasia in the 4th millennium BC and ended, in Eurasia, c. 1200 BC. The Late Bronze Age collapse occurred around 1200 BC,[220] extinguishing most Bronze Age Near Eastern cultures and significantly weakening the rest. This is coincident with the complete collapse of the Indus Valley Civilisation. This event is followed by the beginning of the Iron Age.
We define the Iron Age as ending in 510 BC for the purposes of this article, even though the typical definition is region-dependent (e.g. 510 BC in Greece, 322 BC in India, 200 BC in China); it is thus an 800-year period.[e][394][395]
https://en.wikipedia.org/wiki/Timeline_of_historic_inventions
The Circuit Value Problem (or Circuit Evaluation Problem) is the computational problem of computing the output of a given Boolean circuit on a given input. The problem is complete for P under uniform AC0 reductions. Note that, in terms of time complexity, it can be solved in linear time simply by a topological sort. The Boolean Formula Value Problem (or Boolean Formula Evaluation Problem) is the special case of the problem when the circuit is a tree. The Boolean Formula Value Problem is complete for NC1.[1] The problem is closely related to the Boolean Satisfiability Problem, which is complete for NP, and its complement, the Propositional Tautology Problem, which is complete for co-NP.
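The linear-time evaluation mentioned above amounts to a single pass over the gates once they are listed in topological order. A small sketch (the gate encoding is our own):

```python
def evaluate_circuit(gates, inputs):
    """Evaluate a Boolean circuit whose gates are given in topological order.

    gates: list of (name, op, operands) with op in {"AND", "OR", "NOT"};
    inputs: dict mapping input wire names to booleans.
    """
    value = dict(inputs)
    for name, op, args in gates:          # one O(1) step per gate
        if op == "AND":
            value[name] = value[args[0]] and value[args[1]]
        elif op == "OR":
            value[name] = value[args[0]] or value[args[1]]
        elif op == "NOT":
            value[name] = not value[args[0]]
    return value

circuit = [("g1", "AND", ("x", "y")),
           ("g2", "NOT", ("g1",)),
           ("g3", "OR", ("g2", "x"))]
print(evaluate_circuit(circuit, {"x": True, "y": False})["g3"])  # True
```

Because every gate is visited exactly once and each visit does constant work, the whole evaluation is linear in the circuit size, as the article notes.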
https://en.wikipedia.org/wiki/Circuit_Value_Problem
Responsive computer-aided design (also simplified to responsive design) is an approach to computer-aided design (CAD) that utilizes real-world sensors and data to modify a three-dimensional (3D) computer model. The concept is related to cyber-physical systems through its blurring of the virtual and physical worlds; however, it applies specifically to the initial digital design of an object prior to production. The process begins with a designer creating a basic design of an object using CAD software with parametric or algorithmic relationships. These relationships are then linked to physical sensors, allowing the sensors to drive changes to the CAD model within the established parameters. Reasons to allow sensors to modify a CAD model include customizing a design to fit a user's anthropometry, assisting people without CAD skills to personalize a design, or automating part of an iterative design process in similar fashion to generative design. Once the sensors have affected the design, it may then be manufactured as a one-off piece using a digital fabrication technology, or go through further development by a designer. Responsive computer-aided design is enabled by ubiquitous computing and the Internet of Things, concepts which describe the capacity for everyday objects to contain computing and sensing technologies. It is also enabled by the ability to directly manufacture one-off objects from digital data, using technologies such as 3D printing and computer numerical control (CNC) machines. Such digital fabrication technologies allow for customization, and are drivers of the mass-customization phenomenon.[1][2] They also provide new opportunities for consumers to participate in the design process, known as co-design. As these concepts mature, responsive design is emerging as an opportunity to reduce reliance on graphical user interfaces (GUIs) as the only method for designers and consumers to design products,[3] aligning with claims by Golden Krishna that "the best design reduces work.
The best computer is unseen. The best interaction is natural. The best interface is no interface."[4] Calls to reduce reliance on GUIs and to automate some of the design process connect with Mark Weiser's original vision of ubiquitous computing.[5] A variety of similar research areas are based on gesture recognition, with many projects using motion capture to track the physical motions of a designer and translate them into three-dimensional geometry suitable for digital fabrication.[6][7] While these share similarities with responsive design through their cyber-physical systems, they require direct intent to design an object and some level of skill. They are not considered responsive, as responsive design occurs autonomously and may even occur without the user being aware that they are designing at all. The topic has some common traits with responsive web design and responsive architecture, with both fields focused on systems design and adaptation based on functional conditions. Responsive computer-aided design has been used to customize fashion, and is currently an active area of research in footwear by large companies like New Balance, who are looking to customize shoe midsoles using foot pressure data from customers.[8] Sound waves have also been popular for customizing 3D models, producing sculptural forms of a baby's first cries[9] or a favorite song.[10]
https://en.wikipedia.org/wiki/Responsive_computer-aided_design
The delegate model of representation is a model of a representative democracy. In this model, constituents elect their representatives as delegates for their constituency. These delegates act only as a mouthpiece for the wishes of their constituency/state and have no autonomy from the constituency, only the autonomy to vote for the actual representatives of the state. This model does not afford representatives the luxury of acting according to their own conscience; they are bound by imperative mandate. Essentially, the representative acts as the voice of those who are (literally) not present. Irish philosopher Edmund Burke (1729–1797) contested this model and supported the alternative trustee model of representation.[1] The delegate model of representation is made use of in various forms of council democracy and commune democracy. Models of democratic rule making extensive use of the delegate model of representation are often labeled "delegative democracy".[2][3] However, the merging of these two terms has been criticized as misleading.[4]
https://en.wikipedia.org/wiki/Delegate_model_of_representation
Concept mining is an activity that results in the extraction of concepts from artifacts. Solutions to the task typically involve aspects of artificial intelligence and statistics, such as data mining and text mining.[1][2] Because artifacts are typically a loosely structured sequence of words and other symbols (rather than concepts), the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents. Traditionally, the conversion of words to concepts has been performed using a thesaurus,[3] and for computational techniques the tendency is to do the same. The thesauri used are either specially created for the task or a pre-existing language model, usually related to Princeton's WordNet. The mappings of words to concepts[4] are often ambiguous: typically, each word in a given language will relate to several possible concepts. Humans use context to disambiguate the various meanings of a given piece of text, but machine translation systems cannot easily infer context. For the purposes of concept mining, however, these ambiguities tend to be less important than they are with machine translation, for in large documents the ambiguities tend to even out, much as is the case with text mining. There are many techniques for disambiguation that may be used. Examples are linguistic analysis of the text and the use of word and concept association frequency information that may be inferred from large text corpora. Recently, techniques based on semantic similarity between the possible concepts and the context have appeared and gained interest in the scientific community. One of the spin-offs of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based on hypernymy and meronymy. These structures can be used to generate simple tree-membership statistics that can be used to locate any document in a Euclidean concept space.
If the size of a document is also considered as another dimension of this space, then an extremely efficient indexing system can be created. This technique is currently in commercial use locating similar legal documents in a 2.5 million document corpus. Standard numeric clustering techniques may be used in "concept space" as described above to locate and index documents by the inferred topic. These are numerically far more efficient than their text mining cousins, and tend to behave more intuitively, in that they map better to the similarity measures a human would generate.
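The tree-membership statistics described above can be illustrated with a toy hypernym tree. The thesaurus here is invented for the example (a real system would use a resource such as WordNet): each word contributes a count to every concept on its path to the root, and the resulting counts place the document in a concept space.

```python
# Hypothetical miniature thesaurus mapping each concept to its hypernym.
HYPERNYM = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
            "trout": "fish", "fish": "animal", "animal": None}

def concept_counts(words):
    """Count how often each concept subtree is touched by the document."""
    counts = {}
    for word in words:
        concept = word
        while concept is not None:        # walk up the hypernym tree
            counts[concept] = counts.get(concept, 0) + 1
            concept = HYPERNYM.get(concept)
    return counts

print(concept_counts(["dog", "cat", "trout"])["animal"])  # 3
```

Two documents about pets and one about fishing would land near each other on the "animal" axis but apart on "mammal" versus "fish", which is the intuition behind clustering in concept space rather than word space.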
https://en.wikipedia.org/wiki/Concept_mining
Upper and lower probabilities are representations of imprecise probability. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, this method uses two numbers: the upper probability of the event and the lower probability of the event. Because frequentist statistics disallows metaprobabilities,[citation needed] frequentists have had to propose new solutions. Cedric Smith and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's theory further, and it is now known as Dempster–Shafer theory or Choquet (1953). More precisely, in the work of these authors one considers, on a power set P(S), a mass function m : P(S) → R satisfying the conditions m(∅) = 0 and Σ_{A⊆S} m(A) = 1. In turn, a mass function is associated with two non-additive continuous measures called belief and plausibility, defined as follows: bel(A) = Σ_{B⊆A} m(B) and pl(A) = Σ_{B∩A≠∅} m(B). In the case where S is infinite there can be belief functions bel with no associated mass function; see p. 36 of Halpern (2003). Probability measures are a special case of belief functions in which the mass function assigns positive mass to singletons of the event space only. A different notion of upper and lower probabilities is obtained from the lower and upper envelopes of a class C of probability distributions, by setting P_lower(A) = inf_{p∈C} p(A) and P_upper(A) = sup_{p∈C} p(A). Upper and lower probabilities are also related to probabilistic logic: see Gerla (1994). Observe also that a necessity measure can be seen as a lower probability and a possibility measure can be seen as an upper probability.
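With focal sets represented as frozensets, belief and plausibility reduce to two sums over the mass function. A minimal sketch (the mass values here are invented for illustration):

```python
def belief(mass, event):
    """bel(A): total mass of focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """pl(A): total mass of focal sets intersecting A."""
    return sum(m for focal, m in mass.items() if focal & event)

# Hypothetical mass function on S = {a, b, c}:
m = {frozenset("a"): 0.3, frozenset("ab"): 0.5, frozenset("abc"): 0.2}
print(belief(m, frozenset("ab")))        # 0.8
print(plausibility(m, frozenset("b")))   # 0.7
```

Note that bel(A) ≤ pl(A) always holds, which is why the pair behaves as a lower and an upper probability for the event A.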
https://en.wikipedia.org/wiki/Upper_and_lower_probabilities
In computing, dataflow is a broad concept which has various meanings depending on the application and context. In the context of software architecture, data flow relates to stream processing or reactive programming. Dataflow computing is a software paradigm based on the idea of representing computations as a directed graph, where nodes are computations and data flows along the edges.[1] Dataflow can also be called stream processing or reactive programming.[2] There have been multiple data-flow/stream processing languages of various forms (see Stream processing). Data-flow hardware (see Dataflow architecture) is an alternative to the classic von Neumann architecture. The most obvious example of data-flow programming is the subset known as reactive programming with spreadsheets. As a user enters new values, they are instantly transmitted to the next logical "actor" or formula for calculation. Distributed data flows have also been proposed as a programming abstraction that captures the dynamics of distributed multi-protocols. The data-centric perspective characteristic of data-flow programming promotes high-level functional specifications and simplifies formal reasoning about system components. Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of the Massachusetts Institute of Technology (MIT) pioneered the field of static dataflow architectures. Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously because the simple tags could not differentiate between them. Designs that use content-addressable memory are called dynamic dataflow machines by Arvind. They use tags in memory to facilitate parallelism. Data flows around the computer through the components of the computer: it is entered from input devices and can leave through output devices (printers etc.).
A dataflow network is a network of concurrently executing processes or automata that can communicate by sending data over channels (see message passing). In Kahn process networks, named after Gilles Kahn, the processes are determinate. This implies that each determinate process computes a continuous function from input streams to output streams, and that a network of determinate processes is itself determinate, thus computing a continuous function. This implies that the behavior of such networks can be described by a set of recursive equations, which can be solved using fixed point theory. In data-flow diagrams, the movement and transformation of the data is represented by a series of shapes and lines.
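The determinate, stream-to-stream character of Kahn-style processes can be sketched with Python generators. This is a loose analogy rather than a full KPN runtime, and the process names are invented: each process consumes its input streams and produces an output stream that depends only on the input history.

```python
def doubler(stream):
    """A process that maps each input token x to 2*x on its output channel."""
    for x in stream:
        yield 2 * x

def adder(left, right):
    """A process that reads one token from each input and emits their sum."""
    for a, b in zip(left, right):
        yield a + b

# Wire the processes into a small network: adder(doubler(source1), source2).
result = list(adder(doubler(iter(range(5))), iter(range(5))))
print(result)  # [0, 3, 6, 9, 12]
```

Because each process is deterministic and communicates only through its streams, the network as a whole computes a single determinate function of its inputs, matching the property described above.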
https://en.wikipedia.org/wiki/Data_flow
In software engineering and development, a software metric is a standard of measure of a degree to which a software system or process possesses some property.[1][2] Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), often the two terms are used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments. Common software measurements include: As software development is a complex process, with high variance in both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to the detailed design.
Another source of difficulty and debate is in determining which metrics matter, and what they mean.[8][9] The practical utility of software measurements has therefore been limited to the following domains: A specific measurement may target one or more of the above aspects, or the balance between them, for example as an indicator of team motivation or project performance.[10] Additionally, metrics vary between static and dynamic program code, as well as for object-oriented software (systems).[11][12] Some software development practitioners point out that simplistic measurements can cause more harm than good.[13] Others have noted that metrics have become an integral part of the software development process.[8] The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and attempts to cheat the metrics, while others find that it has a positive impact on the value developers place on their own work, and prevents them from being undervalued. Some argue that the definitions of many measurement methodologies are imprecise, and consequently it is often unclear how tools for computing them arrive at a particular result,[14] while others argue that imperfect quantification is better than none ("You can't control what you can't measure.").[15] Evidence shows that software metrics are widely used by government agencies, the US military, NASA,[16] IT consultants, academic institutions,[17] and commercial and academic development estimation software.
https://en.wikipedia.org/wiki/Software_metric
Frequent pattern discovery (or FP discovery, FP mining, or frequent itemset mining) is part of knowledge discovery in databases, Massive Online Analysis, and data mining; it describes the task of finding the most frequent and relevant patterns in large datasets.[1][2] The concept was first introduced for mining transaction databases.[3] Frequent patterns are defined as subsets (itemsets, subsequences, or substructures) that appear in a data set with frequency no less than a user-specified or auto-determined threshold.[2][4] Techniques for FP mining include: For the most part, FP discovery can be done using association rule learning with particular algorithms Eclat, FP-growth and the Apriori algorithm. Other strategies include: and respective specific techniques. Implementations exist for various machine learning systems or modules, like MLlib for Apache Spark.[5]
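The Apriori pruning idea (a (k+1)-itemset can be frequent only if it is built from frequent k-itemsets) can be sketched in a few lines. The transactions are toy data and the code omits all of the algorithm's usual optimizations:

```python
def apriori(transactions, min_support):
    """Return every itemset whose support count reaches min_support."""
    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {}
    candidates = list({frozenset([i]) for t in transactions for i in t})
    while candidates:
        survivors = [s for s in candidates if support(s) >= min_support]
        for s in survivors:
            frequent[s] = support(s)
        # Join step: grow surviving k-itemsets into (k+1)-item candidates,
        # so no candidate contains an infrequent subset from this level.
        candidates = list({a | b for a in survivors for b in survivors
                           if len(a | b) == len(a) + 1})
    return frequent

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = apriori(tx, min_support=2)
print(freq[frozenset({"a", "b"})])  # 2
```

With this data, every pair of items is frequent at threshold 2 but the triple {a, b, c} appears only once, so it is pruned, which is exactly the "frequency no less than a threshold" criterion above.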
https://en.wikipedia.org/wiki/Frequent_pattern_mining
In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition.[1] This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!"[2] or "∃=1". It is defined to mean that there exists an object with the given property, and all objects with this property are equal. For example, the formal statement ∃! n ∈ ℕ (n − 2 = 4) may be read as "there is exactly one natural number n such that n − 2 = 4". The most common technique to prove the unique existence of an object is to first prove the existence of the entity with the desired condition, and then to prove that any two such entities (say, a and b) must be equal to each other (i.e. a = b). For example, to show that the equation x + 2 = 5 has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation 3 + 2 = 5 holds. To establish the uniqueness of the solution, one would proceed by assuming that there are two solutions, namely a and b, satisfying x + 2 = 5. That is, a + 2 = 5 and b + 2 = 5. Then, since equality is a transitive relation, a + 2 = b + 2. Subtracting 2 from both sides then yields a = b, which completes the proof that 3 is the unique solution of x + 2 = 5. In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven in order to conclude that there exists exactly one object satisfying a given condition. An alternative way to prove uniqueness is to prove that there exists an object a satisfying the condition, and then to prove that every object satisfying the condition must be equal to a.
Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula ∃!x P(x) to mean[3] ∃x (P(x) ∧ ¬∃y (P(y) ∧ y ≠ x)), which is logically equivalent to ∃x (P(x) ∧ ∀y (P(y) → y = x)). An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is ∃x P(x) ∧ ∀y ∀z ((P(y) ∧ P(z)) → y = z). Another equivalent definition, which has the advantage of brevity, is ∃x ∀y (P(y) ↔ y = x). The uniqueness quantification can be generalized into counting quantification (or numerical quantification[4]). This includes both quantification of the form "exactly k objects exist such that …" as well as "infinitely many objects exist such that …" and "only finitely many objects exist such that …". The first of these forms is expressible using ordinary quantifiers, but the latter two cannot be expressed in ordinary first-order logic.[5] Uniqueness depends on a notion of equality. Loosening this to a coarser equivalence relation yields quantification of uniqueness up to that equivalence (under this framework, regular uniqueness is "uniqueness up to equality"). This is called essentially unique. For example, many concepts in category theory are defined to be unique up to isomorphism. The exclamation mark ! can also be used as a separate quantification symbol, so (∃!x. P(x)) ↔ ((∃x. P(x)) ∧ (!x. P(x))), where (!x. P(x)) := (∀a ∀b. P(a) ∧ P(b) → a = b). For example, it can safely be used in the replacement axiom instead of ∃!.
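Over a finite domain, unique existence is directly checkable: collect the witnesses and test that there is exactly one. A small sketch (the function name is ours):

```python
def exists_unique(pred, domain):
    """∃!x P(x) over a finite domain: exactly one element satisfies pred."""
    witnesses = [x for x in domain if pred(x)]
    return len(witnesses) == 1

# n - 2 == 4 has the unique solution n = 6 among the naturals below 10:
print(exists_unique(lambda n: n - 2 == 4, range(10)))  # True
print(exists_unique(lambda n: n % 2 == 0, range(10)))  # False: many witnesses
print(exists_unique(lambda n: n < 0, range(10)))       # False: no witness
```

The check mirrors the two-part proof pattern in the article: existence (at least one witness) and uniqueness (no two distinct witnesses) must both hold.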
https://en.wikipedia.org/wiki/Uniqueness_quantifier
In mathematics, for example in the study of statistical properties of graphs, a null model is a type of random object that matches one specific object in some of its features, or more generally satisfies a collection of constraints, but which is otherwise taken to be an unbiasedly random structure. The null model is used as a term of comparison, to verify whether the object in question displays some non-trivial features (properties that wouldn't be expected on the basis of chance alone or as a consequence of the constraints), such as community structure in graphs. An appropriate null model behaves in accordance with a reasonable null hypothesis for the behavior of the system under investigation. One null model of utility in the study of complex networks is that proposed by Newman and Girvan, consisting of a randomized version of an original graph G, produced through edges being rewired at random, under the constraint that the expected degree of each vertex matches the degree of the vertex in the original graph.[1] The null model is the basic concept behind the definition of modularity, a function which evaluates the goodness of partitions of a graph into clusters. In particular, given a graph G and a specific community partition σ : V(G) → {1, ..., b} (an assignment of a community index σ(v), here taken as an integer from 1 to b, to each vertex v ∈ V(G) in the graph), the modularity measures the difference between the number of links from/to each pair of communities, from that expected in a graph that is completely random in all respects other than the set of degrees of each of the vertices (the degree sequence).
In other words, the modularity contrasts the exhibited community structure in G with that of a null model, which in this case is the configuration model (the maximally random graph subject to a constraint on the degree of each vertex).
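The modularity comparison against the configuration-model null model can be sketched in a few lines; the toy graph below (two triangles joined by a bridge) is invented for illustration:

```python
def modularity(adj, communities):
    """Newman-Girvan modularity: observed within-community edge weight minus
    the expectation under the configuration-model null model."""
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)  # twice the number of edges
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Hypothetical toy graph: two triangles joined by one bridge edge (2 -- 3).
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(modularity(adj, [0, 0, 0, 1, 1, 1]))  # ~0.357: clear community structure
print(modularity(adj, [0, 0, 0, 0, 0, 0]))  # ~0: the trivial partition
```

A positive value means the partition has more internal edges than the null model predicts from the degree sequence alone.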
https://en.wikipedia.org/wiki/Null_model
In mathematics, Grothendieck's six operations, named after Alexander Grothendieck, is a formalism in homological algebra, also known as the six-functor formalism.[1] It originally sprang from the relations in étale cohomology that arise from a morphism of schemes f : X → Y. The basic insight was that many of the elementary facts relating cohomology on X and Y were formal consequences of a small number of axioms. These axioms hold in many cases completely unrelated to the original context, and therefore the formal consequences also hold. The six operations formalism has since been shown to apply to contexts such as D-modules on algebraic varieties, sheaves on locally compact topological spaces, and motives. The operations are six functors. Usually these are functors between derived categories and so are actually left and right derived functors. The functors f^* and f_* form an adjoint functor pair, as do f_! and f^!.[2] Similarly, internal tensor product is left adjoint to internal Hom. Let f : X → Y be a morphism of schemes. The morphism f induces several functors. Specifically, it gives adjoint functors f^* and f_* between the categories of sheaves on X and Y, and it gives the functor f_! of direct image with proper support. In the derived category, Rf_! admits a right adjoint f^!. Finally, when working with abelian sheaves, there is a tensor product functor ⊗ and an internal Hom functor, and these are adjoint. The six operations are the corresponding functors on the derived category: Lf^*, Rf_*, Rf_!, f^!, ⊗^L, and RHom. Suppose that we restrict ourselves to a category of ℓ-adic torsion sheaves, where ℓ is coprime to the characteristic of X and of Y.
In SGA 4 III, Grothendieck and Artin proved that if f is smooth of relative dimension d, then Lf^* is isomorphic to f^!(−d)[−2d], where (−d) denotes the d-th inverse Tate twist and [−2d] denotes a shift in degree by −2d. Furthermore, suppose that f is separated and of finite type. If g : Y′ → Y is another morphism of schemes, if X′ denotes the base change of X by g, and if f′ and g′ denote the base changes of f and g by g and f, respectively, then there exist natural base-change isomorphisms. Again assuming that f is separated and of finite type, for any objects M in the derived category of X and N in the derived category of Y, there exist natural projection-formula isomorphisms. If i is a closed immersion of Z into S with complementary open immersion j, then there is a distinguished triangle in the derived category, where the first two maps are the counit and unit, respectively, of the adjunctions. If Z and S are regular, then there is an isomorphism, where 1_Z and 1_S are the units of the tensor product operations (which vary depending on which category of ℓ-adic torsion sheaves is under consideration). If S is regular and g : X → S, and if K is an invertible object in the derived category on S with respect to ⊗^L, then define D_X to be the functor RHom(—, g^!K). Then, for objects M and M′ in the derived category on X, the canonical maps M → D_X D_X M are isomorphisms. Finally, if f : X → Y is a morphism of S-schemes, and if M and N are objects in the derived categories of X and Y, then there are natural isomorphisms relating duality to the other operations.
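For f separated of finite type and a cartesian base-change square as above, the standard six-functor compatibilities can be written schematically as follows (a sketch in the usual notation; the precise statements are those of SGA 4):

```latex
% Base change, for the cartesian square with f' : X' -> Y' and g' : X' -> X:
g^{*}\,Rf_{!} \;\simeq\; Rf'_{!}\,g'^{*},
\qquad
g'_{*}\,f'^{!} \;\simeq\; f^{!}\,g_{*}.

% Projection formula and local duality (M on X, N on Y):
Rf_{!}M \otimes^{\mathbf{L}} N \;\simeq\; Rf_{!}\bigl( M \otimes^{\mathbf{L}} Lf^{*}N \bigr),
\qquad
R\mathcal{H}om(Rf_{!}M,\, N) \;\simeq\; Rf_{*}\,R\mathcal{H}om(M,\, f^{!}N).
```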
https://en.wikipedia.org/wiki/Six_operations
In mathematics, the rotation number is an invariant of homeomorphisms of the circle. It was first defined by Henri Poincaré in 1885, in relation to the precession of the perihelion of a planetary orbit. Poincaré later proved a theorem characterizing the existence of periodic orbits in terms of rationality of the rotation number. Suppose that f : S¹ → S¹ is an orientation-preserving homeomorphism of the circle S¹ = R/Z. Then f may be lifted to a homeomorphism F : R → R of the real line, satisfying F(x + m) = F(x) + m for every real number x and every integer m. The rotation number of f is defined in terms of the iterates of F: ω(f) = lim_{n→∞} (Fⁿ(x) − x)/n. Henri Poincaré proved that the limit exists and is independent of the choice of the starting point x. The lift F is unique modulo integers, therefore the rotation number is a well-defined element of R/Z. Intuitively, it measures the average rotation angle along the orbits of f. If f is a rotation by angle 2πN (where 0 < N < 1), then F(x) = x + N and its rotation number is N (cf. irrational rotation). The rotation number is invariant under topological conjugacy, and even under monotone topological semiconjugacy: if f and g are two homeomorphisms of the circle and h ∘ f = g ∘ h for a monotone continuous map h of the circle into itself (not necessarily homeomorphic), then f and g have the same rotation numbers. This was used by Poincaré and Arnaud Denjoy for topological classification of homeomorphisms of the circle. There are two distinct possibilities. The rotation number is continuous when viewed as a map from the group of homeomorphisms (with the C⁰ topology) of the circle into the circle.
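The defining limit can be estimated numerically by iterating a lift; a minimal sketch, using the lift of the Arnold circle map as a test family (the function names are invented for the example):

```python
import math

def rotation_number(F, x0=0.0, n=10000):
    """Estimate rho(f) = lim (F^n(x) - x)/n for a lift F of a circle map."""
    x = x0
    for _ in range(n):
        x = F(x)
    return (x - x0) / n

# Lift of the Arnold circle map x -> x + omega + (K/2pi) sin(2pi x);
# K = 0 reduces it to the rigid rotation by omega.
def arnold_lift(omega, K):
    return lambda x: x + omega + (K / (2 * math.pi)) * math.sin(2 * math.pi * x)

print(rotation_number(arnold_lift(0.25, 0.0)))  # 0.25 for the rigid rotation
```

For K > 0 the estimate exhibits mode locking: the rotation number stays constant on intervals of omega where a periodic orbit exists.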
https://en.wikipedia.org/wiki/Rotation_number
Reputation capital is the quantitative measure of some entity's reputational value in some context – a community or marketplace. In the world of Web 2.0, what is increasingly valuable is measuring the effects of collaboration and contribution to community. Reputation capital is often seen as a form of non-cash remuneration for one's efforts, and generally generates respect within the community or marketplace where the capital is generated. For a business, reputation capital is the sum of the value of all corporate intangible assets, which include: business processes, patents, trademarks; reputations for ethics and integrity; quality, safety, sustainability, security, and resilience.[3] Delivering on the functional and social expectations of the public on the one hand, and managing to build a unique identity on the other, creates trust, and this trust builds the informal framework of a company. This framework provides "return in cooperation" and produces reputation capital. A positive reputation will secure a company or organisation long-term competitive advantages. The higher the reputation capital, the lower the costs of supervising and exercising control.[4] Reputation capital is a corporate asset that can be managed, accumulated and traded in for trust, legitimisation of a position of power and social recognition, a premium price for goods and services offered, a stronger willingness among shareholders to hold on to shares in times of crisis, or a stronger readiness to invest in the company's stock.[4]
https://en.wikipedia.org/wiki/Reputation_capital
An XML transformation language is a programming language designed specifically to transform an input XML document into an output document which satisfies some specific goal. There are two special cases of transformation: XML to XML, and XML to data. As XML-to-XML transformation outputs an XML document, XML-to-XML transformation chains form XML pipelines. The XML (Extensible Markup Language) to data transformation contains some important cases. The most notable one is XML to HTML (HyperText Markup Language), as an HTML document is not an XML document. The earliest transformation languages predate the advent of XML as an SGML profile, and thus accept input in arbitrary SGML rather than specifically XML. These include the SGML-to-SGML link process definition (LPD) format defined as part of the SGML standard itself; in SGML (but not XML), the LPD file can be referenced from the document itself by a LINKTYPE declaration, similarly to the DOCTYPE declaration used for a DTD.[1] Other such transformation languages, addressing some of the deficiencies of LPDs, include Document Style Semantics and Specification Language (DSSSL) and OmniMark.[2] Newer transformation languages tend to target XML specifically, and thus only accept XML, not arbitrary SGML.
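As a minimal illustration of the XML-to-data case, the sketch below renders a hypothetical XML document as an HTML fragment using Python's standard library; the tag names are invented for the example:

```python
import xml.etree.ElementTree as ET

XML_INPUT = """
<articles>
  <article title="First post"/>
  <article title="Second post"/>
</articles>
"""

def articles_to_html(xml_text):
    """XML-to-data transformation: emit each <article> as an HTML list item.
    The output is HTML, which need not be a well-formed XML document."""
    root = ET.fromstring(xml_text)
    items = "".join(f"<li>{a.get('title')}</li>" for a in root.iter("article"))
    return f"<ul>{items}</ul>"

print(articles_to_html(XML_INPUT))
# <ul><li>First post</li><li>Second post</li></ul>
```

A dedicated transformation language such as XSLT expresses the same mapping declaratively, as template rules rather than imperative traversal.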
https://en.wikipedia.org/wiki/XML_transformation_language
A vigesimal (/vɪˈdʒɛsɪməl/ vij-ESS-im-əl) or base-20 (base-score) numeral system is based on twenty (in the same way in which the decimal numeral system is based on ten). Vigesimal is derived from the Latin adjective vicesimus, meaning 'twentieth'. In a vigesimal place system, twenty individual numerals (or digit symbols) are used, ten more than in the decimal system. One modern method of finding the extra needed symbols is to write ten as the letter A, or A₂₀, where the 20 means base 20, to write nineteen as J₂₀, and the numbers between with the corresponding letters of the alphabet. This is similar to the common computer-science practice of writing hexadecimal numerals over 9 with the letters "A–F". Another, less common method skips over the letter "I", in order to avoid confusion between I₂₀ as eighteen and one, so that the number eighteen is written as J₂₀, and nineteen is written as K₂₀. The number twenty is written as 10₂₀. In the rest of this article below, numbers are expressed in decimal notation, unless specified otherwise. For example, 10 means ten, 20 means twenty. Numbers in vigesimal notation use the convention that I means eighteen and J means nineteen. As 20 is divisible by two and five and is adjacent to 21, the product of three and seven, thus covering the first four prime numbers, many vigesimal fractions have simple representations, whether terminating or recurring (although thirds are more complicated than in decimal, repeating two digits instead of one). In decimal, dividing by three twice (ninths) gives only one-digit periods (1/9 = 0.1111..., for instance) because 9 is the number below ten. 21, however, the number adjacent to 20 that is divisible by 3, is not divisible by 9. Ninths in vigesimal have six-digit periods. As 20 has the same prime factors as 10 (two and five), a fraction will terminate in decimal if and only if it terminates in vigesimal. The prime factorization of twenty is 2² × 5, so it is not a perfect power.
However, its squarefree part, 5, is congruent to 1 (mod 4). Thus, according to Artin's conjecture on primitive roots, vigesimal has infinitely many cyclic primes, but the fraction of primes that are cyclic is not necessarily ~37.395%. An UnrealScript program that computes the lengths of recurring periods of various fractions in a given set of bases found that, of the first 15,456 primes, ~39.344% are cyclic in vigesimal. Many cultures that use a vigesimal system count in fives to twenty, then count twenties similarly. Such a system is referred to as quinary-vigesimal by linguists. Examples include Greenlandic, Iñupiaq, Kaktovik, Maya, Nunivak Cupʼig, and Yupʼik numerals.[1][2][3] Vigesimal systems are common in Africa, for example in Yoruba.[4] While the Yoruba number system may be regarded as a vigesimal system, it is complex. There is some evidence of base-20 usage in the Māori language of New Zealand with the suffix hoko- (i.e. hokowhitu, hokotahi). In several European languages like French and Danish, 20 is used as a base, at least with respect to the linguistic structure of the names of certain numbers (though a thoroughgoing consistent vigesimal system, based on the powers 20, 400, 8000, etc., is not generally used). Open Location Code uses a word-safe version of base 20 for its geocodes. The characters in this alphabet were chosen to avoid accidentally forming words. The developers scored all possible sets of 20 letters in 30 different languages for likelihood of forming words, and chose a set that formed as few recognizable words as possible.[16] The alphabet is also intended to reduce typographical errors by avoiding visually similar digits, and is case-insensitive. (A table in the original article shows the Maya numerals and the number names in Yucatec Maya, and in Nahuatl in modern orthography and in Classical Nahuatl.)
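The digit convention described above (A for ten through J for nineteen) can be sketched as a small converter:

```python
DIGITS = "0123456789ABCDEFGHIJ"  # A = ten, ..., I = eighteen, J = nineteen

def to_vigesimal(n):
    """Base-20 representation of a non-negative integer."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 20)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def from_vigesimal(s):
    return sum(DIGITS.index(c) * 20 ** i for i, c in enumerate(reversed(s)))

print(to_vigesimal(20))      # 10
print(to_vigesimal(360))     # I0  (eighteen twenties)
print(from_vigesimal("JJ"))  # 399 (nineteen twenties plus nineteen)
```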
https://en.wikipedia.org/wiki/Vigesimal
In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests. While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with the infinite monkey theorem,[1] which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Others believe that the name comes from the classic Mac OS application "The Monkey", developed by Steve Capps prior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs in MacPaint.[2] Monkey testing is also included in Android Studio as part of the standard testing tools for stress testing.[3] Monkey testing can be categorized into smart monkey tests or dumb monkey tests. Smart monkeys are usually identified by the following characteristics:[4] Some smart monkeys are also referred to as brilliant monkeys, which perform testing in line with the user's behavior and can estimate the probability of certain bugs. Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics: Monkey testing is an effective way to identify some out-of-the-box errors. Since the scenarios tested are usually ad hoc, monkey testing can also be a good way to perform load and stress testing. The intrinsic randomness of monkey testing also makes it a good way to find major bugs that can break the entire system. The setup of monkey testing is easy, and therefore suitable for any application. Smart monkeys, if properly set up with an accurate state model, can be really good at finding various kinds of bugs. The randomness of monkey testing often makes the bugs found difficult or impossible to reproduce.
Unexpected bugs found by monkey testing can also be challenging and time-consuming to analyze. In some systems, monkey testing can go on for a long time before finding a bug. For smart monkeys, their effectiveness depends heavily on the state model provided, and developing a good state model can be expensive.[1] While monkey testing is sometimes treated the same as fuzz testing,[5] and the two terms are usually used together,[6] some believe they are different, arguing that monkey testing is more about random actions while fuzz testing is more about random data input.[7] Monkey testing is also different from ad hoc testing in that ad hoc testing is performed without planning and documentation, and the objective of ad hoc testing is to divide the system randomly into subparts and check their functionality, which is not the case in monkey testing.
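A dumb monkey can be sketched in a few lines: it knows nothing about valid inputs and simply hammers a function with random strings, reporting any crash. Both functions under test (`slugify` and `first_char_upper`) are invented for illustration, the latter with a planted bug:

```python
import random

def slugify(text):
    """Function under test: lowercase, keep alphanumerics, hyphen-join words."""
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    return "-".join(words)

def first_char_upper(text):
    return text[0].upper()  # planted bug: crashes on the empty string

def dumb_monkey(fn, runs=1000, seed=0):
    """Dumb monkey test: feed fn random strings; the fixed seed makes any
    crash reproducible. Returns a crashing input, or None if none was found."""
    rng = random.Random(seed)
    for _ in range(runs):
        s = "".join(chr(rng.randrange(0x300)) for _ in range(rng.randrange(40)))
        try:
            fn(s)
        except Exception:
            return s
    return None

print(dumb_monkey(slugify) is None)           # True: no crash found
print(dumb_monkey(first_char_upper) is None)  # False: empty string crashes it
```

Fixing the random seed addresses the reproducibility problem noted above: a crashing input can be replayed exactly.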
https://en.wikipedia.org/wiki/Monkey_testing
The complex wavelet transform (CWT) is a complex-valued extension to the standard discrete wavelet transform (DWT). It is a two-dimensional wavelet transform which provides multiresolution, sparse representation, and useful characterization of the structure of an image. Further, it provides a high degree of shift-invariance in its magnitude, which was investigated in [1]. However, a drawback to this transform is that it exhibits 2^d redundancy (where d is the dimension of the signal being transformed) compared to a separable DWT. The use of complex wavelets in image processing was originally set up in 1995 by J.-M. Lina and L. Gagnon[2] in the framework of the Daubechies orthogonal filter banks.[3] It was then generalized in 1997 by Nick Kingsbury[4][5][6] of Cambridge University. In the area of computer vision, by exploiting the concept of visual contexts, one can quickly focus on candidate regions, where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for global regions, are useful in accurate detection and recognition of smaller objects. Similarly, the CWT may be applied to detect the activated voxels of cortex, and additionally temporal independent component analysis (tICA) may be utilized to extract the underlying independent sources whose number is determined by the Bayesian information criterion. The dual-tree complex wavelet transform (DTCWT) calculates the complex transform of a signal using two separate DWT decompositions (tree a and tree b). If the filters used in one are specifically designed to be different from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary. This redundancy of two provides extra information for analysis, but at the expense of extra computational power.
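The dual-tree mechanics can be sketched schematically: two parallel filter-bank decompositions, one supplying the real part and one the imaginary part of each coefficient. The filters below are crude stand-ins (Haar and a one-tap-delayed Haar); genuine DTCWT designs such as Kingsbury's q-shift filters are carefully matched pairs:

```python
import numpy as np

def analysis(x, h0, h1):
    """One DWT analysis stage: lowpass/highpass filtering, then downsample by 2."""
    lo = np.convolve(x, h0, mode="same")[::2]
    hi = np.convolve(x, h1, mode="same")[::2]
    return lo, hi

# Tree a: Haar filters. Tree b: placeholder filters delayed by one tap.
s = 1 / np.sqrt(2)
h0a, h1a = np.array([s, s]), np.array([s, -s])
h0b, h1b = np.array([0.0, s, s]), np.array([0.0, s, -s])

x = np.random.default_rng(0).standard_normal(16)
_, hi_a = analysis(x, h0a, h1a)  # tree a -> real part
_, hi_b = analysis(x, h0b, h1b)  # tree b -> imaginary part
detail = hi_a + 1j * hi_b        # complex detail coefficients
print(detail.shape)              # (8,): twice the data of a single real DWT
```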
It also provides approximate shift-invariance (unlike the DWT) yet still allows perfect reconstruction of the signal. The design of the filters is particularly important for the transform to occur correctly, and the necessary characteristics are:
https://en.wikipedia.org/wiki/Complex_wavelet_transform
The suffix -ology is commonly used in the English language to denote a field of study. The -ology ending is a combination of the letter o plus -logy, in which the letter o is used as an interconsonantal letter which, for phonological reasons, precedes the morpheme suffix -logy.[1] -Logy is a suffix in the English language, used with words originally adapted from Ancient Greek ending in -λογία (-logia).[2] English names for fields of study are usually created by taking a root (the subject of the study) and appending the suffix -logy to it with the interconsonantal o placed in between (with an exception explained below). For example, the word dermatology comes from the root dermato- plus -logy.[3] Sometimes, an excrescence, the addition of a consonant, must be added to avoid poor construction of words. There are additional uses for the suffix, such as to describe a subject rather than the study of it (e.g. technology). The suffix is often humorously appended to other English words to create nonce words. For example, stupidology would refer to the study of stupidity; beerology would refer to the study of beer.[1] Not all scientific studies are suffixed with -ology. When the root word ends with the letter "L" or a vowel, exceptions occur. For example, the study of mammals would take the root word mammal and append -ology to it, resulting in mammalology, but because its final letter is an "L", it instead creates mammalogy. There are exceptions to this exception too. For example, the word angelology, with the root word angel, ends in an "L" but is not spelled angelogy according to the "L" rule.[4][5] The terminal -logy is used to denote a discipline. These terms often utilize the suffix -logist or -ologist to describe one who studies the topic. In this case, the suffix -ology would be replaced with -ologist. For example, one who studies biology is called a biologist. This list of words contains all words that end in -ology.
It includes words that denote a field of study and those that do not, as well as commonly misspelled words that do not end in -ology but are often written as such.
https://en.wikipedia.org/wiki/List_of_words_ending_in_ology
A reverse connection is usually used to bypass firewall restrictions on open ports.[1] A firewall usually blocks incoming connections on closed ports, but does not block outgoing traffic. In a normal forward connection, a client connects to a server through the server's open port, but in the case of a reverse connection, the client opens the port that the server connects to.[2] The most common way a reverse connection is used is to bypass firewall and router security restrictions.[3] For example, a backdoor running on a computer behind a firewall that blocks incoming connections can easily open an outbound connection to a remote host on the Internet. Once the connection is established, the remote host can send commands to the backdoor. Remote administration tools (RATs) that use a reverse connection usually send SYN packets to the client's IP address. The client listens for these SYN packets and accepts the desired connections. If a computer is sending SYN packets or is connected to the client's computer, the connections can be discovered by using the netstat command or a common port listener like "Active Ports". If the Internet connection is closed down and an application still tries to connect to remote hosts, it may be infected with malware. Keyloggers and other malicious programs are harder to detect once installed, because they connect only once per session. Note that SYN packets by themselves are not necessarily a cause for alarm, as they are a standard part of all TCP connections. There are legitimate uses for reverse connections, for example to allow hosts behind a NAT firewall to be administered remotely. These hosts do not normally have public IP addresses, and so must either have ports forwarded at the firewall, or open reverse connections to a central administration server.
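The legitimate-administration case can be sketched with plain sockets, run here entirely on localhost; the names and the tiny b"status"/b"ok" command protocol are invented for illustration:

```python
import socket
import threading

info, results = {}, []
ready = threading.Event()

def admin_listener():
    """Administrator's machine: listens and waits for the reverse connection."""
    with socket.create_server(("127.0.0.1", 0)) as srv:
        info["port"] = srv.getsockname()[1]
        ready.set()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"status")          # issue a command down the tunnel
            results.append(conn.recv(1024))  # collect the host's reply

def nat_host():
    """Host behind NAT/firewall: opens the OUTBOUND connection, which
    firewalls typically allow, then answers commands over it."""
    with socket.create_connection(("127.0.0.1", info["port"])) as conn:
        if conn.recv(1024) == b"status":
            conn.sendall(b"ok")

t = threading.Thread(target=admin_listener)
t.start()
ready.wait()
nat_host()
t.join()
print(results[0])  # b'ok'
```

The roles are inverted relative to a forward connection: the administered host dials out, and the administrator's machine is the one that listens.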
https://en.wikipedia.org/wiki/Reverse_connection
In mathematics, Nevanlinna's criterion in complex analysis, proved in 1920 by the Finnish mathematician Rolf Nevanlinna, characterizes holomorphic univalent functions on the unit disk which are starlike. Nevanlinna used this criterion to prove the Bieberbach conjecture for starlike univalent functions. A univalent function h on the unit disk satisfying h(0) = 0 and h′(0) = 1 is starlike, i.e. has image invariant under multiplication by real numbers in [0,1], if and only if zh′(z)/h(z) has positive real part for |z| < 1 and takes the value 1 at 0. Note that, by applying the result to h(rz)/r, the criterion applies on any disc |z| < r with only the requirement that h(0) = 0 and h′(0) ≠ 0. Let h(z) be a starlike univalent function on |z| < 1 with h(0) = 0 and h′(0) = 1. For t < 0, define[1] f_t(z) = h⁻¹(eᵗh(z)), a semigroup of holomorphic mappings of D into itself fixing 0. Moreover h is the Koenigs function for the semigroup f_t. By the Schwarz lemma, |f_t(z)| decreases as t increases. Setting w = f_t(z), differentiating in t, dividing by |w|², taking reciprocals and letting t go to 0 gives Re(zh′(z)/h(z)) ≥ 0 for all |z| < 1. Since the left hand side is a harmonic function, the maximum principle implies the inequality is strict. Conversely, if zh′(z)/h(z) has positive real part and g(0) = 1, then h can vanish only at 0, where it must have a simple zero. Now (d/dθ) arg h(re^{iθ}) = Re(zh′(z)/h(z)) > 0 at z = re^{iθ}. Thus as z traces the circle z = re^{iθ}, the argument of the image h(re^{iθ}) increases strictly. By the argument principle, since h has a simple zero at 0, it circles the origin just once. The interior of the region bounded by the curve it traces is therefore starlike. If a is a point in the interior, then the number of solutions N(a) of h(z) = a with |z| < r is given by the argument-principle integral over |z| = r. Since this is an integer, depends continuously on a, and N(0) = 1, it is identically 1. So h is univalent and starlike in each disk |z| < r and hence everywhere.
Constantin Carathéodory proved in 1907 that if g(z) = 1 + c₁z + c₂z² + ⋯ is a holomorphic function on the unit disk D with positive real part, then[2][3] |cₙ| ≤ 2. In fact it suffices to show the result with g replaced by g_r(z) = g(rz) for any r < 1 and then pass to the limit r = 1. In that case g extends to a continuous function on the closed disc with positive real part, and by the Schwarz formula it is determined by the boundary values of its real part; these boundary values define a probability measure, of which each coefficient cₙ is a moment, whence |cₙ| ≤ 2. Let h(z) = z + a₂z² + a₃z³ + ⋯ be a univalent starlike function in |z| < 1. Nevanlinna (1921) proved that |aₙ| ≤ n. In fact, by Nevanlinna's criterion, g(z) = zh′(z)/h(z) has positive real part for |z| < 1, so by Carathéodory's lemma its coefficients satisfy |cₙ| ≤ 2. On the other hand, rewriting zh′(z) = g(z)h(z) and comparing coefficients gives the recurrence relation (n − 1)aₙ = Σ_{k=1}^{n−1} c_{n−k} a_k, where a₁ = 1. Thus |aₙ| ≤ (2/(n − 1)) Σ_{k=1}^{n−1} |a_k|, so it follows by induction that |aₙ| ≤ n.
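The induction that closes Nevanlinna's argument can be written out explicitly; a sketch using the Carathéodory bound:

```latex
% Given (n-1)\,a_n = \sum_{k=1}^{n-1} c_{n-k}\,a_k with |c_j| \le 2
% (Caratheodory), and the inductive hypothesis |a_k| \le k for k < n:
(n-1)\,|a_n| \;\le\; \sum_{k=1}^{n-1} |c_{n-k}|\,|a_k|
  \;\le\; 2 \sum_{k=1}^{n-1} k \;=\; n(n-1),
\qquad \text{hence} \quad |a_n| \le n .
```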
https://en.wikipedia.org/wiki/Nevanlinna%27s_criterion
In computational complexity, problems that are in the complexity class NP but are neither in the class P nor NP-complete are called NP-intermediate, and the class of such problems is called NPI. Ladner's theorem, shown in 1975 by Richard E. Ladner,[1] is a result asserting that, if P ≠ NP, then NPI is not empty; that is, NP contains problems that are neither in P nor NP-complete. Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty. Under the assumption that P ≠ NP, Ladner explicitly constructs a problem in NPI, although this problem is artificial and otherwise uninteresting. It is an open question whether any "natural" problem has the same property: Schaefer's dichotomy theorem provides conditions under which classes of constrained Boolean satisfiability problems cannot be in NPI.[2][3] Some problems that are considered good candidates for being NP-intermediate are the graph isomorphism problem, and decision versions of factoring and the discrete logarithm. Under the exponential time hypothesis, there exist natural problems that require quasi-polynomial time, and can be solved in that time, including finding a large disjoint set of unit disks from a given set of disks in the hyperbolic plane,[4] and finding a graph with few vertices that is not an induced subgraph of a given graph.[5] The exponential time hypothesis also implies that no quasi-polynomial-time problem can be NP-complete, so under this assumption these problems must be NP-intermediate.
https://en.wikipedia.org/wiki/Ladner%27s_theorem
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design.[1] The performance of the closed-loop system is monitored by the coefficient diagram. The most considerable advantages of CDM can be listed as follows:[2] It is usually required that the controller for a given plant should be designed under some practical limitations: the controller is desired to be of minimum degree, minimum phase (if possible) and stable, and it must have sufficient bandwidth within power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. CDM controllers designed while considering all these problems are of the lowest degree, have a convenient bandwidth and yield a unit-step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects and low economic cost.[7] Although the main principles of CDM have been known since the 1950s,[8][9][10] the first systematic method was proposed by Shunji Manabe.[11] He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of previous experience and knowledge of controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems.
Many control systems have been designed successfully using CDM.[12][13] It is very easy to design a controller under the conditions of stability, time-domain performance and robustness. The close relations between these conditions and the coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameter tuning.
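Manabe's target characteristic polynomial can be sketched numerically. The sketch below assumes the usual CDM definitions — stability indices γᵢ = aᵢ²/(aᵢ₊₁aᵢ₋₁) and equivalent time constant τ = a₁/a₀ — and Manabe's standard form γ₁ = 2.5, γᵢ = 2 for i ≥ 2:

```python
def cdm_target(tau, gammas):
    """Coefficients a_0..a_n (ascending powers of s) of a CDM target
    characteristic polynomial, given the equivalent time constant tau and
    stability indices gamma_1..gamma_{n-1}:
        a_0 = 1,  a_1 = tau,
        a_i = tau**i / prod_{j=1}^{i-1} gamma_j**(i-j).
    """
    a = [1.0, float(tau)]
    for i in range(2, len(gammas) + 2):
        denom = 1.0
        for j in range(1, i):
            denom *= gammas[j - 1] ** (i - j)
        a.append(tau ** i / denom)
    return a

# Manabe's standard form for a 4th-order target polynomial.
a = cdm_target(tau=1.0, gammas=[2.5, 2.0, 2.0])
print(a)  # [1.0, 1.0, 0.4, 0.08, 0.008]
```

The resulting coefficients reproduce the chosen stability indices exactly, e.g. a₁²/(a₀a₂) = 2.5.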
https://en.wikipedia.org/wiki/Coefficient_diagram_method
A graph reduction machine is a special-purpose computer built to perform combinator calculations by graph reduction. Examples include the SKIM ("S-K-I machine") computer, built at the University of Cambridge Computer Laboratory,[1] the multiprocessor GRIP ("Graph Reduction In Parallel") computer, built at University College London,[2][3] and the Reduceron, which was implemented on an FPGA with the single purpose of executing Haskell.[4][5]
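The combinator reduction such machines perform in hardware can be sketched in software; a minimal normal-order SKI reducer over nested tuples (illustrative only — a real machine shares graph nodes and overwrites them in place rather than rebuilding trees):

```python
S, K, I = "S", "K", "I"

def app(f, x):
    return ("app", f, x)  # an application node; leaves are combinator names

def step(e):
    """Try one reduction somewhere in the tree; return (new_expr, changed)."""
    if isinstance(e, tuple):
        if e[1] == I:                                   # I x  ->  x
            return e[2], True
        if isinstance(e[1], tuple) and e[1][1] == K:    # K x y  ->  x
            return e[1][2], True
        if (isinstance(e[1], tuple) and isinstance(e[1][1], tuple)
                and e[1][1][1] == S):                   # S f g x -> (f x)(g x)
            f, g, x = e[1][1][2], e[1][2], e[2]
            return app(app(f, x), app(g, x)), True      # x is shared, not copied
        f, ch1 = step(e[1])
        x, ch2 = step(e[2])
        return app(f, x), ch1 or ch2
    return e, False

def reduce_ski(expr):
    changed = True
    while changed:
        expr, changed = step(expr)
    return expr

# S K K behaves as the identity: S K K x -> (K x)(K x) -> x
print(reduce_ski(app(app(app(S, K), K), "x")))  # x
```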
https://en.wikipedia.org/wiki/Graph_reduction_machine
Contrastive Hebbian learning is a biologically plausible form of Hebbian learning. It is based on the contrastive divergence algorithm, which has been used to train a variety of energy-based latent variable models.[1] In 2003, contrastive Hebbian learning was shown to be equivalent in power to the backpropagation algorithms commonly used in machine learning.[2]
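The core update — the difference between a Hebbian term measured with the output clamped to the target and one measured with the network running freely — can be sketched for a single linear unit (where it reduces to the delta rule; full contrastive Hebbian learning involves settling dynamics in networks with hidden units). The toy task is invented for illustration:

```python
import random

def chl_update(w, x, target, lr=0.1):
    """One contrastive Hebbian step for a single linear unit.
    Free (minus) phase: the unit produces its own output y = w.x.
    Clamped (plus) phase: the output is clamped to the target.
    Weight change: lr * (clamped Hebbian term - free Hebbian term)."""
    y_free = sum(wi * xi for wi, xi in zip(w, x))
    y_clamped = target
    return [wi + lr * (y_clamped * xi - y_free * xi) for wi, xi in zip(w, x)]

# Hypothetical toy task: learn y = x0 - x1 from random examples.
rng = random.Random(0)
w = [0.0, 0.0]
for _ in range(500):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    w = chl_update(w, x, target=x[0] - x[1])
print([round(wi, 2) for wi in w])  # [1.0, -1.0]
```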
https://en.wikipedia.org/wiki/Contrastive_Hebbian_learning
The unary numeral system is the simplest numeral system to represent natural numbers:[1] to represent a number N, a symbol representing 1 is repeated N times.[2] In the unary system, the number 0 (zero) is represented by the empty string, that is, the absence of a symbol. Numbers 1, 2, 3, 4, 5, 6, ... are represented in unary as 1, 11, 111, 1111, 11111, 111111, ...[3] Unary is a bijective numeral system. However, although it has sometimes been described as "base 1",[4] it differs in some important ways from positional notations, in which the value of a digit depends on its position within a number. For instance, the unary form of a number can be exponentially longer than its representation in other bases.[5] The use of tally marks in counting is an application of the unary numeral system. For example, using the tally mark | (𝍷), the number 3 is represented as |||. In East Asian cultures, the number 3 is represented as 三, a character drawn with three strokes.[6] (One and two are represented similarly.) In China and Japan, the character 正, drawn with 5 strokes, is sometimes used to represent 5 as a tally.[7][8] Unary numbers should be distinguished from repunits, which are also written as sequences of ones but have their usual decimal numerical interpretation. Addition and subtraction are particularly simple in the unary system, as they involve little more than string concatenation.[9] The Hamming weight or population count operation that counts the number of nonzero bits in a sequence of binary values may also be interpreted as a conversion from unary to binary numbers.[10] However, multiplication is more cumbersome and has often been used as a test case for the design of Turing machines.[11][12][13] Compared to standard positional numeral systems, the unary system is inconvenient and hence is not used in practice for large calculations. It occurs in some decision problem descriptions in theoretical computer science (e.g.
some P-complete problems), where it is used to "artificially" decrease the run-time or space requirements of a problem. For instance, the problem of integer factorization is suspected to require more than a polynomial function of the length of the input as run-time if the input is given in binary, but it only needs linear runtime if the input is presented in unary.[14] However, this is potentially misleading. Using a unary input is slower for any given number, not faster; the distinction is that a binary (or larger base) input is proportional to the base-2 (or larger base) logarithm of the number while unary input is proportional to the number itself. Therefore, while the run-time and space requirement in unary looks better as a function of the input size, it does not represent a more efficient solution.[15] In computational complexity theory, unary numbering is used to distinguish strongly NP-complete problems from problems that are NP-complete but not strongly NP-complete. A problem in which the input includes some numerical parameters is strongly NP-complete if it remains NP-complete even when the size of the input is made artificially larger by representing the parameters in unary. For such a problem, there exist hard instances for which all parameter values are at most polynomially large.[16] In addition to the application in tally marks, unary numbering is used as part of some data compression algorithms such as Golomb coding.[17] It also forms the basis for the Peano axioms for formalizing arithmetic within mathematical logic.[18] A form of unary notation called Church encoding is used to represent numbers within lambda calculus.[19] Some email spam filters tag messages with a number of asterisks in an e-mail header such as X-Spam-Bar or X-SPAM-LEVEL. The larger the number, the more likely the email is considered spam. Using a unary representation instead of a decimal number lets the user search for messages with a given rating or higher.
For example, searching for **** yields messages with a rating of at least 4.[20]
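The basic unary operations described above (representation, addition by concatenation, and conversion to an ordinary integer as a population count) can be sketched in a few lines. This is an illustrative sketch, not part of the article:

```python
def to_unary(n: int) -> str:
    """A natural number n is written as n repetitions of the symbol '1';
    zero is the empty string."""
    return "1" * n

def unary_add(a: str, b: str) -> str:
    """Addition in unary is little more than string concatenation."""
    return a + b

def unary_to_int(u: str) -> int:
    """Reading a unary numeral back is a population count: count the symbols."""
    return len(u)
```

Note how `to_unary(n)` has length proportional to n itself, illustrating why unary representations are exponentially longer than positional ones.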
https://en.wikipedia.org/wiki/Unary_numeral_system
"Out of left field" (also "out in left field", and simply "left field" or "leftfield") is American slang meaning "unexpected", "odd" or "strange". In Safire's Political Dictionary, columnist William Safire writes that the phrase "out of left field" means "out of the ordinary, out of touch, far out."[1] The variation "out in left field" means alternately "removed from the ordinary, unconventional" or "out of contact with reality, out of touch."[1] He opines that the term has only a tangential connection to the political left or the Left Coast, political slang for the coastal states of the American west.[1] Popular music historian Arnold Shaw wrote in 1949 for the Music Library Association that the term "out of left field" was first used in the idiomatic sense of "from out of nowhere" by the music industry to refer to a song that unexpectedly performed well in the market.[2] Based on baseball lingo, a sentence such as "That was a hit out of left field" was used by song pluggers who promoted recordings and sheet music, to describe a song requiring no effort to sell.[2] A "rocking chair hit" was the kind of song which came "out of left field" and sold itself, allowing the song plugger to relax.[2] A 1943 article in Billboard expands the use to describe people unexpectedly drawn to radio broadcasting: Latest twist in radio linked with the war is the exceptional number of quasi-clerical groups and individuals who have come out of left field in recent months and are trying to buy, not promote, radio time.[3] Further instances of the phrase were published in the 1940s, including in Billboard and once in a humor book titled How to Be Poor.[4][5][6] In May 1981, Safire asked readers of The New York Times to send him any ideas they had regarding the origin of the phrase "out of left field"—he did not know where it came from, and did not refer to Shaw's work.[7] On June 28, 1981, he devoted most of his Sunday column to the phrase, offering up various responses he received.[8][9] The earliest scholarly
citation Safire could find was a 1961 article in the journal American Speech, which defined the variation "out in left field" as meaning "disoriented, out of contact with reality."[9][10] Linguist John Algeo told Safire that the phrase most likely came from baseball observers rather than from baseball fans or players.[11] In 1998, American English professor Robert L. Chapman, in his book American Slang, wrote that the phrase "out of left field" was in use by 1953.[12] He did not cite Shaw's work and he did not point to printed instances of the phrase in the 1940s. Marcus Callies, an associate professor of English and philology at the University of Mainz in Germany, wrote that "the precise origin is unclear and disputed", referring to Christine Ammer's conclusion in The American Heritage Dictionary of Idioms.[13] Callies suggested that the left fielder in baseball might throw the ball to home plate in an effort to get the runner out before he scores, and that the ball, coming from behind the runner out of left field, would surprise the runner.[13] According to the 2007 Concise New Partridge Dictionary of Slang and Unconventional English, the phrase came from baseball terminology, referring to a play in which the ball is thrown from the area covered by the left fielder to either home plate or first base, surprising the runner. Variations include "out in left field" and simply "left field".[14] At the site of the University of Illinois Medical Center in Chicago, Illinois, a 2008 plaque marks the site of the former West Side Park, where the Chicago Cubs played from 1893 to 1915. The plaque states that the location of the county hospital and its psychiatric patients just beyond left field is the origin of the phrase "way out in left field."[15]
https://en.wikipedia.org/wiki/Out_of_left_field
Build automation is the practice of building software systems in a relatively unattended fashion. The build is configured to run with minimized or no software developer interaction and without using a developer's personal computer. Build automation encompasses the act of configuring the build system as well as the resulting system itself. Build automation encompasses both sequencing build operations via non-interactive interface tools and running builds on a shared server.[1] Build automation tools allow for sequencing the tasks of building software via a non-interactive interface. Existing tools such as Make can be used via a custom configuration file or the command line. Custom tools such as shell scripts can also be used, although they become increasingly cumbersome as the codebase grows more complex.[2] Some tools, such as shell scripts, are task-oriented: they encode sequences of commands to perform, usually with minimal conditional logic. Others, such as Make, are product-oriented, a form of declarative programming: they build a product, a.k.a. target, based on configured dependencies.[3] A build server is a server set up to run builds. As opposed to a personal computer, a server allows for a more consistent and available build environment. Traditionally, a build server was a local computer dedicated as a shared resource instead of being used as a personal computer. Today, there are many cloud computing, software as a service (SaaS) websites for building. Without a build server, developers typically rely on their personal computers for building, leading to several drawbacks, such as (but not limited to): A continuous integration server is a build server that is set up to build in a relatively frequent way – often on each code commit. A build server may also be incorporated into an ARA tool or ALM tool.
Typical build triggering options include: Automating the build process is a required step for implementing continuous integration and continuous delivery (CI/CD) – both of which are considered best practices for software development.[4] Benefits of build automation include:[5]
https://en.wikipedia.org/wiki/Build_automation
In probability theory, the Palm–Khintchine theorem, the work of Conny Palm and Aleksandr Khinchin, expresses that a large number of renewal processes, not necessarily Poissonian, when combined ("superimposed") will have Poissonian properties.[1] It is used to generalise the behaviour of users or clients in queuing theory. It is also used in dependability and reliability modelling of computing and telecommunications. According to Heyman and Sobel (2003),[1] the theorem states that the superposition of a large number of independent equilibrium renewal processes, each with a finite intensity, behaves asymptotically like a Poisson process: Let {N_i(t), t ≥ 0}, i = 1, 2, …, m be independent renewal processes and {N(t), t > 0} be the superposition of these processes. Denote by X_{jm} the time between the first and the second renewal epochs in process j. Define N_{jm}(t) as the j-th counting process, F_{jm}(t) = P(X_{jm} ≤ t) and λ_{jm} = 1/E(X_{jm}). If the following assumptions hold: 1) For all sufficiently large m: λ_{1m} + λ_{2m} + ⋯ + λ_{mm} = λ < ∞; 2) Given ε > 0, for every t > 0 and sufficiently large m: F_{jm}(t) < ε for all j; then the superposition N_{0m}(t) = N_{1m}(t) + N_{2m}(t) + ⋯ + N_{mm}(t) of the counting processes approaches a Poisson process as m → ∞.
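A small simulation can illustrate the theorem. The sketch below is an illustration, not part of the article; the uniform inter-arrival distribution and all function names are choices of this example. It superposes many sparse renewal processes with decidedly non-exponential inter-arrival times; by the Palm–Khintchine theorem, the merged event stream behaves approximately like a Poisson process of the total rate.

```python
import random

def renewal_arrivals(rate, horizon, rng):
    """One renewal process with uniform (hence non-exponential) inter-arrival
    times of mean 1/rate, observed on the interval (0, horizon]."""
    t, times = 0.0, []
    while True:
        t += rng.uniform(0.0, 2.0 / rate)  # uniform on (0, 2/rate), mean 1/rate
        if t > horizon:
            return times
        times.append(t)

def superpose(m, total_rate, horizon, seed=0):
    """Superpose m independent sparse renewal processes, each of rate
    total_rate/m, and return the merged, sorted event times."""
    rng = random.Random(seed)
    events = []
    for _ in range(m):
        events.extend(renewal_arrivals(total_rate / m, horizon, rng))
    return sorted(events)
```

As m grows, each component process contributes only rarely (assumption 2 of the theorem), and the inter-event times of the merged stream approach an exponential distribution, the signature of a Poisson process.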
https://en.wikipedia.org/wiki/Palm%E2%80%93Khintchine_theorem
Line fitting is the process of constructing a straight line that has the best fit to a series of data points. Several methods exist, considering:
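One widely used method is ordinary least squares, which chooses the slope and intercept minimizing the sum of squared vertical residuals. A minimal sketch (illustrative, not from the article):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to points (xs, ys).
    Returns the slope a and intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)          # spread of x
    sxy = sum((x - mean_x) * (y - mean_y)             # x-y covariation
              for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x                           # line passes through the means
    return a, b
```

For points lying exactly on a line, the fit recovers that line; otherwise it minimizes the squared vertical distances.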
https://en.wikipedia.org/wiki/Line_fitting
In telecommunications, the (digital) cliff effect or brick-wall effect is a sudden loss of digital signal reception. Unlike analog signals, which gradually fade when signal strength decreases or electromagnetic interference or multipath increases, a digital signal provides data which is either perfect or non-existent at the receiving end. It is named for a graph of reception quality versus signal quality, where the digital signal "falls off a cliff" instead of having a gradual rolloff.[1] This is an example of an EXIT chart. The phenomenon is primarily seen in broadcasting, where signal strength is liable to vary, rather than in recorded media, which generally have a good signal. However, it may be seen in significantly damaged media that is at the edge of readability. This effect can most easily be seen on digital television, including both satellite TV and over-the-air terrestrial TV. While forward error correction is applied to the broadcast, when a minimum threshold of signal quality (a maximum bit error rate) is reached it is no longer enough for the decoder to recover. The picture may break up (macroblocking), lock on a freeze frame, or go blank. Causes include rain fade or solar transit on satellites, and temperature inversions and other weather or atmospheric conditions causing anomalous propagation on the ground. Three issues particularly manifest the cliff effect. Firstly, anomalous conditions will cause occasional signal degradation. Secondly, if one is located in a fringe area, where the signal at the antenna is just barely strong enough to receive, then usual variation in signal quality will cause relatively frequent signal degradation, and a very small change in overall signal quality can have a dramatic impact on the frequency of signal degradation – one incident per hour (not significantly affecting watchability) versus problems every few seconds or continuous problems.
Thirdly, in some cases, where the signal is beyond the cliff (in unwatchable territory), viewers who were once able to receive a degraded signal from analog stations will find after digital transition that there is no available signal in rural, fringe or mountainous regions.[2] The cliff effect is a particularly serious issue for mobile TV, as signal quality may vary significantly, particularly if the receiver is moving rapidly, as in a car. Hierarchical modulation and coding can provide a compromise by supporting two or more streams with different robustness parameters and allowing receivers to scale back to a lower definition (usually from HDTV to SDTV, or possibly from SDTV to LDTV) before dropping out completely. Two-level hierarchical modulation is supported in principle by the European DVB-T digital terrestrial television standard.[3] However, layered source coding, such as provided by Scalable Video Coding, is not supported. HD Radio broadcasting, officially used only in the United States, is one system designed to have an analog fallback. Receivers are designed to immediately switch to the analog signal upon losing a lock on digital, but only as long as the tuned station operates in hybrid digital mode (the official meaning of "HD"). In the future all-digital mode, there is no analog to fall back to at the edge of the digital cliff. This applies only to the main channel simulcast, and not to any subchannels, because they have nothing to fall back to. It is also important for the station's broadcast engineer to make sure that the audio signal is synchronized between analog and digital, or the cliff effect will still cause a jump slightly forward or backward in the radio program. The cliff effect is also heard on mobile phones, where one or both sides of the conversation may break up, possibly resulting in a dropped call. Other forms of digital radio also suffer from this.
https://en.wikipedia.org/wiki/Cliff_effect
In linguistics, a marker is a free or bound morpheme that indicates the grammatical function of the marked word, phrase, or sentence. Most characteristically, markers occur as clitics or inflectional affixes. In analytic languages and agglutinative languages, markers are generally easily distinguished. In fusional languages and polysynthetic languages, this is often not the case. For example, in Latin, a highly fusional language, the word amō ("I love") is marked by the suffix -ō for indicative mood, active voice, first person, singular, present tense. Analytic languages tend to have a relatively limited number of markers. Markers should be distinguished from the linguistic concept of markedness. An unmarked form is the basic "neutral" form of a word, typically used as its dictionary lemma, such as—in English—for nouns the singular (e.g. cat versus cats), and for verbs the infinitive (e.g. to eat versus eats, ate and eaten). Unmarked forms (e.g. the nominative case in many languages) tend to be less likely to have markers, but this is not true for all languages (compare Latin). Conversely, a marked form may happen to have a zero affix, like the genitive plural of some nouns in Russian (e.g. сапо́г). In some languages, the same forms of a marker have multiple functions, such as when used in different cases or declensions (for example -īs in Latin).
https://en.wikipedia.org/wiki/Marker_(linguistics)
SBMV Protocol is an advanced encrypted telemetry protocol that uses short-burst, multi-version technology.[1] Telemetry technology enables "the remote measurement and reporting of information". Telemetry is also a "highly automated communications process by which measurements are made and other data collected at remote or inaccessible points and transmitted to receiving equipment for monitoring, display, and recording."[2][3] SBMV technology is based on quantum cryptography, "an emerging technology in which two parties may simultaneously generate shared, secret cryptographic key material using the transmission of quantum states of light. The security of these transmissions is based on the inviolability of the laws of quantum mechanics and information-theoretically secure post-processing methods."[4] SBMV Protocol encrypts data by quickly breaking text, numerical, and/or image data into tens of thousands of small packets that are then copied into hundreds of thousands of slightly altered versions. This technology renders interception useless because it becomes statistically impossible for the intercepting party to have enough time and computing resources to select which version is the true and correct version among millions of versions of the data.[5] SBMV Protocol was first created in 1971 for spacecraft, missile, RPV, oil rig, and chemical plant telemetry and telecommand links by mathematicians David Yeeda and Andrei Krolovich, who formed The Aeorads Company for commercial and military aerospace applications of SBMV technology. SBMV technology was further developed with Internet Protocol applications at Wright-Patterson Air Force Base (United States Air Force Research Laboratory) in Ohio, where defense contractor Aeorads Company refined the technology for web-based uses in aircraft, spacecraft, and missiles.
A non-classified civilian version of SBMV technology was also created for chemical plants, remote oil rigs, and alternative-energy wind farms (primarily for offshore and very remote facilities).[6][7]
https://en.wikipedia.org/wiki/SBMV_Protocol
In politics, a litmus test is a question asked of a potential candidate for high office, the answer to which would determine whether the nominating official would proceed with the appointment or nomination. The expression is a metaphor based on the litmus test in chemistry, in which one is able to test the general acidity of a substance, but not its exact pH. Those who must approve a nominee may also be said to apply a litmus test to determine whether the nominee will receive their vote. In these contexts, the phrase comes up most often with respect to nominations to the judiciary. The metaphor of a litmus test has been used in American politics since the mid-twentieth century.[1] During United States presidential election campaigns, litmus tests the nominees might use are more fervently discussed when vacancies for the U.S. Supreme Court appear likely. Advocates for various social ideas or policies often wrangle heatedly over what litmus test, if any, the president ought to apply when nominating a new candidate for a spot on the Supreme Court. Support for, or opposition to, abortion is one example of a common decisive factor in single-issue politics; another might be support of strict constructionism. Defenders of litmus tests argue that some issues are so important that they overwhelm other concerns (especially if there are other qualified candidates that pass the test). The political litmus test is often used when appointing judges. However, this test to determine the political attitude of a nominee is not without error. Supreme Court Chief Justice Earl Warren was appointed under the impression that he was conservative, but his tenure was marked by liberal dissents. Today, the litmus test is used along with other methods, such as past voting records, when selecting political candidates.
The Republican Liberty Caucus is opposed to litmus tests for judges, stating in their goals that they "oppose 'litmus tests' for judicial nominees who are qualified and recognize that the sole function of the courts is to interpret the Constitution. We oppose judicial amendments or the crafting of new law by any court."[2] Professor Eugene Volokh believes that the legitimacy of such tests is a "tough question", and argues that they may undermine the fairness of the judiciary: Imagine a justice testifies under oath before the Senate about his views on (say) abortion, and later reaches a contrary decision [after carefully examining the arguments]. "Perjury!" partisans on the relevant side will likely cry: They'll assume the statement made with an eye towards confirmation was a lie, rather than that the justice has genuinely changed his mind. Even if no calls for impeachment follow, the rancor and contempt towards the justice would be much greater than if he had simply disappointed his backers' expectations. Faced with that danger, a justice may well feel pressured into deciding the way that he testified, and rejecting attempts at persuasion. Yet that would be a violation of the judge's duty to sincerely consider the parties' arguments.[3]
https://en.wikipedia.org/wiki/Litmus_test_(politics)
In the distribution and logistics of many types of products, track and trace or tracking and tracing concerns a process of determining the current and past locations (and other information) of a unique item or property. Mass serialization is the process that manufacturers go through to assign and mark each of their products with a unique identifier such as an Electronic Product Code (EPC) for track and trace purposes. The marking or "tagging" of products is usually completed within the manufacturing process through the use of various combinations of human-readable or machine-readable technologies such as DataMatrix barcodes or RFID. The track and trace concept can be supported by means of reckoning and reporting of the position of vehicles and containers with the property of concern, stored, for example, in a real-time database. This approach leaves the task of composing a coherent depiction from the subsequent status reports. Another approach is to report the arrival or departure of the object, recording the identification of the object, the location where it was observed, the time, and the status. This approach leaves the task of verifying the reports regarding consistency and completeness. An example of this method might be the package tracking provided by shippers, such as the United States Postal Service, Deutsche Post, Royal Mail, United Parcel Service, AirRoad, or FedEx. The international standards organization EPCglobal under GS1 has ratified the EPC network standards (esp. the EPC information services EPCIS standard) which codify the syntax and semantics for supply chain events and the secure method for selectively sharing supply chain events with trading partners. These standards for tracking and tracing have been used in successful deployments in many industries and there are now a wide range of products that are certified as being compatible with these standards.
In response to a growing number of recall incidents (food, pharmaceutical, toys, etc.), a wave of software, hardware, consulting and systems vendors have emerged over the last few years to offer a range of traceability solutions and tools for industry. Radio-frequency identification and barcodes are two common technology methods used to deliver traceability.[1] RFID is synonymous with track-and-trace solutions, and has a critical role to play in supply chains. RFID is a code-carrying technology, and can be used in place of a barcode to enable non-line-of-sight reading. Deployment of RFID was earlier inhibited by cost limitations but the usage is now increasing. Barcoding is a common and cost-effective method used to implement traceability at both the item and case level. Variable data in a barcode or a numeric or alphanumeric code format can be applied to the packaging or label. The secure data can be used as a pointer to traceability information and can also correlate with production data such as time to market and product quality.[2] Packaging converters have a choice of three different classes of technology to print barcodes: Serialization facilitates supply chain agility: visibility into supply chain activities and the ability to take responsive action. Particular benefits include the ability to recognise and isolate counterfeit products and to improve the efficiency of product recall management.[3] Consumers can access web sites to trace the origins of their purchased products or to find the status of shipments. Consumers can type a code found on an item into a search box at the tracing website and view information. This can also be done via a smartphone taking a picture of a 2D barcode and thereby opening up a website that verifies the product (i.e. product authentication). Serialization has a significant and legally endorsed safety role in the pharmaceutical industry.[4]
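The arrival/departure reporting approach described above amounts to accumulating event records (item, location, time, status) and later reconstructing an item's history from them. A minimal sketch follows; the record fields and names are this example's assumptions, not the actual EPCIS schema:

```python
from dataclasses import dataclass

@dataclass
class TrackingEvent:
    """One observation report for a serialized item."""
    item_id: str    # unique identifier, e.g. an EPC-style serial (hypothetical format)
    location: str
    timestamp: str  # ISO 8601 string, so lexicographic order is time order
    status: str     # e.g. "arrived" or "departed"

def trace(events, item_id):
    """Reconstruct an item's history from the accumulated reports,
    ordered by time."""
    return sorted((e for e in events if e.item_id == item_id),
                  key=lambda e: e.timestamp)
```

Checking the reconstructed history for gaps or inconsistencies (a departure with no matching arrival, say) is exactly the verification task the article mentions for this approach.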
https://en.wikipedia.org/wiki/Track_and_trace
In graph theory, a part of mathematics, a k-partite graph is a graph whose vertices are (or can be) partitioned into k different independent sets. Equivalently, it is a graph that can be colored with k colors, so that no two endpoints of an edge have the same color. When k = 2 these are the bipartite graphs, and when k = 3 they are called the tripartite graphs. Bipartite graphs may be recognized in polynomial time but, for any k > 2, it is NP-complete, given an uncolored graph, to test whether it is k-partite.[1] However, in some applications of graph theory, a k-partite graph may be given as input to a computation with its coloring already determined; this can happen when the sets of vertices in the graph represent different types of objects. For instance, folksonomies have been modeled mathematically by tripartite graphs in which the three sets of vertices in the graph represent users of a system, resources that the users are tagging, and tags that the users have applied to the resources.[2] A complete k-partite graph is a k-partite graph in which there is an edge between every pair of vertices from different independent sets. These graphs are described by notation with a capital letter K subscripted by a sequence of the sizes of each set in the partition. For instance, K2,2,2 is the complete tripartite graph of a regular octahedron, which can be partitioned into three independent sets each consisting of two opposite vertices. A complete multipartite graph is a graph that is complete k-partite for some k.[3] The Turán graphs are the special case of complete multipartite graphs in which each two independent sets differ in size by at most one vertex. Complete k-partite graphs, complete multipartite graphs, and their complement graphs, the cluster graphs, are special cases of cographs, and can be recognized in polynomial time even when the partition is not supplied as part of the input.
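The polynomial-time recognition of bipartite (2-partite) graphs mentioned above is commonly done by attempting a proper 2-coloring with breadth-first search; this sketch is an illustration, not part of the article:

```python
from collections import deque

def is_bipartite(adj):
    """Decide whether an undirected graph, given as an adjacency-list dict,
    is bipartite, by attempting a proper 2-coloring with BFS."""
    color = {}
    for start in adj:
        if start in color:
            continue                      # component already colored
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # opposite color
                    queue.append(v)
                elif color[v] == color[u]:
                    return False              # monochromatic edge: odd cycle
    return True
```

A graph fails the test exactly when it contains an odd cycle, which is why the same idea does not extend to k > 2, where the recognition problem becomes NP-complete.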
https://en.wikipedia.org/wiki/Multipartite_graph
Virus Bulletin is a magazine about the prevention, detection and removal of malware and spam. It regularly features analyses of the latest virus threats, articles exploring new developments in the fight against viruses, interviews with anti-virus experts, and evaluations of current anti-malware products. Virus Bulletin was founded in 1989[1] as a monthly hardcopy magazine, and later distributed electronically in PDF format. The monthly publication format was discontinued in July 2014 and articles are now made available as standalone pieces on the website.[2] The magazine was originally located in the Sophos headquarters in Abingdon, Oxfordshire in the UK. It was co-founded and is owned by Jan Hruska and Peter Lammer, the co-founders of Sophos. Virus Bulletin claims to have full editorial independence and not to favour Sophos products in its tests and reviews.[3] Technical experts from anti-virus vendors have written articles for the magazine, which also conducts comparison tests of the detection rates of anti-virus software. Products which manage to detect 100% of the viruses in the wild, without false alarms, are given the VB100 award.[4] The magazine holds an annual conference (in late September or early October) for computer security professionals.[5] In recent years both magazine and conference have branched out to discuss anti-spam and other security issues as well as malware. Notable previous speakers include Mikko Hyppönen,[6] Eugene Kaspersky[7] and Graham Cluley, as well as representatives from all major anti-virus vendors.[8] Virus Bulletin was a founder member of the Anti-Malware Testing Standards Organization and remains a member today.
https://en.wikipedia.org/wiki/Virus_Bulletin
Psychographic filtering is located within a branch of collaborative filtering (user-based) which anticipates preferences based upon information received from a statistical survey, a questionnaire, or other forms of social research.[1] The term psychographic is derived from psychography, the study of associating and classifying people according to their psychological characteristics.[2] In marketing or social research, information received from a participant's response is compared with other participants' responses, and the comparison of that research is designed to predict preferences based upon similarities or differences in perception.[3] The participant should be inclined to share perceptions with people who have similar preferences. Suggestions are then provided to the participant based on their predicted preferences. Psychographic filtering differs from collaborative filtering in that it classifies similar people into a specific psychographic profile, where predictions of preferences are based upon that psychographic profile type.[3] Examples of psychological characteristics which determine a psychographic profile are personality, lifestyle, value system, behavior, experience and attitude. Research data is collected and analyzed through quantitative methods, yet the manner in which the questions are presented shares similarities with qualitative methods. Participants respond to questions offering perceived choice. The participants' choice is reflective of their psychological characteristics. This perceived choice (presented throughout the research method) is designed to score a participant and categorize that participant according to their respective score. The categories (psychographic profiles) used to assign people reflect personality characteristics which the researchers can analyze and use for their particular purposes.
Psychographic filtering and collaborative filtering are still within experimental stages and therefore have not been extensively used.[3] The techniques are most effective when they are used to indicate preference for a single, constant item (i.e. a horror book written by one author) rather than recommending a composition of characteristics (i.e. a newspaper article on war) which varies in perspective from publisher to publisher.[3] For the item to be perceived in accordance with the psychographic profile, it must be defined within a specific category, as opposed to encompassing many categories (where many preferences overlap).[3] Major problems with this type of research are whether it can be applied to items which are constantly changing in scope and updated regularly, and whether people will participate sufficiently to create psychographic profiles.
https://en.wikipedia.org/wiki/Psychographic_filtering
A disk editor is a computer program that allows its user to read, edit, and write raw data (at character or hexadecimal, byte levels) on disk drives (e.g., hard disks, USB flash disks or removable media such as floppy disks); as such, they are sometimes called sector editors, since the read/write routines built into the electronics of most disk drives require data to be read and written in chunks of sectors (usually 512 bytes). Many disk editors can also be used to edit the contents of a running computer's memory or a disk image. Unlike hex editors, which are used to edit files, a disk editor allows access to the underlying disk structures, such as the master boot record (MBR) or GUID Partition Table (GPT), file system, and directories. On some operating systems (like Unix or Unix-like systems) most hex editors can act as disk editors simply by opening block devices instead of regular files. Programmers can use disk editors to understand these structures and test whether their implementation (e.g. of a file system) works correctly. Sometimes these structures are edited in order to provide examples for teaching data recovery and forensics, or in an attempt to hide data to achieve privacy or hide data from casual examiners. However, modifying such data structures gives only a weak level of protection, and data encryption is the preferred method to achieve privacy. Some disk editors include special functions which enable more comfortable ways to edit and fix file systems or other disk-specific data structures. Furthermore, some include simple file browsers that can present the disk contents for partially corrupted file systems or file systems unknown to the operating system. These features can be used, for example, for file recovery. Disk editors for home computers of the 1980s were often included as part of utility software packages on floppies or cartridges.
The latter had the advantage of being instantly available at power-on and after resets, instead of having to be loaded or reloaded on the same disk drive that would later hold the floppy to be edited (the majority of home computer users possessed only one floppy disk drive at that time). Having the disk editor on cartridge also helped the user avoid editing or damaging the disk editor application disk by mistake. All 1980s disk editors strove to be better than DEBUG, contained in DOS. DEBUG could load, edit, and write one or more sectors from a floppy or hard disk via the BIOS. This permitted simple disk editing tasks such as saving and restoring the master boot record and other critical sectors, or even changing the active (i.e., boot) partition in the MBR. In an NTVDM under 1993's Windows NT, DEBUG could not access the physical drive holding the MBR of the operating system and so was essentially useless as a disk editor for the system drive. The Resource Kit and the support tools for some Windows NT versions contained DSKPROBE[1] as a very simple disk editor supporting the use and modification of the partition table in the MBR and related tasks. A partition editor (also called a partitioning utility) is a kind of utility software designed to view, create, modify, or delete disk partitions. A disk partition is a logical segment of the storage space on a storage device. By partitioning a large device into several partitions, it is possible to isolate various types of data from one another and to allow two or more operating systems to coexist. Features and capabilities of partition editors vary, but generally they can create several partitions on a disk, or one contiguous partition spanning several disks, at the discretion of the user. They can also shrink a partition to allow more partitions to be created on a storage device, or delete one and expand an adjacent partition into the freed space.
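The sector-level access and MBR structures described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a real disk editor: `disk.img` is a placeholder disk-image path introduced here for the example, and the partition-table layout assumed (four 16-byte entries at offset 446, a boot-flag byte, a type byte, little-endian start LBA and sector count, and the 0x55AA signature at offset 510) is the classic MBR format.

```python
import struct

SECTOR_SIZE = 512

def read_sector(path, lba):
    """Read one sector from a disk image (or, on Unix-like systems, a
    block device such as /dev/sda) at logical block address `lba`."""
    with open(path, "rb") as dev:
        dev.seek(lba * SECTOR_SIZE)
        return dev.read(SECTOR_SIZE)

def hex_dump(data, width=16):
    """Render raw bytes the way a sector editor displays them:
    offset, hex bytes, and a printable-ASCII column."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

def parse_mbr(sector):
    """Parse the four 16-byte partition entries at offset 446 of an
    MBR sector: boot flag, type byte, start LBA, sector count."""
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA MBR boot signature")
    entries = []
    for i in range(4):
        raw = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot, ptype, start_lba, num_sectors = struct.unpack("<B3xB3xII", raw)
        entries.append({"active": boot == 0x80, "type": ptype,
                        "start_lba": start_lba, "sectors": num_sectors})
    return entries

def set_active(sector, index):
    """Return a copy of the MBR sector with partition `index` marked
    active (boot flag 0x80) and the other three cleared -- the kind of
    edit the article mentions DEBUG and DSKPROBE being used for."""
    out = bytearray(sector)
    for i in range(4):
        out[446 + 16 * i] = 0x80 if i == index else 0x00
    return bytes(out)
```

A disk editor built on such routines would read a sector, present the hex dump for editing, and write the modified bytes back at the same offset; writing is deliberately omitted here, since a stray write to a real block device can destroy a partition table.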
https://en.wikipedia.org/wiki/Disk_editor
Radical trust is the confidence that any structured organization, such as a government, library, business, religion,[1] or museum, has in collaboration and empowerment within online communities. Specifically, it pertains to the use of blogs, wikis, and online social networking platforms by organizations to cultivate relationships with an online community that can then provide feedback and direction for the organization's interests. The organization 'trusts' and uses that input in its management. One of the first appearances of the notion of radical trust is in an infographic outlining the base principles of Web 2.0 in Tim O'Reilly's weblog post "What is Web 2.0", where radical trust is listed as the guiding example of trusting the validity of consumer-generated media.[2] This concept is considered an underlying assumption of Library 2.0. The adoption of radical trust by a library would require its management to let go of some of its control over the library and to build an organization without an end result in mind. The direction a library would take would be based on input provided by people through online communities. The resulting changes in the organization may be merely anecdotal in nature, making this method of organization management dramatically distinct from data-based or evidence-based management.[3] In marketing, Collin Douma further describes the notion of radical trust as a key mindset required for marketers and advertisers to enter the social media marketing space. Conventional marketing dictates maintaining control of messages to cause the greatest persuasion in consumer decisions, but Douma argued that in the social media space, brands would need to cede that control in order to build brand loyalty.[4][5]
https://en.wikipedia.org/wiki/Radical_trust