Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG) or computer animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, the result is more realistic and nuanced character animation than if the animation were created manually.

A facial motion capture database describes the coordinates or relative positions of reference points on the actor's face. The capture may be in two dimensions, in which case the capture process is sometimes called "expression tracking", or in three dimensions. Two-dimensional capture can be achieved using a single camera and capture software. This produces less sophisticated tracking, and is unable to fully capture three-dimensional motions such as head rotation. Three-dimensional capture is accomplished using multi-camera rigs or laser marker systems. Such systems are typically far more expensive, complicated, and time-consuming to use. Two predominant technologies exist: marker-based and markerless tracking systems.

Facial motion capture is related to body motion capture, but is more challenging due to the higher resolution required to detect and track the subtle expressions produced by small movements of the eyes and lips. These movements are often less than a few millimeters, requiring even greater resolution and fidelity and different filtering techniques than are usually used in full-body capture. The additional constraints of the face also allow more opportunities for using models and rules.

Facial expression capture is similar to facial motion capture. It is a process of using visual or mechanical means to manipulate computer-generated characters with input from human faces, or to recognize emotions from a user.

One of the first papers discussing performance-driven animation was published by Lance Williams in 1990. There, he describes "a means of acquiring the expressions of real faces, and applying them to computer-generated faces".[1]

Traditional marker-based systems apply up to 350 markers to the actor's face and track the marker movement with high-resolution cameras. This has been used on movies such as The Polar Express and Beowulf to allow an actor such as Tom Hanks to drive the facial expressions of several different characters. Unfortunately this is relatively cumbersome and makes the actor's expressions overly driven once the smoothing and filtering have taken place. Next-generation systems such as CaptiveMotion utilize offshoots of the traditional marker-based system with higher levels of detail. Active LED marker technology is currently being used to drive facial animation in real time to provide user feedback.

Markerless technologies use the features of the face, such as nostrils, the corners of the lips and eyes, and wrinkles, and then track them. This technology is discussed and demonstrated at CMU,[2] IBM,[3] the University of Manchester (where much of this started with Tim Cootes,[4] Gareth Edwards and Chris Taylor) and other locations, using active appearance models, principal component analysis, eigentracking, deformable surface models and other techniques to track the desired facial features from frame to frame. This technology is much less cumbersome, and allows greater expression for the actor.
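A minimal sketch of the frame-to-frame feature-tracking idea, using OpenCV's Shi-Tomasi corner detection and Lucas-Kanade optical flow. The video file name and every parameter value are illustrative assumptions; a production system would use face-specific models such as the active appearance models mentioned above rather than generic corners.

import cv2

# Markerless-tracking sketch: detect strong features in the first frame,
# then follow them frame to frame with pyramidal Lucas-Kanade optical flow.
# "face.mp4" and all parameter values are illustrative assumptions.
cap = cv2.VideoCapture("face.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners stand in for trackable facial features
# (lip corners, nostrils, wrinkles).
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track each feature from the previous frame into the current one.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    # Keep only the features that were successfully tracked.
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
cap.release()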
These vision-based approaches also have the ability to track pupil movement, eyelids, teeth occlusion by the lips, and the tongue, which are obvious problems in most computer-animated features. Typical limitations of vision-based approaches are resolution and frame rate, both of which are becoming less of an issue as high-speed, high-resolution CMOS cameras become available from multiple sources.

The technology for markerless face tracking is related to that in a facial recognition system, since a facial recognition system can potentially be applied sequentially to each frame of video, resulting in face tracking. For example, the Neven Vision system[5] (formerly Eyematics, now acquired by Google) allowed real-time 2D face tracking with no person-specific training; their system was also amongst the best-performing facial recognition systems in the U.S. Government's 2002 Facial Recognition Vendor Test (FRVT). On the other hand, some recognition systems do not explicitly track expressions or even fail on non-neutral expressions, and so are not suitable for tracking. Conversely, systems such as deformable surface models pool temporal information to disambiguate and obtain more robust results, and thus could not be applied from a single photograph.

Markerless face tracking has progressed to commercial systems such as Image Metrics, which has been applied in movies such as The Matrix sequels[6] and The Curious Case of Benjamin Button. The latter used the Mova system to capture a deformable facial model, which was then animated with a combination of manual and vision tracking.[7] Avatar was another prominent performance-capture movie; however, it used painted markers rather than being markerless. Dynamixyz is another commercial system currently in use.

Markerless systems can be classified according to several distinguishing criteria. To date, no system is ideal with respect to all these criteria. For example, the Neven Vision system was fully automatic and required no hidden patterns or per-person training, but was 2D. The Face/Off system[8] is 3D, automatic, and real-time, but requires projected patterns.

Digital video-based methods are becoming increasingly preferred, as mechanical systems tend to be cumbersome and difficult to use. Using digital cameras, the input user's expressions are processed to provide the head pose, which allows the software to then find the eyes, nose and mouth. The face is initially calibrated using a neutral expression. Then, depending on the architecture, the eyebrows, eyelids, cheeks, and mouth can be processed as differences from the neutral expression. This is done by looking for the edges of the lips, for instance, and recognizing them as a unique object. Often contrast-enhancing makeup or markers are worn, or some other method is used to make the processing faster. Like voice recognition, the best techniques are only good about 90 percent of the time, requiring a great deal of tweaking by hand, or tolerance for errors.

Since computer-generated characters don't actually have muscles, different techniques are used to achieve the same results. Some animators create bones or objects that are controlled by the capture software, and move them accordingly, which, when the character is rigged correctly, gives a good approximation. Since faces are very elastic, this technique is often mixed with others, adjusting the weights differently for the skin elasticity and other factors depending on the desired expressions.
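The neutral-expression calibration described above can be made concrete with a small sketch: landmark displacements from the calibrated neutral pose are expressed as weights over a set of rig controls. The landmark count, the displacement basis, and the least-squares mapping are all illustrative assumptions, not any particular product's pipeline.

import numpy as np

# Sketch: map tracked landmark displacements to rig (blendshape) weights.
# Everything here (landmark count, control basis, least-squares solve)
# is a hypothetical stand-in for a real capture pipeline.

N_LANDMARKS = 68                                  # hypothetical tracked points

# Calibration: landmark positions for the neutral expression, shape (N, 2).
neutral = np.random.rand(N_LANDMARKS, 2)          # stand-in for real data

# Each column of B is the landmark displacement produced by one rig control
# at full strength (e.g. "jaw open", "brow raise"): shape (2N, n_controls).
B = np.random.rand(2 * N_LANDMARKS, 4)            # stand-in basis

def solve_weights(tracked):
    """Express the current frame as a weighted sum of control displacements."""
    delta = (tracked - neutral).reshape(-1)       # difference from neutral
    w, *_ = np.linalg.lstsq(B, delta, rcond=None) # least-squares fit
    return np.clip(w, 0.0, 1.0)                   # rig weights stay in [0, 1]

# A frame that is 10% of control 0 should recover weights close to [0.1, 0, 0, 0].
frame_landmarks = neutral + 0.1 * B[:, 0].reshape(N_LANDMARKS, 2)
print(solve_weights(frame_landmarks))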
Several commercial companies are developing products that have been used, but are rather expensive. It was long expected that facial capture would become a major input device for computer games once the software became available in an affordable format; for years, research produced results that were only almost usable.

The first application that gained wide adoption was communication: initially video telephony and multimedia messaging, and later in 3D with mixed-reality headsets. With the advance of machine learning, compute power and advanced sensors, especially on mobile phones, facial motion capture technology became widely available. Two notable examples are Snapchat's lens feature and Apple's Memoji,[9] which can be used to record messages with avatars or live via the FaceTime app. With these applications (and many others), most modern mobile phones are capable of performing real-time facial motion capture.

More recently, real-time facial motion capture combined with realistic 3D avatars was introduced to enable immersive communication in mixed reality (MR) and virtual reality (VR). Meta demonstrated their Codec Avatars communicating via their MR headset Meta Quest Pro to record a podcast with two remote participants.[10] Apple's MR headset Apple Vision Pro also supports real-time facial motion capture that can be used with applications such as FaceTime. Real-time communication applications prioritize low latency to facilitate natural conversation and ease of use, aiming to make the technology accessible to a broad audience. These considerations may limit the possible accuracy of the motion capture.
https://en.wikipedia.org/wiki/Facial_motion_capture
A5/1 is a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard. It is one of several implementations of the A5 security protocol. It was initially kept secret, but became public knowledge through leaks and reverse engineering. A number of serious weaknesses in the cipher have been identified.

A5/1 is used in Europe and the United States. A5/2 was a deliberate weakening of the algorithm for certain export regions.[1] A5/1 was developed in 1987, when GSM was not yet considered for use outside Europe, and A5/2 was developed in 1989. Though both were initially kept secret, the general design was leaked in 1994, and the algorithms were entirely reverse engineered in 1999 by Marc Briceno from a GSM telephone. In 2000, around 130 million GSM customers relied on A5/1 to protect the confidentiality of their voice communications.

Security researcher Ross Anderson reported in 1994 that "there was a terrific row between the NATO signal intelligence agencies in the mid-1980s over whether GSM encryption should be strong or not. The Germans said it should be, as they shared a long border with the Warsaw Pact; but the other countries didn't feel this way, and the algorithm as now fielded is a French design."[2]

A GSM transmission is organised as sequences of bursts. In a typical channel and in one direction, one burst is sent every 4.615 milliseconds and contains 114 bits available for information. A5/1 is used to produce for each burst a 114-bit sequence of keystream which is XORed with the 114 bits prior to modulation. A5/1 is initialised using a 64-bit key together with a publicly known 22-bit frame number. Older fielded GSM implementations using Comp128v1 for key generation had 10 of the key bits fixed at zero, resulting in an effective key length of 54 bits. This weakness was rectified with the introduction of Comp128v3, which yields proper 64-bit keys. When operating in GPRS/EDGE mode, higher-bandwidth radio modulation allows for larger 348-bit frames, and A5/3 is then used in a stream cipher mode to maintain confidentiality.

A5/1 is based around a combination of three linear-feedback shift registers (LFSRs) with irregular clocking. The three shift registers are specified as follows: R1 is 19 bits long, with feedback taps at bits 18, 17, 16 and 13 and clocking bit 8; R2 is 22 bits long, with taps at bits 21 and 20 and clocking bit 10; R3 is 23 bits long, with taps at bits 22, 21, 20 and 7 and clocking bit 10. These degrees were not chosen at random: since the degrees of the three registers are relatively prime, the period of this generator is the product of the periods of the three registers, approximately 2^64 bits. The bits are indexed with the least significant bit (LSB) as 0.

The registers are clocked in a stop/go fashion using a majority rule. Each register has an associated clocking bit. At each cycle, the clocking bits of all three registers are examined and the majority bit is determined. A register is clocked if its clocking bit agrees with the majority bit. Hence at each step at least two registers are clocked, and each register steps with probability 3/4.

Initially, the registers are set to zero. Then for 64 cycles, the 64-bit secret key K is mixed in according to the following scheme: in cycle 0 <= i < 64, the i-th key bit is added to the least significant bit of each register using XOR, and each register is then clocked. Similarly, the 22 bits of the frame number are added in 22 cycles. Then the entire system is clocked using the normal majority clocking mechanism for 100 cycles, with the output discarded.
After this is completed, the cipher is ready to produce two 114-bit sequences of output keystream: the first 114 bits for downlink, the last 114 for uplink.

A number of attacks on A5/1 have been published, and the American National Security Agency is able to routinely decrypt A5/1 messages according to released internal documents.[3] Some attacks require an expensive preprocessing stage, after which the cipher can be broken in minutes or seconds. Originally, the weaknesses were passive attacks using the known plaintext assumption. In 2003, more serious weaknesses were identified which can be exploited in the ciphertext-only scenario, or by an active attacker. In 2006 Elad Barkan, Eli Biham and Nathan Keller demonstrated attacks against A5/1, A5/3, or even GPRS that allow attackers to tap GSM mobile phone conversations and decrypt them either in real time, or at any later time.

According to professor Jan Arild Audestad, at the standardization process which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years. It is now believed that 128 bits would in fact also still be secure until the advent of quantum computing. Audestad, Peter van der Arend, and Thomas Haug say that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 54 bits.[4]

The first attack on A5/1 was proposed by Ross Anderson in 1994. Anderson's basic idea was to guess the complete content of the registers R1 and R2 and about half of the register R3. In this way the clocking of all three registers is determined, and the second half of R3 can be computed.[2]

In 1997, Golić presented an attack based on solving sets of linear equations which has a time complexity of 2^40.16 (the units are in terms of the number of solutions of a system of linear equations which are required).

In 2000, Alex Biryukov, Adi Shamir and David Wagner showed that A5/1 can be cryptanalysed in real time using a time-memory tradeoff attack,[5] based on earlier work by Jovan Golić.[6] One tradeoff allows an attacker to reconstruct the key in one second from two minutes of known plaintext, or in several minutes from two seconds of known plaintext, but he must first complete an expensive preprocessing stage which requires 2^48 steps to compute around 300 GB of data. Several tradeoffs between preprocessing, data requirements, attack time and memory complexity are possible.

The same year, Eli Biham and Orr Dunkelman also published an attack on A5/1 with a total work complexity of 2^39.91 A5/1 clockings, given 2^20.8 bits of known plaintext. The attack requires 32 GB of data storage after a precomputation stage of 2^38.[7]

Ekdahl and Johansson published an attack on the initialisation procedure which breaks A5/1 in a few minutes using two to five minutes of conversation plaintext.[8] This attack does not require a preprocessing stage. In 2004, Maximov et al. improved this result to an attack requiring "less than one minute of computations, and a few seconds of known conversation". The attack was further improved by Elad Barkan and Eli Biham in 2005.[9]

In 2003, Barkan et al. published several attacks on GSM encryption.[10] The first is an active attack: GSM phones can be convinced to use the much weaker A5/2 cipher briefly.
A5/2 can be broken easily, and the phone uses the same key as for the stronger A5/1 algorithm. A second attack on A5/1 is outlined: a ciphertext-only time-memory tradeoff attack which requires a large amount of precomputation.

In 2006, Elad Barkan, Eli Biham and Nathan Keller published the full version of their 2003 paper, with attacks against A5/X ciphers. The authors claim:[11]

We present a very practical ciphertext-only cryptanalysis of GSM encrypted communication, and various active attacks on the GSM protocols. These attacks can even break into GSM networks that use "unbreakable" ciphers. We first describe a ciphertext-only attack on A5/2 that requires a few dozen milliseconds of encrypted off-the-air cellular conversation and finds the correct key in less than a second on a personal computer. We extend this attack to a (more complex) ciphertext-only attack on A5/1. We then describe new (active) attacks on the protocols of networks that use A5/1, A5/3, or even GPRS. These attacks exploit flaws in the GSM protocols, and they work whenever the mobile phone supports a weak cipher such as A5/2. We emphasize that these attacks are on the protocols, and are thus applicable whenever the cellular phone supports a weak cipher, for example, they are also applicable for attacking A5/3 networks using the cryptanalysis of A5/1. Unlike previous attacks on GSM that require unrealistic information, like long known plaintext periods, our attacks are very practical and do not require any knowledge of the content of the conversation. Furthermore, we describe how to fortify the attacks to withstand reception errors. As a result, our attacks allow attackers to tap conversations and decrypt them either in real-time, or at any later time.

In 2007 the Universities of Bochum and Kiel started a research project to create a massively parallel FPGA-based cryptographic accelerator, COPACOBANA. COPACOBANA was the first commercially available solution[12] using fast time-memory trade-off techniques that could be used to attack the popular A5/1 and A5/2 algorithms, used in GSM voice encryption, as well as the Data Encryption Standard (DES). It also enables brute-force attacks against GSM, eliminating the need for large precomputed lookup tables.

In 2008, the group The Hackers Choice launched a project to develop a practical attack on A5/1. The attack requires the construction of a large look-up table of approximately 3 terabytes. Together with the scanning capabilities developed as part of the sister project, the group expected to be able to record any GSM call or SMS encrypted with A5/1, and within about 3-5 minutes derive the encryption key and hence listen to the call and read the SMS in clear. But the tables weren't released.[13]

A similar effort, the A5/1 Cracking Project, was announced at the 2009 Black Hat security conference by cryptographers Karsten Nohl and Sascha Krißler. It created the look-up tables using Nvidia GPGPUs via a peer-to-peer distributed computing architecture. Starting in the middle of September 2009, the project ran the equivalent of 12 Nvidia GeForce GTX 260 cards. According to the authors, the approach can be used on any cipher with a key size of up to 64 bits.[14]

In December 2009, the A5/1 Cracking Project attack tables for A5/1 were announced by Chris Paget and Karsten Nohl. The tables use a combination of compression techniques, including rainbow tables and distinguished point chains.
These tables constituted only parts of the 1.7 TB completed table and had been computed during three months using 40 distributed CUDA nodes, and were then published over BitTorrent.[13][14][15][16] More recently the project has announced a switch to faster ATI Evergreen code, together with a change in the format of the tables, and Frank A. Stevenson announced breaks of A5/1 using the ATI-generated tables.[17]

Documents leaked by Edward Snowden in 2013 state that the NSA "can process encrypted A5/1".[18]

Since the degrees of the three LFSRs are relatively prime, the period of this generator is the product of the periods of the three LFSRs, approximately 2^64 bits. One might think of using A5/1 as a pseudo-random generator with a 64-bit initialization seed (key size), but it is not reliable: it loses its randomness after only 8 MB (which represents the period of the largest of the three registers).[19]
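Putting the register specification and clocking rules described above together, the following Python sketch generates one burst's worth of keystream. It follows the prose description here (XOR the key or frame bit into each register's LSB, then clock all three); published reference implementations differ slightly in the exact injection convention, so treat this as illustrative rather than interoperable with fielded GSM equipment.

# Minimal A5/1 keystream sketch. (length, feedback taps, clocking bit):
SPEC = [
    (19, (18, 17, 16, 13), 8),   # R1
    (22, (21, 20), 10),          # R2
    (23, (22, 21, 20, 7), 10),   # R3
]

def clock_lfsr(reg, length, taps):
    """Shift one register left; the new LSB is the XOR of the tap bits."""
    fb = 0
    for t in taps:
        fb ^= (reg >> t) & 1
    return ((reg << 1) & ((1 << length) - 1)) | fb

def step(regs, forced=False):
    """One cycle: clock by the majority rule (or all three) and emit one bit."""
    clk = [(regs[i] >> SPEC[i][2]) & 1 for i in range(3)]
    maj = (clk[0] & clk[1]) | (clk[0] & clk[2]) | (clk[1] & clk[2])
    out = 0
    for i, (length, taps, _) in enumerate(SPEC):
        if forced or clk[i] == maj:
            regs[i] = clock_lfsr(regs[i], length, taps)
        out ^= regs[i] >> (length - 1)   # output bit: XOR of the three MSBs
    return out & 1

def keystream(key64, frame22, nbits=228):
    regs = [0, 0, 0]
    # Mix in 64 key bits, then 22 frame bits: XOR each bit into every
    # register's LSB, then clock all three (majority rule suspended).
    for bits, n in ((key64, 64), (frame22, 22)):
        for i in range(n):
            b = (bits >> i) & 1
            for j in range(3):
                regs[j] ^= b
            step(regs, forced=True)
    for _ in range(100):                 # 100 warm-up cycles, output discarded
        step(regs)
    return [step(regs) for _ in range(nbits)]   # 114 downlink + 114 uplink

print(keystream(0x0123456789ABCDEF, 0x2F)[:16])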
https://en.wikipedia.org/wiki/A5/1
In mathematics, a saddle point or minimax point[1] is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function.[2] An example of a saddle point is a critical point with a relative minimum along one axial direction (between peaks) and a relative maximum along the crossing axis. However, a saddle point need not be in this form. For example, the function f(x, y) = x^2 + y^3 has a critical point at (0, 0) that is a saddle point, since it is neither a relative maximum nor a relative minimum, but it does not have a relative maximum or relative minimum in the y-direction.

The name derives from the fact that the prototypical example in two dimensions is a surface that curves up in one direction, and curves down in a different direction, resembling a riding saddle. In terms of contour lines, a saddle point in two dimensions gives rise to a contour map with, in principle, a pair of lines intersecting at the point. Such intersections are rare in contour maps drawn with discrete contour lines, such as ordnance survey maps, as the height of the saddle point is unlikely to coincide with the integer multiples used in such maps. Instead, the saddle point appears as a blank space in the middle of four sets of contour lines that approach and veer away from it. For a basic saddle point, these sets occur in pairs, with an opposing high pair and an opposing low pair positioned in orthogonal directions. The critical contour lines generally do not have to intersect orthogonally.

A simple criterion for checking if a given stationary point of a real-valued function F(x, y) of two real variables is a saddle point is to compute the function's Hessian matrix at that point: if the Hessian is indefinite, then that point is a saddle point. For example, the Hessian matrix of the function z = x^2 - y^2 at the stationary point (x, y, z) = (0, 0, 0) is the matrix

[ 2   0 ]
[ 0  -2 ]

which is indefinite. Therefore, this point is a saddle point. This criterion gives only a sufficient condition. For example, the point (0, 0, 0) is a saddle point for the function z = x^4 - y^4, but the Hessian matrix of this function at the origin is the null matrix, which is not indefinite.

In the most general terms, a saddle point for a smooth function (whose graph is a curve, surface or hypersurface) is a stationary point such that the curve/surface/etc. in the neighborhood of that point is not entirely on any side of the tangent space at that point.

In a domain of one dimension, a saddle point is a point which is both a stationary point and a point of inflection. Since it is a point of inflection, it is not a local extremum.

A saddle surface is a smooth surface containing one or more saddle points. Classical examples of two-dimensional saddle surfaces in Euclidean space are second-order surfaces, the hyperbolic paraboloid z = x^2 - y^2 (which is often referred to as "the saddle surface" or "the standard saddle surface") and the hyperboloid of one sheet. The Pringles potato chip or crisp is an everyday example of a hyperbolic paraboloid shape. Saddle surfaces have negative Gaussian curvature, which distinguishes them from convex/elliptical surfaces, which have positive Gaussian curvature.
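The Hessian criterion is easy to check numerically. A minimal sketch using eigenvalue signs to test indefiniteness, run on the two examples from the text:

import numpy as np

# Classify a stationary point from its (symmetric) Hessian: eigenvalues of
# mixed sign mean the Hessian is indefinite, hence a saddle point.

def classify(hessian):
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"        # indefinite Hessian
    return "inconclusive"            # a zero eigenvalue: the test is silent

# z = x^2 - y^2 at the origin: Hessian diag(2, -2) -> saddle point.
print(classify(np.diag([2.0, -2.0])))    # saddle point
# z = x^4 - y^4 at the origin: null Hessian -> inconclusive, even though the
# origin is in fact a saddle point; the criterion is only sufficient.
print(classify(np.zeros((2, 2))))        # inconclusive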
A classical third-order saddle surface is the monkey saddle.[3]

In a two-player zero-sum game defined on a continuous space, the equilibrium point is a saddle point.

For a second-order linear autonomous system, a critical point is a saddle point if the characteristic equation has one positive and one negative real eigenvalue.[4]

In optimization subject to equality constraints, the first-order conditions describe a saddle point of the Lagrangian.

In dynamical systems, if the dynamic is given by a differentiable map f, then a point is hyperbolic if and only if the differential of f^n (where n is the period of the point) has no eigenvalue on the (complex) unit circle when computed at the point. Then a saddle point is a hyperbolic periodic point whose stable and unstable manifolds have a dimension that is not zero.

A saddle point of a matrix is an element which is both the largest element in its column and the smallest element in its row.
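The matrix version of the definition lends itself to a short sketch: scan for entries that are simultaneously a column maximum and a row minimum. The example matrix is illustrative.

import numpy as np

def matrix_saddle_points(a):
    """Entries that are the largest in their column and smallest in their row."""
    col_max = a.max(axis=0)
    row_min = a.min(axis=1)
    return [(i, j) for i in range(a.shape[0])
                   for j in range(a.shape[1])
                   if a[i, j] == col_max[j] == row_min[i]]

# Entry (1, 0) = 3 is the maximum of column 0 and the minimum of row 1,
# so it is the matrix's saddle point (cf. zero-sum game equilibria).
a = np.array([[1, 9, 7],
              [3, 5, 6],
              [2, 4, 8]])
print(matrix_saddle_points(a))   # [(1, 0)]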
https://en.wikipedia.org/wiki/Saddle_point
Dynamic network analysis (DNA) is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory. Dynamic networks are a function of time (modeled as a subset of the real numbers) to a set of graphs; for each time point there is a graph. This is akin to the definition of dynamical systems, in which the function is from time to an ambient space, except that instead of an ambient space, time is mapped to relationships between pairs of vertices.[1]

There are two aspects of this field. The first is the statistical analysis of DNA data. The second is the utilization of simulation to address issues of network dynamics. DNA networks vary from traditional social networks in that they are larger, dynamic, multi-mode, multi-plex networks, and may contain varying levels of uncertainty. The main difference between DNA and SNA is that DNA takes into account interactions of social features that condition the structure and behavior of networks. DNA is tied to temporal analysis, but temporal analysis is not necessarily tied to DNA, as changes in networks sometimes result from external factors which are independent of the social features found in networks. One of the earliest and most notable cases of the use of DNA is Sampson's monastery study, where he took snapshots of the same network at different intervals and observed and analyzed the evolution of the network.[2]

DNA statistical tools are generally optimized for large-scale networks and admit the analysis of multiple networks simultaneously, in which there are multiple types of nodes (multi-node) and multiple types of links (multi-plex). Multi-node multi-plex networks are generally referred to as meta-networks or high-dimensional networks. In contrast, SNA statistical tools focus on single or at most two-mode data and facilitate the analysis of only one type of link at a time. DNA statistical tools tend to provide more measures to the user, because they have measures that use data drawn from multiple networks simultaneously.

Latent space models (Sarkar and Moore, 2005)[3] and agent-based simulation are often used to examine dynamic social networks (Carley et al., 2009).[4] From a computer simulation perspective, nodes in DNA are like atoms in quantum theory: nodes can be, though need not be, treated as probabilistic. Whereas nodes in a traditional SNA model are static, nodes in a DNA model have the ability to learn. Properties change over time; nodes can adapt: a company's employees can learn new skills and increase their value to the network; or, capture one terrorist and three more are forced to improvise. Change propagates from one node to the next, and so on. DNA adds the element of a network's evolution and considers the circumstances under which change is likely to occur.

There are three main features of dynamic network analysis that distinguish it from standard social network analysis. First, rather than just using social networks, DNA looks at meta-networks. Second, agent-based modeling and other forms of simulation are often used to explore how networks evolve and adapt, as well as the impact of interventions on those networks. Third, the links in the network are not binary; in fact, in many cases they represent the probability that there is a link.

Complex information about object relationships can be effectively condensed into low-dimensional embeddings in a latent space.[5] Dynamic systems, unlike static ones, involve temporal changes.
Differences in learned representations over time in a dynamic system can arise either from actual changes or from arbitrary alterations that do not affect the metrics in the latent space, with the former reflecting the system's stability and the latter linked to the alignment of embeddings.[6] In essence, the stability of the system defines its dynamics, while misalignment signifies irrelevant changes in the latent space. Dynamic embeddings are considered aligned when variations between embeddings at different times accurately represent the system's actual changes, not meaningless alterations in the latent space. The matter of stability and alignment of dynamic embeddings holds significant importance in various tasks reliant on temporal changes within the latent space. These tasks encompass future metadata prediction, temporal evolution, dynamic visualization, and obtaining average embeddings, among others.

A meta-network is a multi-mode, multi-link, multi-level network. Multi-mode means that there are many types of nodes; e.g., people and locations. Multi-link means that there are many types of links; e.g., friendship and advice. Multi-level means that some nodes may be members of other nodes, as in a network composed of people and organizations where one of the links records who is a member of which organization. While different researchers use different modes, common modes reflect who, what, when, where, why and how. A simple example of a meta-network is the PCANS formulation with people, tasks, and resources.[7] A more detailed formulation considers people, tasks, resources, knowledge, and organizations.[8] The ORA tool was developed to support meta-network analysis.[9]
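The "function from time points to graphs" definition above can be sketched directly with networkx: keep one snapshot per time point and trace a standard SNA measure across snapshots. The edge lists are made-up illustrative data.

import networkx as nx

# A dynamic network as a mapping from time points to graphs: one snapshot
# per time point, then a static SNA measure computed per snapshot.
snapshots = {
    0: nx.Graph([("ann", "bob"), ("bob", "carol")]),
    1: nx.Graph([("ann", "bob"), ("bob", "carol"), ("ann", "carol")]),
    2: nx.Graph([("bob", "carol"), ("carol", "dave")]),
}

for t in sorted(snapshots):
    centrality = nx.degree_centrality(snapshots[t])
    print(t, {n: round(c, 2) for n, c in sorted(centrality.items())})
# Comparing the per-snapshot measures shows how each node's position in the
# network evolves over time, the basic move behind temporal network analysis.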
https://en.wikipedia.org/wiki/Dynamic_network_analysis
The Hindu-Arabic numeral system (also known as the Indo-Arabic numeral system,[1] Hindu numeral system, and Arabic numeral system)[2][note 1] is a positional base-ten numeral system for representing integers; its extension to non-integers is the decimal numeral system, which is presently the most common numeral system.

The system was invented between the 1st and 4th centuries by Indian mathematicians. By the 9th century, the system was adopted by Arabic mathematicians, who extended it to include fractions. It became more widely known through the writings in Arabic of the Persian mathematician Al-Khwārizmī[3] (On the Calculation with Hindu Numerals, c. 825) and the Arab mathematician Al-Kindi (On the Use of the Hindu Numerals, c. 830). The system had spread to medieval Europe by the High Middle Ages, notably following Fibonacci's 13th-century Liber Abaci; until the evolution of the printing press in the 15th century, use of the system in Europe was mainly confined to Northern Italy.[4]

It is based upon ten glyphs representing the numbers from zero to nine, and allows representing any natural number by a unique sequence of these glyphs. The symbols (glyphs) used to represent the system are in principle independent of the system itself. The glyphs in actual use are descended from Brahmi numerals and have split into various typographical variants since the Middle Ages. These symbol sets can be divided into three main families: Western Arabic numerals, used in the Greater Maghreb and in Europe; Eastern Arabic numerals, used in the Middle East; and the Indian numerals in various scripts, used in the Indian subcontinent.

Sometime around 600 CE, a change began in the writing of dates in the Brāhmī-derived scripts of India and Southeast Asia, transforming from an additive system with separate numerals for numbers of different magnitudes to a positional place-value system with a single set of glyphs for 1-9 and a dot for zero, gradually displacing additive expressions of numerals over the following several centuries.[5]

When this system was adopted and extended by medieval Arabs and Persians, they called it al-ḥisāb al-hindī ("Indian arithmetic"). These numerals were gradually adopted in Europe starting around the 10th century, probably transmitted by Arab merchants;[6] medieval and Renaissance European mathematicians generally recognized them as Indian in origin;[7] however, a few influential sources credited them to the Arabs, and they eventually came to be generally known as "Arabic numerals" in Europe.[8] According to some sources, this number system may have originated in Chinese Shang numerals (1200 BCE), which was also a decimal positional numeral system.[9]

The Hindu-Arabic system is designed for positional notation in a decimal system. In a more developed form, positional notation also uses a decimal marker (at first a mark over the ones digit, but now more commonly a decimal point or a decimal comma, which separates the ones place from the tenths place), and also a symbol for "these digits recur ad infinitum". In modern usage, this latter symbol is usually a vinculum (a horizontal line placed over the repeating digits). In this more developed form, the numeral system can symbolize any rational number using only 13 symbols (the ten digits, decimal marker, vinculum, and a prepended minus sign to indicate a negative number).
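What "positional" means can be shown in a few lines of Python: the same ten glyphs denote different numbers depending on where they stand, because each position contributes its digit times a power of ten.

def place_value(digits):
    """Value of a digit string under base-ten positional notation."""
    total = 0
    for d in digits:                 # most-significant digit first
        total = total * 10 + int(d)  # shift earlier digits up one place
    return total

# 4069 = 4*10^3 + 0*10^2 + 6*10^1 + 9*10^0. The same four glyphs in another
# order, e.g. "9604", denote a different number: position carries the meaning.
print(place_value("4069"))                                          # 4069
print(sum(int(d) * 10**i for i, d in enumerate(reversed("4069"))))  # 4069
print(place_value("9604"))                                          # 9604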
Although generally found in text written with the Arabic abjad ("alphabet"), which is written right-to-left, numbers written with these numerals place the most significant digit to the left, so they read from left to right (though digits are not always said in order from most to least significant[10]). The requisite changes in reading direction are found in text that mixes left-to-right writing systems with right-to-left systems.

Various symbol sets are used to represent numbers in the Hindu-Arabic numeral system, most of which developed from the Brahmi numerals. The symbols used to represent the system have split into various typographical variants since the Middle Ages, arranged in three main groups.

The Brahmi numerals at the basis of the system predate the Common Era. They replaced the earlier Kharosthi numerals, used since the 4th century BCE. Brahmi and Kharosthi numerals were used alongside one another in the Maurya Empire period, both appearing on the 3rd century BCE edicts of Ashoka.[11]

Buddhist inscriptions from around 300 BCE use the symbols that became 1, 4, and 6. One century later, their use of the symbols that became 2, 4, 6, 7, and 9 was recorded. These Brahmi numerals are the ancestors of the Hindu-Arabic glyphs 1 to 9, but they were not used as a positional system with a zero; rather, there were separate numerals for each of the tens (10, 20, 30, etc.).

The actual numeral system, including positional notation and use of zero, is in principle independent of the glyphs used, and significantly younger than the Brahmi numerals. The place-value system is used in the Bakhshali manuscript, the earliest leaves being radiocarbon dated to the period 224-383 CE.[12] The development of the positional decimal system has its origins in Indian mathematics during the Gupta period. Around 500, the astronomer Aryabhata used the word kha ("emptiness") to mark "zero" in tabular arrangements of digits. The 7th-century Brahmasphuta Siddhanta contains a comparatively advanced understanding of the mathematical role of zero. The Sanskrit translation of the lost 5th-century Prakrit Jaina cosmological text Lokavibhaga may preserve an early instance of the positional use of zero.[13]

The first dated and undisputed inscription showing the use of a symbol for zero appears on a stone inscription found at the Chaturbhuja Temple at Gwalior in India, dated 876 CE.[14]

These Indian developments were taken up in Islamic mathematics in the 8th century, as recorded in al-Qifti's Chronology of the Scholars (early 13th century).[15]

In 10th-century Islamic mathematics, the system was extended to include fractions, as recorded in a treatise by the Abbasid Caliphate mathematician Abu'l-Hasan al-Uqlidisi, who was the first to describe positional decimal fractions.[16] According to J. L. Berggren, the Muslims were the first to represent numbers as we do, since they were the ones who initially extended this system of numeration to represent parts of the unit by decimal fractions, something that the Hindus did not accomplish.
Thus, we refer to the system as "Hindu-Arabic" rather appropriately.[17][18]

The numeral system came to be known to both the Persian mathematician Khwarizmi, who wrote a book, On the Calculation with Hindu Numerals, in about 825 CE, and the Arab mathematician Al-Kindi, who wrote a book, On the Use of the Hindu Numerals (كتاب في استعمال العداد الهندي [kitāb fī isti'māl al-'adād al-hindī]), around 830 CE. The Persian scientist Kushyar Gilani's Kitab fi usul hisab al-hind (Principles of Hindu Reckoning) is one of the oldest surviving manuscripts using the Hindu numerals.[19] These books are principally responsible for the diffusion of the Hindu system of numeration throughout the Islamic world and ultimately also to Europe.

In Christian Europe, the first mention and representation of Hindu-Arabic numerals (from one to nine, without zero) is in the Codex Vigilanus (aka Albeldensis), an illuminated compilation of various historical documents from the Visigothic period in Spain, written in the year 976 CE by three monks of the Riojan monastery of San Martín de Albelda. Between 967 and 969 CE, Gerbert of Aurillac discovered and studied Arab science in the Catalan abbeys. Later he obtained from these places the book De multiplicatione et divisione (On multiplication and division). After becoming Pope Sylvester II in the year 999 CE, he introduced a new model of abacus, the so-called Abacus of Gerbert, by adopting tokens representing Hindu-Arabic numerals, from one to nine.

Leonardo Fibonacci brought this system to Europe. His book Liber Abaci introduced Modus Indorum (the method of the Indians), today known as the Hindu-Arabic numeral system or base-10 positional notation, the use of zero, and the decimal place system to the Latin world. The numeral system came to be called "Arabic" by the Europeans. It was used in European mathematics from the 12th century, and entered common use from the 15th century to replace Roman numerals.[20][21]

The familiar shape of the Western Arabic glyphs as now used with the Latin alphabet (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) is the product of the late 15th to early 16th century, when they entered early typesetting. Muslim scientists used the Babylonian numeral system, and merchants used the Abjad numerals, a system similar to the Greek numeral system and the Hebrew numeral system. Similarly, Fibonacci's introduction of the system to Europe was restricted to learned circles. The credit for first establishing widespread understanding and usage of the decimal positional notation among the general population goes to Adam Ries, an author of the German Renaissance, whose 1522 Rechenung auff der linihen und federn (Calculating on the Lines and with a Quill) was targeted at the apprentices of businessmen and craftsmen.

The '〇' is used to write zero in Suzhou numerals, which is the only surviving variation of the rod numeral system.
The Mathematical Treatise in Nine Sections, written by Qin Jiushao in 1247, is the oldest surviving Chinese mathematical text to use the character '〇' for zero.[22] The origin of using the character '〇' to represent zero is unknown. Gautama Siddha introduced Hindu numerals with zero in 718 CE, but Chinese mathematicians did not find them useful, as they already had the decimal positional counting rods.[23][24] Some historians suggest that the use of '〇' for zero was influenced by Indian numerals imported by Gautama,[24] but Gautama's numeral system represented zero with a dot rather than a hollow circle, similar to the Bakhshali manuscript.[25] An alternative hypothesis proposes that the use of '〇' to represent zero arose from a modification of the Chinese text space filler "□", making its resemblance to Indian numeral systems purely coincidental. Others think that the Indians acquired the symbol '〇' from China, because it resembles a Confucian philosophical symbol for "nothing".[23] Chinese and Japanese finally adopted the Hindu-Arabic numerals in the 19th century, abandoning counting rods.

The "Western Arabic" numerals, as they have been in common use in Europe since the Baroque period, have secondarily found worldwide use together with the Latin alphabet, and even significantly beyond the contemporary spread of the Latin alphabet, intruding into the writing systems in regions where other variants of the Hindu-Arabic numerals had been in use, but also in conjunction with Chinese and Japanese writing (see Chinese numerals, Japanese numerals).
https://en.wikipedia.org/wiki/Hindu%E2%80%93Arabic_numeral_system
"Turtles all the way down" is an expression of the problem ofinfinite regress. The saying alludes to the mythological idea of aWorld Turtlethat supports aflat Earthon its back. It suggests that this turtle rests on the back of an even larger turtle, which itself is part of a column of increasingly larger turtles that continues indefinitely. The exact origin of the phrase is uncertain. In the form "rocks all the way down", the saying appears as early as 1838.[1]References to the saying's mythological antecedents, the World Turtle and its counterpart the World Elephant, were made by a number of authors in the 17th and 18th centuries.[2][3] The expression has been used to illustrate problems such as theregress argumentinepistemology. Early variants of the saying do not always have explicit references to infinite regression (i.e., the phrase "all the way down"). They often reference stories featuring aWorld Elephant,World Turtle, or other similar creatures that are claimed to come fromHindu mythology. The first known reference to a Hindu source is found in a letter byJesuitEmanuel da Veiga (1549–1605), written at Chandagiri on 18 September 1599, in which the relevant passage reads: Alii dicebant terram novem constare angulis, quibus cœlo innititur. Alius ab his dissentiens volebat terram septem elephantis fulciri, elephantes uero ne subsiderent, super testudine pedes fixos habere. Quærenti quis testudinis corpus firmaret, ne dilaberetur, respondere nesciuit. Others hold that the earth has nine corners by which the heavens are supported. Another disagreeing from these would have the earth supported by seven elephants, and the elephants do not sink down because their feet are fixed on a tortoise. When asked who would fix the body of the tortoise, so that it would not collapse, he said that he did not know.[4] Veiga's account seems to have been received bySamuel Purchas, who has a close paraphrase in hisPurchas His Pilgrims(1613/1626), "that the Earth had nine corners, whereby it was borne up by the Heaven. Others dissented, and said, that the Earth was borne up by seven Elephants; the Elephants' feet stood on Tortoises, and they were borne by they know not what."[5]Purchas' account is again reflected byJohn Lockein his 1689 tractAn Essay Concerning Human Understanding, where Locke introduces the story as a trope referring to the problem of induction in philosophical debate. Locke compares one who would say that properties inhere in "Substance" to the Indian who said the world was on an elephant which was on a tortoise, "But being again pressed to know what gave support to the broad-back'd Tortoise, replied, something, he knew not what".[2]The story is also referenced byHenry David Thoreau, who writes in his journal entry of 4 May 1852: "Men are making speeches ... all over the country, but each expresses only the thought, or the want of thought, of the multitude. No man stands on truth. They are merely banded together as usual, one leaning on another and all together on nothing; as the Hindoos made the world rest on an elephant, and the elephant on a tortoise, and had nothing to put under the tortoise."[6] In the form of "rocks all the way down", the saying dates to at least 1838, when it was printed in an unsigned anecdote in theNew-York Mirrorabout a schoolboy and an old woman living in the woods: "The world, marm," said I, anxious to display my acquired knowledge, "is not exactly round, but resembles in shape a flattened orange; and it turns on its axis once in twenty-four hours." 
"Well, I don't know anything about itsaxes," replied she, "but I know it don't turn round, for if it did we'd be all tumbled off; and as to its being round, any one can see it's a square piece of ground, standing on a rock!" "Standing on a rock! but upon what does that stand?" "Why, on another, to be sure!" "But what supports the last?" "Lud! child, how stupid you are! There's rocks all the way down!"[1] Another version of the saying appeared in an 1854 transcript of remarks by preacher Joseph Frederick Berg addressed toJoseph Barker: My opponent's reasoning reminds me of the heathen, who, being asked on what the world stood, replied, "On a tortoise." But on what does the tortoise stand? "On another tortoise." With Mr. Barker, too, there are tortoises all the way down. (Vehement and vociferous applause.) Many 20th-century attributions claim that philosopher and psychologistWilliam Jamesis the source of the phrase.[8]James referred to the fable of the elephant and tortoise several times, but told the infinite regress story with "rocks all the way down" in his 1882 essay, "Rationality, Activity and Faith": Like the old woman in the story who described the world as resting on a rock, and then explained that rock to be supported by another rock, and finally when pushed with questions said it was "rocks all the way down," he who believes this to be a radically moral universe must hold the moral order to rest either on an absolute and ultimateshouldor on a series ofshoulds"all the way down."[9] The linguistJohn R. Rossalso associates James with the phrase: The following anecdote is told of William James. [...] After a lecture on cosmology and the structure of the solar system, James was accosted by a little old lady. "Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it's wrong. I've got a better theory," said the little old lady. "And what is that, madam?" inquired James politely. "That we live on a crust of earth which is on the back of a giant turtle." Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position. "If your theory is correct, madam," he asked, "what does this turtle stand on?" "You're a very clever man, Mr. James, and that's a very good question," replied the little old lady, "but I have an answer to it. And it's this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him." "But what does this second turtle stand on?" persisted James patiently. To this, the little old lady crowed triumphantly, "It's no use, Mr. James—it's turtles all the way down." The mythological idea of aturtle worldis often used as an illustration ofinfinite regresses. Aninfinite regressis an infinite series of entities governed by arecursiveprinciple that determines how each entity in the series depends on or is produced by its predecessor.[11]The main interest ininfinite regressesis due to their role ininfinite regress arguments. 
An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress.[11][12] For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious.[11][13] There are different ways in which a regress can be vicious.[13][14] The idea of a turtle world exemplifies viciousness due to explanatory failure: it does not solve the problem it was formulated to solve. Instead, it assumes already in disguised form what it was supposed to explain.[13][14] This is akin to the informal fallacy of begging the question.[15] In one interpretation, the goal of positing the existence of a world turtle is to explain why the earth seems to be at rest instead of falling down: because it rests on the back of a giant turtle. In order to explain why the turtle itself is not in free fall, another, even bigger turtle is posited, and so on, resulting in a world that is turtles all the way down.[13][11] Despite clashing with modern physics, and despite its ontological extravagance, this theory seems to be metaphysically possible, assuming that space is infinite, thereby avoiding an outright contradiction. But it fails because it has to assume rather than explain at each step that there is another thing that is not falling. It does not explain why nothing at all is falling.[11][13]

The metaphor is used as an example of the problem of infinite regress in epistemology to show that there is a necessary foundation to knowledge, as written by Johann Gottlieb Fichte in 1794:[16]

If there is not to be any (system of human knowledge dependent upon an absolute first principle), two cases are only possible. Either there is no immediate certainty at all, and then our knowledge forms many series or one infinite series, wherein each theorem is derived from a higher one, and this again from a higher one, etc., etc. We build our houses on the earth, the earth rests on an elephant, the elephant on a tortoise, the tortoise again—who knows on what?—and so on ad infinitum. True, if our knowledge is thus constituted, we can not alter it; but neither have we, then, any firm knowledge. We may have gone back to a certain link of our series, and have found every thing firm up to this link; but who can guarantee us that, if we go further back, we may not find it ungrounded, and shall thus have to abandon it? Our certainty is only assumed, and we can never be sure of it for a single following day.

David Hume references the story in his 1779 work Dialogues Concerning Natural Religion when arguing against God as an unmoved mover:[3]

How, therefore, shall we satisfy ourselves concerning the cause of that Being whom you suppose the Author of Nature, or, according to your system of Anthropomorphism, the ideal world, into which you trace the material? Have we not the same reason to trace that ideal world into another ideal world, or new intelligent principle? But if we stop, and go no further; why go so far? why not stop at the material world? How can we satisfy ourselves without going on in infinitum? And, after all, what satisfaction is there in that infinite progression? Let us remember the story of the Indian philosopher and his elephant. It was never more applicable than to the present subject. If the material world rests upon a similar ideal world, this ideal world must rest upon some other; and so on, without end. It were better, therefore, never to look beyond the present material world.
By supposing it to contain the principle of its order within itself, we really assert it to be God; and the sooner we arrive at that Divine Being, so much the better. When you go one step beyond the mundane system, you only excite an inquisitive humour which it is impossible ever to satisfy.

Bertrand Russell also mentions the story in his 1927 lecture Why I Am Not a Christian while discounting the First Cause argument intended to be a proof of God's existence:

If everything must have a cause, then God must have a cause. If there can be anything without a cause, it may just as well be the world as God, so that there cannot be any validity in that argument. It is exactly of the same nature as the Hindu's view, that the world rested upon an elephant and the elephant rested upon a tortoise; and when they said, 'How about the tortoise?' the Indian said, 'Suppose we change the subject.'

References to "turtles all the way down" have been made in a variety of modern contexts. For example, American hardcore band Every Time I Die titled a song "Turtles All the Way Down" on their 2009 album New Junk Aesthetic. The lyrics mention the turtle world theory. "Turtles All the Way Down" is the name of a song by country artist Sturgill Simpson that appears on his 2014 album Metamodern Sounds in Country Music.[17] "Gamma Goblins ('Its Turtles All The Way Down' Mix)" is a remix by Ott for the 2002 Hallucinogen album In Dub.[18] Turtles All the Way Down is also the title of a 2017 novel by John Green about a teenage girl with obsessive-compulsive disorder.[19]

Musician Captain Beefheart used the phrase in 1975 to describe playing with Frank Zappa and The Mothers of Invention (captured on the album Bongo Fury) when he told Steve Weitzman of Rolling Stone that he "had an extreme amount of fun on this tour. They move awfully fast. I've never travelled this fast with the Magic Band—turtles all the way down."[20]

Stephen Hawking incorporates the saying into the beginning of his 1988 book A Brief History of Time:[21]

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!"

Former U.S. Supreme Court Justice Antonin Scalia discussed his "favored version" of the saying in a footnote to his 2006 plurality opinion in Rapanos v. United States:[22]

In our favored version, an Eastern guru affirms that the earth is supported on the back of a tiger. When asked what supports the tiger, he says it stands upon an elephant; and when asked what supports the elephant he says it is a giant turtle. When asked, finally, what supports the giant turtle, he is briefly taken aback, but quickly replies "Ah, after that it is turtles all the way down."

Microsoft Visual Studio had a gamification plug-in that awarded badges for certain programming behaviors and patterns.
One of the badges was "Turtles All the Way Down", which was awarded for writing a class with 10 or more levels of inheritance.[23]

In a TED-Ed video discussing Gödel's incompleteness theorems, the phrase "Gödels all the way down" is used to describe the way in which one can never get rid of unprovable true statements in an axiomatic system.[24]
https://en.wikipedia.org/wiki/Turtles_all_the_way_down
The Linde-Buzo-Gray algorithm (named after its creators Yoseph Linde, Andrés Buzo and Robert M. Gray, who designed it in 1980)[1] is an iterative vector quantization algorithm that improves a small set of vectors (the codebook) to represent a larger set of vectors (the training set), such that the codebook becomes locally optimal. It combines Lloyd's algorithm with a splitting technique, in which larger codebooks are built from smaller codebooks by splitting each code vector in two. The core idea of the algorithm is that by splitting the codebook such that all code vectors from the previous codebook are present, the new codebook must be as good as the previous one, or better.[2]: 361-362  The Linde-Buzo-Gray algorithm may be implemented as follows:
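The following is a minimal Python sketch of the structure just described, under assumed conventions: squared-error distortion, a perturbation constant eps = 0.01, and a power-of-two target codebook size. It is an illustration of the split-then-refine idea, not a reproduction of the original paper's pseudocode.

import numpy as np

def lloyd(training, codebook, tol=1e-6):
    """Refine a codebook with Lloyd's algorithm until distortion stalls."""
    prev = np.inf
    while True:
        # Assignment: map each training vector to its nearest code vector.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        distortion = d[np.arange(len(training)), nearest].mean()
        # Update: move each code vector to the centroid of its cell
        # (empty cells keep their old code vector).
        for k in range(len(codebook)):
            members = training[nearest == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
        if prev - distortion < tol:
            return codebook
        prev = distortion

def lbg(training, size, eps=0.01):
    """Grow a codebook by splitting: 1 -> 2 -> 4 -> ... -> size vectors."""
    codebook = training.mean(axis=0, keepdims=True)  # one-vector codebook
    while len(codebook) < size:
        # Split so every old code vector survives (perturbed both ways);
        # the refined codebook can therefore only match or beat the old one.
        codebook = np.concatenate([codebook * (1 + eps),
                                   codebook * (1 - eps)])
        codebook = lloyd(training, codebook)
    return codebook

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
print(lbg(data, 4))   # a locally optimal 4-vector codebook for this data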
https://en.wikipedia.org/wiki/Linde%E2%80%93Buzo%E2%80%93Gray_algorithm
The following table compares cognitive architectures.
https://en.wikipedia.org/wiki/Comparison_of_cognitive_architectures
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.

In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.

The characteristic function is a way to describe a random variable X. The characteristic function, a function of t, determines the behavior and properties of the probability distribution of X. It is equivalent to a probability density function or cumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.

If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function M_X(t), then the domain of the characteristic function can be extended to the complex plane, and

φ_X(-it) = M_X(t).

Note however that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not well defined for all real values of t.

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the central limit theorem uses characteristic functions and Lévy's continuity theorem. Another important application is to the theory of the decomposability of random variables.

For a scalar random variable X, the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit, and t ∈ R is the argument of the characteristic function:

φ_X(t) = E[e^{itX}] = ∫_R e^{itx} dF_X(x) = ∫_R e^{itx} f_X(x) dx = ∫_0^1 e^{it Q_X(p)} dp.

Here F_X is the cumulative distribution function of X, f_X is the corresponding probability density function, Q_X(p) is the corresponding inverse cumulative distribution function, also called the quantile function,[2] and the integrals are of the Riemann-Stieltjes kind. If a random variable X has a probability density function, then the characteristic function is its Fourier transform with sign reversal in the complex exponential.[3][4] This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform.[5] For example, some authors[6] define φ_X(t) = E[e^{-2πitX}], which is essentially a change of parameter.
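As a concrete instance of the definition, the characteristic function of a Bernoulli(p) variable follows directly from the expectation over its two outcomes; the short sketch below just evaluates φ(t) = (1 - p) + p e^{it}, a standard closed form not derived in the text.

import cmath

# Characteristic function of Bernoulli(p), straight from the definition
# phi(t) = E[exp(itX)] = (1 - p) * exp(it * 0) + p * exp(it * 1).
p = 0.3

def phi(t):
    return (1 - p) * cmath.exp(1j * t * 0) + p * cmath.exp(1j * t * 1)

print(phi(0.0))   # (1+0j): every characteristic function equals 1 at t = 0
print(phi(2.0))   # (1 - p) + p * e^{2i}, a point in the closed unit disk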
Other notation may be encountered in the literature: $\hat{p}$ as the characteristic function for a probability measure p, or $\hat{f}$ as the characteristic function corresponding to a density f. The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For common cases such definitions are listed below: Oberhettinger (1973) provides extensive tables of characteristic functions.

The bijection stated above between probability distributions and characteristic functions is sequentially continuous. That is, whenever a sequence of distribution functions F_j(x) converges (weakly) to some distribution F(x), the corresponding sequence of characteristic functions φ_j(t) will also converge, and the limit φ(t) will correspond to the characteristic function of law F. More formally, this is stated as: F_j converges to F in distribution if and only if φ_j(t) → φ(t) for all t ∈ R. This theorem can be used to prove the law of large numbers and the central limit theorem.

There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute φ when we know the distribution function F (or density f). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function φ_X of a random variable X is integrable, then F_X is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by
$$f_X(x) = F_X'(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt.$$
In the multivariate case it is
$$f_X(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(t\cdot x)}\,\varphi_X(t)\,\lambda(dt),$$
where $t \cdot x$ is the dot product. The density function is the Radon–Nikodym derivative of the distribution μ_X with respect to the Lebesgue measure λ:
$$f_X(x) = \frac{d\mu_X}{d\lambda}(x).$$

Theorem (Lévy).[note 1] If φ_X is the characteristic function of the distribution function F_X, and two points a < b are such that {x | a < x < b} is a continuity set of μ_X (in the univariate case this condition is equivalent to continuity of F_X at the points a and b), then
$$F_X(b) - F_X(a) = \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{T} \frac{e^{-ita} - e^{-itb}}{it}\,\varphi_X(t)\,dt.$$

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of F_X), then
$$\operatorname{P}[X = a] = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} e^{-ita}\,\varphi_X(t)\,dt.$$

Theorem (Gil-Pelaez).[16] For a univariate random variable X, if x is a continuity point of F_X, then
$$F_X(x) = \frac{1}{2} - \frac{1}{\pi} \int_0^\infty \frac{\operatorname{Im}\!\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt,$$
where the imaginary part of a complex number z is given by $\operatorname{Im}(z) = (z - z^*)/2i$. Its density function is
$$f_X(x) = \frac{1}{\pi} \int_0^\infty \operatorname{Re}\!\left[e^{-itx}\varphi_X(t)\right]dt.$$
The integral may not be Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral. Inversion formulas for multivariate distributions are available.[14][17]

The set of all characteristic functions is closed under certain operations. It is well known that any non-decreasing càdlàg function F with limits F(−∞) = 0, F(+∞) = 1 corresponds to a cumulative distribution function of some random variable.
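The first inversion theorem above can be exercised numerically: truncating the integral to a finite window and applying quadrature recovers the density. A minimal sketch, again for the standard normal, where the truncation point T = 40 and the grid resolution are arbitrary choices:

```python
import numpy as np

# Truncated inversion integral f(x) ~ (1/2pi) * Int_{-T}^{T} exp(-i t x) phi(t) dt,
# with phi(t) = exp(-t^2/2), the characteristic function of the standard normal.
T = 40.0
t = np.linspace(-T, T, 20_001)
phi = np.exp(-0.5 * t ** 2)

def inverted_density(x):
    integrand = np.exp(-1j * t * x) * phi
    return np.trapz(integrand, t).real / (2 * np.pi)

for x in [0.0, 1.0, 2.0]:
    exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    print(f"x={x:.1f}  inverted={inverted_density(x):.6f}  exact={exact:.6f}")
```

Because this φ decays rapidly, the truncation error is negligible here; for heavy-tailed characteristic functions the window and grid would need more care.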
There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is Bochner's theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify. Other theorems also exist, such as Khinchine's, Mathias's, or Cramér's, although their application is just as difficult. Pólya's theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.[18]

Bochner's theorem. An arbitrary function φ : R^n → C is the characteristic function of some random variable if and only if φ is positive definite, continuous at the origin, and φ(0) = 1.

Khinchine's criterion. A complex-valued, absolutely continuous function φ, with φ(0) = 1, is a characteristic function if and only if it admits the representation
$$\varphi(t) = \int_{\mathbb{R}} g(t+\theta)\,\overline{g(\theta)}\,d\theta.$$

Mathias' theorem. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a characteristic function if and only if a positivity condition involving the Hermite polynomials holds for n = 0, 1, 2, ..., and all p > 0. Here H_{2n} denotes the Hermite polynomial of degree 2n.

Pólya's theorem. If φ is a real-valued, even, continuous function which satisfies the conditions φ(0) = 1, φ is convex for t > 0, and φ(t) → 0 as t → ∞, then φ(t) is the characteristic function of an absolutely continuous distribution symmetric about 0.

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X_1, X_2, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and
$$S_n = \sum_{i=1}^n a_i X_i,$$
where the a_i are constants, then the characteristic function for S_n is given by
$$\varphi_{S_n}(t) = \varphi_{X_1}(a_1 t)\,\varphi_{X_2}(a_2 t)\cdots\varphi_{X_n}(a_n t).$$
In particular, φ_{X+Y}(t) = φ_X(t)φ_Y(t). To see this, write out the definition of characteristic function:
$$\varphi_{X+Y}(t) = \operatorname{E}\left[e^{it(X+Y)}\right] = \operatorname{E}\left[e^{itX}e^{itY}\right] = \operatorname{E}\left[e^{itX}\right]\operatorname{E}\left[e^{itY}\right] = \varphi_X(t)\,\varphi_Y(t).$$
The independence of X and Y is required to establish the equality of the third and fourth expressions.

Another special case of interest for identically distributed random variables is when a_i = 1/n and then S_n is the sample mean. In this case, writing X̄ for the mean,
$$\varphi_{\bar X}(t) = \left(\varphi_X(t/n)\right)^n.$$

Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:
$$\operatorname{E}\left[X^n\right] = i^{-n}\left[\frac{d^n}{dt^n}\varphi_X(t)\right]_{t=0} = i^{-n}\varphi_X^{(n)}(0).$$
This can be formally written using the derivatives of the Dirac delta function:
$$f_X(x) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\,\delta^{(n)}(x)\operatorname{E}[X^n],$$
which allows a formal solution to the moment problem.

For example, suppose X has a standard Cauchy distribution. Then φ_X(t) = e^{−|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean X̄ of n independent observations has characteristic function φ_X̄(t) = (e^{−|t|/n})^n = e^{−|t|}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
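The Cauchy stability property just described is easy to observe by simulation: the empirical characteristic function of the sample mean of n standard Cauchy draws stays close to e^{−|t|} no matter how large n is, in sharp contrast to the usual averaging-out of noise. A minimal sketch with arbitrarily chosen sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 100_000
# Each row: the mean of n i.i.d. standard Cauchy draws
means = rng.standard_cauchy((reps, n)).mean(axis=1)

for t in [0.5, 1.0, 2.0]:
    # By symmetry the imaginary part is ~0, so report the real part only
    est = np.mean(np.exp(1j * t * means)).real
    print(f"t={t}: empirical CF {est:.4f}  vs  exp(-|t|) = {np.exp(-abs(t)):.4f}")
```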
As a further example, suppose X follows a Gaussian distribution, i.e. $X \sim \mathcal{N}(\mu, \sigma^2)$. Then
$$\varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}$$
and, since $\varphi_X'(t) = (i\mu - \sigma^2 t)\,e^{i\mu t - \frac{1}{2}\sigma^2 t^2}$,
$$\operatorname{E}[X] = i^{-1}\varphi_X'(0) = \mu.$$
A similar calculation shows $\operatorname{E}[X^2] = \mu^2 + \sigma^2$ and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate $\operatorname{E}[X^2]$.

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the second cumulant generating function.

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution, since closed-form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975)[19] and Heathcote (1977)[20] provide some theoretical background for such an estimation procedure. In addition, Yu (2004)[21] describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020)[22] and Li et al. (2020)[23] for training generative adversarial networks.

The gamma distribution with scale parameter θ and shape parameter k has the characteristic function
$$\varphi(t) = (1 - i\theta t)^{-k}.$$
Now suppose that we have $X \sim \Gamma(k_1, \theta)$ and $Y \sim \Gamma(k_2, \theta)$, with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are
$$\varphi_X(t) = (1 - i\theta t)^{-k_1}, \qquad \varphi_Y(t) = (1 - i\theta t)^{-k_2},$$
which by independence and the basic properties of characteristic functions leads to
$$\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) = (1 - i\theta t)^{-(k_1+k_2)}.$$
This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k_1 + k_2, and we therefore conclude $X + Y \sim \Gamma(k_1 + k_2, \theta)$. The result can be extended to n independent gamma-distributed random variables with the same scale parameter, giving
$$\sum_{i=1}^n X_i \sim \Gamma\!\left(\sum_{i=1}^n k_i,\; \theta\right).$$

As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.[24]

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions):
$$\varphi_X(t) = \overline{P(t)},$$
where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from φ_X(t) through the inverse Fourier transform:
$$p(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt.$$
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
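The gamma closure result can be checked the same way as the earlier examples: the empirical characteristic function of X + Y, simulated from two independent gamma variables with a shared scale, should match (1 − iθt)^{−(k₁+k₂)}. A small sketch with arbitrarily chosen parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
k1, k2, theta = 2.0, 3.0, 1.5
x = rng.gamma(shape=k1, scale=theta, size=200_000)
y = rng.gamma(shape=k2, scale=theta, size=200_000)
s = x + y  # should be distributed as Gamma(k1 + k2, theta)

for t in [0.3, 0.7]:
    est = np.mean(np.exp(1j * t * s))          # empirical CF of the sum
    exact = (1 - 1j * theta * t) ** (-(k1 + k2))
    print(f"t={t}: empirical {est:.4f}  theoretical {exact:.4f}")
```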
Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.
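To illustrate the kernel-embedding view, the distance between two embedded distributions (the maximum mean discrepancy, MMD) can be estimated purely from kernel evaluations on samples, much as characteristic functions can be compared pointwise. The following sketch uses a Gaussian kernel on arbitrary toy samples; it illustrates the general framework and is not code from any particular library:

```python
import numpy as np

def gaussian_gram(a, b, sigma=1.0):
    # Gram matrix k(a_i, b_j) for the Gaussian (RBF) kernel on scalars
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.5, 1.0, 500)

# Squared distance between the two kernel mean embeddings (biased MMD^2 estimate)
mmd2 = (gaussian_gram(x, x).mean()
        - 2.0 * gaussian_gram(x, y).mean()
        + gaussian_gram(y, y).mean())
print(f"squared MMD estimate: {mmd2:.4f}")
```

An estimate near zero suggests the two samples come from the same distribution; here the shifted mean produces a clearly positive value.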
https://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)
Language revitalization, also referred to as language revival or reversing language shift, is an attempt to halt or reverse the decline of a language or to revive an extinct one.[1][2] Those involved can include linguists, cultural or community groups, or governments. Some argue for a distinction between language revival (the resurrection of an extinct language with no existing native speakers) and language revitalization (the rescue of a "dying" language). There has only been one successful instance of a complete language revival: that of the Hebrew language.[3]

Languages targeted for language revitalization include those whose use and prominence is severely limited. Sometimes various tactics of language revitalization can even be used to try to revive extinct languages. Though the goals of language revitalization vary greatly from case to case, they typically involve attempting to expand the number of speakers and use of a language, or trying to maintain the current level of use to protect the language from extinction or language death.

Reasons for revitalization vary: they can include physical danger affecting those whose language is dying, economic danger such as the exploitation of indigenous natural resources, political danger such as genocide, or cultural danger/assimilation.[4] In recent times alone, it is estimated that more than 2000 languages have already become extinct. The UN estimates that more than half of the languages spoken today have fewer than 10,000 speakers and that a quarter have fewer than 1,000 speakers; and that, unless there are some efforts to maintain them, over the next hundred years most of these will become extinct.[5] These figures are often cited as reasons why language revitalization is necessary to preserve linguistic diversity. Culture and identity are also frequently cited reasons for language revitalization, when a language is perceived as a unique "cultural treasure".[6] A community often sees language as a unique part of its culture, connecting it with its ancestors or with the land, making up an essential part of its history and self-image.[7]

Language revitalization is also closely tied to the linguistic field of language documentation. In this field, linguists try to create a complete record of a language's grammar, vocabulary, and linguistic features. This practice can often lead to more concern for the revitalization of a specific language under study. Furthermore, the task of documentation is often taken on with the goal of revitalization in mind.[8]

One widely used six-point scale of language endangerment is as follows:[9] Another scale for identifying degrees of language endangerment is used in a 2003 paper ("Language Vitality and Endangerment") commissioned by UNESCO from an international group of linguists. The linguists, among other goals and priorities, create a scale with six degrees for language vitality and endangerment.[10] They also propose nine factors or criteria (six of which use the six-degree scale) to "characterize a language's overall sociolinguistic situation".[10] The nine factors with their respective scales are: One of the most important preliminary steps in language revitalization/recovery involves establishing the degree to which a particular language has been "dislocated". This helps involved parties find the best way to assist or revive the language.[11]

There are many different theories or models that attempt to lay out a plan for language revitalization. One of these is provided by the celebrated linguist Joshua Fishman.
Fishman's model for reviving threatened (or sleeping) languages, or for making them sustainable,[12][13] consists of an eight-stage process. Efforts should be concentrated on the earlier stages of restoration until they have been consolidated before proceeding to the later stages. The eight stages are: This model of language revival is intended to direct efforts to where they are most effective and to avoid wasting energy trying to achieve the later stages of recovery when the earlier stages have not been achieved. For instance, it is probably wasteful to campaign for the use of a language on television or in government services if hardly any families are in the habit of using the language.

Additionally, Tasaku Tsunoda describes a range of different techniques or methods that speakers can use to try to revitalize a language, including techniques to revive extinct languages and maintain weak ones. The techniques he lists are often limited to the current vitality of the language. He claims that the immersion method cannot be used to revitalize an extinct or moribund language. In contrast, the master-apprentice method of one-on-one transmission of language proficiency can be used with moribund languages. Several other methods of revitalization, including those that rely on technology such as recordings or media, can be used for languages in any state of viability.[14]

David Crystal, in his book Language Death, proposes that language revitalization is more likely to be successful if its speakers: In her book Endangered Languages: An Introduction, Sarah Thomason notes the success of revival efforts for modern Hebrew and the relative success of revitalizing Maori in New Zealand (see Specific Examples below). One notable factor these two examples share is that the children were raised in fully immersive environments.[16] In the case of Hebrew, it was on early collective communities called kibbutzim.[17] For the Maori language in New Zealand, this was done through a language nest.[18]

Ghil'ad Zuckermann proposes "Revival Linguistics" as a new linguistic discipline and paradigm. Zuckermann's term 'Revival Linguistics' is modelled upon 'Contact Linguistics'. Revival linguistics inter alia explores the universal constraints and mechanisms involved in language reclamation, renewal and revitalization. It draws perspicacious comparative insights from one revival attempt to another, thus acting as an epistemological bridge between parallel discourses in various local attempts to revive sleeping tongues all over the globe.[19]

According to Zuckermann, "revival linguistics combines scientific studies of native language acquisition and foreign language learning. After all, language reclamation is the most extreme case of second-language learning. Revival linguistics complements the established area of documentary linguistics, which records endangered languages before they fall asleep."[20]

Zuckermann proposes that "revival linguistics changes the field of historical linguistics by, for instance, weakening the family tree model, which implies that a language has only one parent."[20]

There are disagreements in the field of language revitalization as to the degree that revival should concentrate on maintaining the traditional language, versus allowing simplification or widespread borrowing from the majority language. Zuckermann acknowledges the presence of "local peculiarities and idiosyncrasies"[20] but suggests that "there are linguistic constraints applicable to all revival attempts.
Mastering them would help revivalists and first nations' leaders to work more efficiently. For example, it is easier to resurrect basic vocabulary and verbal conjugations than sounds and word order. Revivalists should be realistic and abandon discouraging, counter-productive slogans such as "Give us authenticity or give us death!"[20]

Nancy Dorian has pointed out that conservative attitudes toward loanwords and grammatical changes often hamper efforts to revitalize endangered languages (as with Tiwi in Australia), and that a division can exist between educated revitalizers, interested in historicity, and remaining speakers interested in locally authentic idiom (as has sometimes occurred with Irish). Some have argued that structural compromise may, in fact, enhance the prospects of survival, as may have been the case with English in the post-Norman period.[21]

Other linguists have argued that when language revitalization borrows heavily from the majority language, the result is a new language, perhaps a creole or pidgin.[22] For example, the existence of "Neo-Hawaiian" as a separate language from "Traditional Hawaiian" has been proposed, due to the heavy influence of English on every aspect of the revived Hawaiian language.[23] This has also been proposed for Irish, with a sharp division between "Urban Irish" (spoken by second-language speakers) and traditional Irish (as spoken as a first language in Gaeltacht areas). Ó Béarra stated: "[to] follow the syntax and idiomatic conventions of English, [would be] producing what amounts to little more than English in Irish drag."[24] With regard to the then-moribund Manx language, the scholar T. F. O'Rahilly stated, "When a language surrenders itself to foreign idiom, and when all its speakers become bilingual, the penalty is death."[25] Neil McRae has stated that the uses of Scottish Gaelic are becoming increasingly tokenistic, and native Gaelic idiom is being lost in favor of artificial terms created by second-language speakers.[26]

The total revival of a dead language (in the sense of having no native speakers) to become the shared means of communication of a self-sustaining community of several million first-language speakers has happened only once, in the case of Hebrew, resulting in Modern Hebrew – now the national language of Israel. In this case, there was a unique set of historical and cultural characteristics that facilitated the revival. (See Revival of the Hebrew language.) Hebrew, once largely a liturgical language, was re-established as a means of everyday communication by Jews, some of whom had lived in what is now the State of Israel, starting in the nineteenth century. It is the world's most famous and successful example of language revitalization.

In a related development, literary languages without native speakers enjoyed great prestige and practical utility as lingua francas, often counting millions of fluent speakers at a time. In many such cases, a decline in the use of the literary language, sometimes precipitous, was later accompanied by a strong renewal. This happened, for example, in the revival of Classical Latin in the Renaissance, and the revival of Sanskrit in the early centuries AD. An analogous phenomenon in contemporary Arabic-speaking areas is the expanded use of the literary language (Modern Standard Arabic, a form of the Classical Arabic of the 6th century AD).
This is taught to all educated speakers and is used in radio broadcasts, formal discussions, etc.[27]

In addition, literary languages have sometimes risen to the level of becoming first languages of very large language communities. An example is standard Italian, which originated as a literary language based on the language of 13th-century Florence, especially as used by such important Florentine writers as Dante, Petrarch and Boccaccio. This language existed for several centuries primarily as a literary vehicle, with few native speakers; even as late as 1861, on the eve of Italian unification, the language counted only about 500,000 speakers (many non-native), out of a total population of c. 22,000,000. The subsequent success of the language has been through conscious development, where speakers of any of the numerous Italian languages were taught standard Italian as a second language and subsequently imparted it to their children, who learned it as a first language. Of course this came at the expense of local Italian languages, most of which are now endangered. Success was enjoyed in similar circumstances by High German, standard Czech, Castilian Spanish and other languages.

The Coptic language began its decline when Arabic became the predominant language in Egypt. Pope Shenouda III established the Coptic Language Institute in December 1976 in Saint Mark's Coptic Orthodox Cathedral in Cairo for the purpose of reviving the Coptic language.[28][29]

In recent years, a growing number of Native American tribes have been trying to revitalize their languages.[30][31] For example, there are apps (including phrases, word lists and dictionaries) in many Native languages, including Cree, Cherokee, Chickasaw, Lakota, Ojibwe, Oneida, Massachusett, Navajo, Halq'emeylem, Gwych'in, and Lushootseed.

Wampanoag, a language spoken by the people of the same name in Massachusetts, underwent a language revival project led by Jessie Little Doe Baird, a trained linguist. Members of the tribe use the extensive written records that exist in their language, including a translation of the Bible and legal documents, in order to learn and teach Wampanoag. The project has seen children speaking the language fluently for the first time in over 100 years.[32][33] In addition, there are currently attempts at reviving the Chochenyo language of California, which had become extinct.

Efforts are being made by the Confederated Tribes of the Grand Ronde Community and others to keep Chinook Jargon, also known as Chinuk Wawa, alive. This is helped by the corpus of songs and stories collected from Victoria Howard and published by Melville Jacobs.[34][35]

The open-source platform FirstVoices hosts community-managed websites for 85 language revitalization projects, covering multiple varieties of 33 Indigenous languages in British Columbia as well as over a dozen languages from "elsewhere in Canada and around the globe", along with 17 dictionary apps.[36]

Similar to other indigenous languages, Tlingit is critically endangered.[37] Fewer than 100 fluent Elders existed as of 2017.[37] From 2013 to 2014, the language activist, author, and teacher Sʔímlaʔxw Michele K. Johnson, from the Syilx Nation, attempted to teach two hopeful learners of Tlingit in the Yukon.[37] Her methods included textbook creation, a sequenced immersion curriculum, and film assessment.[37] The aim was to assist in the creation of adult speakers of parent age, so that they too can begin teaching the language. In 2020, X̱ʼunei Lance Twitchell led a Tlingit online class with Outer Coast College.
Dozens of students participated.[38] He is an associate professor of Alaska Native Languages in the School of Arts and Sciences at the University of Alaska Southeast, which offers a minor in Tlingit language and an emphasis on Alaska Native Languages and Studies within a Bachelor's degree in Liberal Arts.[39]

Kichwa is the variety of the Quechua language spoken in Ecuador and is one of the most widely spoken indigenous languages in South America. Despite this fact, Kichwa is a threatened language, mainly because of the expansion of Spanish in South America. One community of original Kichwa speakers, Lagunas, was one of the first indigenous communities to switch to the Spanish language.[40] According to King, this was because of the increase of trade and business with the large Spanish-speaking town nearby. The Lagunas people assert that it was not for cultural assimilation purposes, as they value their cultural identity highly.[40] However, once this contact was made, language use for the Lagunas people shifted through generations, to Kichwa and Spanish bilingualism, and now is essentially Spanish monolingualism. The feelings of the Lagunas people present a dichotomy with language use, as most of the Lagunas members speak Spanish exclusively and know only a few words in Kichwa.

The prospects for Kichwa language revitalization are not promising, as parents depend on schooling for this purpose, which is not nearly as effective as continual language exposure in the home.[41] Schooling in the Lagunas community, although having a conscious focus on teaching Kichwa, consists of mainly passive interaction, reading, and writing in Kichwa.[42] In addition to grassroots efforts, national language revitalization organizations, like CONAIE, focus attention on non-Spanish-speaking indigenous children, who represent a large minority in the country. Another national initiative, the Bilingual Intercultural Education Project (PEBI), was ineffective in language revitalization because instruction was given in Kichwa and Spanish was taught as a second language to children who were almost exclusively Spanish monolinguals. Although some techniques seem ineffective, Kendall A. King provides several suggestions: Specific suggestions include imparting an elevated perception of the language in schools, focusing on grassroots efforts both in school and the home, and maintaining national and regional attention.[41]

The revival of the Hebrew language is the only successful example of a revived dead language.[3] The Hebrew language survived into the medieval period as the language of Jewish liturgy and rabbinic literature. With the rise of Zionism in the 19th century, it was revived as a spoken and literary language, becoming primarily a spoken lingua franca among the early Jewish immigrants to Ottoman Palestine, and received official status in the 1922 constitution of the British Mandate for Palestine and subsequently of the State of Israel.[43]

There have been recent attempts at reviving Sanskrit in India.[44][45][46] However, despite these attempts, there are no first-language speakers of Sanskrit in India.[47][48][49] In each of India's recent decennial censuses, several thousand citizens[a] have reported Sanskrit to be their mother tongue. However, these reports are thought to signify a wish to be aligned with the prestige of the language, rather than being genuinely indicative of the presence of thousands of L1 Sanskrit speakers in India.
There has also been a rise of so-called "Sanskrit villages",[46][50] but experts have cast doubt on the extent to which Sanskrit is really spoken in such villages.[47][51]

The Soyot language of the small-numbered Soyots in Buryatia, Russia, one of the Siberian Turkic languages, has been reconstructed, and a Soyot-Buryat-Russian dictionary was published in 2002. The language is currently taught in some elementary schools.[52]

The Ainu language of the indigenous Ainu people of northern Japan is currently moribund, but efforts are underway to revive it. A 2006 survey of the Hokkaido Ainu indicated that only 4.6% of Ainu surveyed were able to converse in or "speak a little" Ainu.[53] As of 2001, Ainu was not taught in any elementary or secondary schools in Japan, but was offered at numerous language centres and universities in Hokkaido, as well as at Tokyo's Chiba University.[54]

In China, the Manchu language is one of the most endangered languages, with speakers remaining in only three small areas of Manchuria.[55] Some enthusiasts are trying to revive the language of their ancestors using available dictionaries and textbooks, and even occasional visits to Qapqal Xibe Autonomous County in Xinjiang, where the related Xibe language is still spoken natively.[56]

In the Philippines, a local variety of Spanish that was primarily based on Mexican Spanish was the lingua franca of the country from Spanish colonization in 1565, and it remained an official language alongside Filipino (standardized Tagalog) and English until 1987, following the ratification of a new constitution in which it was re-designated as a voluntary language. As a result of its loss as an official language and years of marginalization at the official level during and after American colonization, the use of Spanish among the overall populace decreased dramatically and became moribund, with the remaining native speakers being mostly elderly people.[57][58][59]

The language has seen a gradual revival, however, due to official promotion under the administration of former President Gloria Macapagal Arroyo.[60][61] Schools were encouraged to offer Spanish, French, and Japanese as foreign-language electives.[62] Results were immediate, as the job demand for Spanish speakers had increased since 2008.[63] As of 2010, the Instituto Cervantes in Manila reported the number of Spanish speakers in the country with native or non-native knowledge at approximately 3 million, a figure that includes those who speak the Spanish-based creole Chavacano.[64] Complementing government efforts is a notable surge of exposure through the mainstream media and, more recently, music-streaming services.[65][66]

The Western Armenian language has been classified as a definitely endangered language in the Atlas of the World's Languages in Danger (2010),[67] as most speakers of the dialect remain in diasporic communities away from their homeland in Anatolia, following the Armenian genocide. In spite of this, there have been various efforts[68] to revitalize the language, especially within the Los Angeles community, where the majority of Western Armenians reside.
Within her dissertation, Shushan Karapetian discusses at length the decline of the Armenian language in the United States and new means for keeping and reviving Western Armenian, such as the creation of the Saroyan Committee or the Armenian Language Preservation Committee, launched in 2013.[69] Other attempts at language revitalization can be seen within the University of California, Irvine.[70] Armenian is also one of the languages Los Angeles County is required to provide voting information in.[71] The DPSS (California Department of Social Services) also identifies Armenian as one of its "threshold languages".[72]

In Thailand, there exists a Chong language revitalization project, headed by Suwilai Premsrirat.[73]

In Europe, in the 19th and early 20th centuries, the use of both local and learned languages declined as the central governments of the different states imposed their vernacular language as the standard throughout education and official use (this was the case in the United Kingdom, France, Spain, Italy and Greece, and to some extent, in Germany and Austria-Hungary). In the last few decades, local nationalism and human rights movements have made a more multicultural policy standard in European states; sharp condemnation of the earlier practices of suppressing regional languages was expressed in the use of such terms as "linguicide".

In Francoist Spain, Basque language use was discouraged by the government's repressive policies. In the Basque Country, "Francoist repression was not only political, but also linguistic and cultural."[74] Franco's regime suppressed Basque from official discourse, education, and publishing,[75] making it illegal to register newborn babies under Basque names,[76] and even requiring tombstone engravings in Basque to be removed.[77] In some provinces the public use of Basque was suppressed, with people fined for speaking it.[78] Public use of Basque was frowned upon by supporters of the regime, often regarded as a sign of anti-Francoism or separatism, into the late 1960s.[79] Since 1968, Basque has been immersed in a revitalisation process, facing formidable obstacles. However, significant progress has been made in numerous areas. Six main factors have been identified to explain its relative success: While those six factors influenced the revitalisation process, the extensive development and use of language technologies is also considered a significant additional factor.[81] Overall, in the 1960s and later, the trend reversed, and education and publishing in Basque began to flourish.[82] A sociolinguistic survey shows that there has been a steady increase in Basque speakers since the 1990s, and the percentage of young speakers exceeds that of the old.[83]

One of the best-known European attempts at language revitalization concerns the Irish language. While English is dominant through most of Ireland, Irish, a Celtic language, is still spoken in certain areas called Gaeltachtaí,[84] but there it is in serious decline.[85] The challenges faced by the language over the last few centuries have included exclusion from important domains, social denigration, the death or emigration of many Irish speakers during the Irish famine of the 1840s, and continued emigration since. Efforts to revitalise Irish were being made, however, from the mid-1800s, and were associated with a desire for Irish political independence.[84] Contemporary Irish language revitalization has chiefly involved teaching Irish as a compulsory language in mainstream English-speaking schools.
But the failure to teach it in an effective and engaging way means (as linguist Andrew Carnie notes) that students do not acquire the fluency needed for the lasting viability of the language, and this leads to boredom and resentment. Carnie also noted a lack of media in Irish (2006),[84] though this is no longer the case.

The decline of the Gaeltachtaí and the failure of state-directed revitalisation have been countered by an urban revival movement. This is largely based on an independent community-based school system, known generally as Gaelscoileanna. These schools teach entirely through Irish and their number is growing, with over thirty such schools in Dublin alone.[86] They are an important element in the creation of a network of urban Irish speakers (known as Gaeilgeoirí), who tend to be young, well-educated and middle-class. It is now likely that this group has acquired critical mass, a fact reflected in the expansion of Irish-language media.[87] Irish-language television has enjoyed particular success.[88] It has been argued that Gaeilgeoirí tend to be better educated than monolingual English speakers and enjoy higher social status.[89] They represent the transition of Irish to a modern urban world, with an accompanying rise in prestige.

There are also current attempts to revive the related language of Scottish Gaelic, which was suppressed following the formation of the United Kingdom, and entered further decline due to the Highland Clearances. Currently, Gaelic is only spoken widely in the Western Isles and some relatively small areas of the Highlands and Islands. The decline in fluent Gaelic speakers has slowed; however, the population center has shifted to L2 speakers in urban areas, especially Glasgow.[90][91]

Another Celtic language, Manx, lost its last native speaker in 1974 and was declared extinct by UNESCO in 2009, but never completely fell from use.[92] The language is now taught in primary and secondary schools, including as a teaching medium at the Bunscoill Ghaelgagh, is used in some public events, and is spoken as a second language by approximately 1,800 people.[93] Revitalization efforts include radio shows in Manx Gaelic as well as social media and online resources. The Manx government has also been involved in the effort by creating organizations such as the Manx Heritage Foundation (Culture Vannin) and the position of Manx Language Officer.[94] The government has released an official Manx Language Strategy for 2017–2021.[95]

There have been a number of attempts to revive the Cornish language, both privately and some under the Cornish Language Partnership. Some of the activities have included translation of the Christian scriptures,[96] a guild of bards,[97] and the promotion of Cornish literature in modern Cornish, including novels and poetry.

The Romani arriving in the Iberian Peninsula developed an Iberian Romani dialect. As time passed, Romani ceased to be a full language and became Caló, a cant mixing Iberian Romance grammar and Romani vocabulary. With sedentarization and obligatory instruction in the official languages, Caló is used less and less. As Iberian Romani proper is extinct and as Caló is endangered, some people are trying to revitalise the language. The Spanish politician Juan de Dios Ramírez Heredia promotes Romanò-Kalò, a variant of International Romani, enriched by Caló words.[98] His goal is to reunify the Caló and Romani roots.
The Livonian language, a Finnic language once spoken on about a third of modern-day Latvian territory,[99] died in the 21st century with the death of the last native speaker, Grizelda Kristiņa, on 2 June 2013.[100] Today there are about 210 people, mainly living in Latvia, who identify themselves as Livonian and speak the language at the A1–A2 level according to the Common European Framework of Reference for Languages, and between 20 and 40 people who speak the language at level B1 and up.[101] Today all speakers learn Livonian as a second language. There are different programs educating Latvians about the cultural and linguistic heritage of Livonians and the fact that most Latvians have common Livonian descent.[102] Programs worth mentioning include: The Livonian linguistic and cultural heritage is included in the Latvian cultural canon,[109] and the protection, revitalization and development of Livonian as an indigenous language is guaranteed by Latvian law.[110]

A few linguists and philologists are involved in reviving a reconstructed form of the extinct Old Prussian language from Luther's catechisms, the Elbing Vocabulary, place names, and Prussian loanwords in the Low Prussian dialect of Low German. Several dozen people use the language in Lithuania, Kaliningrad, and Poland, including a few children who are natively bilingual.[111]

The Prusaspirā Society has published its translation of Antoine de Saint-Exupéry's The Little Prince. The book was translated by Piotr Szatkowski (Pīteris Šātkis) and released in 2015.[112] Other efforts of Baltic Prussian societies include the development of online dictionaries, learning apps and games. There have also been several attempts to produce music with lyrics written in the revived Baltic Prussian language, most notably in the Kaliningrad Oblast by Romowe Rikoito,[113] Kellan and Āustras Laīwan, but also in Lithuania by Kūlgrinda in their 2005 album Prūsų Giesmės (Prussian Hymns),[114] and in Latvia by Rasa Ensemble in 1988[115] and Valdis Muktupāvels in his 2005 oratorio "Pārcēlātājs Pontifex", featuring several parts sung in Prussian.[116]

Important in this revival was Vytautas Mažiulis, who died on 11 April 2009, and his pupil Letas Palmaitis, leader of the experiment and author of the website Prussian Reconstructions.[117] Two late contributors were Prāncis Arellis (Pranciškus Erelis), Lithuania, and Dailūns Russinis (Dailonis Rusiņš), Latvia. After them, Twankstas Glabbis from Kaliningrad oblast and Nērtiks Pamedīns from East Prussia, now Polish Warmia-Masuria, actively joined.

The Yola language revival movement has grown in Wexford in recent years, and the "Gabble Ing Yola" resource center for Yola materials claims there are around 140 speakers of the Yola language today.[118]

The European colonization of Australia, and the consequent damage sustained by Aboriginal communities, had a catastrophic effect on indigenous languages, especially in the southeast and south of the country, leaving some with no living traditional native speakers. A number of Aboriginal communities in Victoria and elsewhere are now trying to revive some of the Aboriginal Australian languages. The work is typically directed by a group of Aboriginal elders and other knowledgeable people, with community language workers doing most of the research and teaching. They analyze the data, develop spelling systems and vocabulary, and prepare resources. Decisions are made in collaboration.
Some communities employ linguists, and there are also linguists who have worked independently,[119] such as Luise Hercus and Peter K. Austin.

One of the best cases of relative success in language revitalization is the case of Maori, also known as te reo Māori. It is the ancestral tongue of the indigenous Maori people of New Zealand and a vehicle for prose narrative, sung poetry, and genealogical recital.[127] The history of the Maori people is taught in Maori in sacred learning houses through oral transmission. Even after Maori became a written language, the oral tradition was preserved.[127]

Once European colonization began, many laws were enacted in order to promote the use of English over Maori among indigenous people.[127] The Education Ordinance Act of 1847 mandated school instruction in English and established boarding schools to speed up the assimilation of Maori youths into European culture. The Native School Act of 1858 forbade Māori from being spoken in schools. During the 1970s, a group of young Maori people, the Ngā Tamatoa, successfully campaigned for Maori to be taught in schools.[127] Also, Kōhanga Reo, Māori-language preschools called language nests, were established.[128] The emphasis was on teaching children the language at a young age, a very effective strategy for language learning. The Maori Language Commission was formed in 1987, leading to a number of national reforms aimed at revitalizing Maori.[127] They include media programmes broadcast in Maori, undergraduate college programmes taught in Maori, and an annual Maori language week. Each iwi (tribe) created a language planning programme catering to its specific circumstances. These efforts have resulted in a steady increase in children being taught in Maori in schools since 1996.[127]

On six of the seven inhabited islands of Hawaii, Hawaiian was displaced by English and is no longer used as the daily language of communication. The one exception is Niʻihau, where Hawaiian has never been displaced, has never been endangered, and is still used almost exclusively. Efforts to revive the language have increased in recent decades. Hawaiian-language immersion schools are now open to children whose families want to retain (or introduce) the Hawaiian language into the next generation. The local National Public Radio station features a short segment titled "Hawaiian word of the day". Additionally, the Sunday editions of the Honolulu Star-Bulletin and its successor, the Honolulu Star-Advertiser, feature a brief article called Kauakūkalahale, written entirely in Hawaiian by a student.[129]

Language revitalization efforts are ongoing around the world. Revitalization teams are utilizing modern technologies to increase contact with indigenous languages and to record traditional knowledge. In Mexico, the Mixtec people's language heavily revolves around the interaction between climate, nature, and what it means for their livelihood. UNESCO's LINKS (Local and Indigenous Knowledge) program recently undertook a project to create a glossary of Mixtec terms and phrases related to climate. UNESCO believes that the traditional knowledge of the Mixtec people, via their deep connection with weather phenomena, can provide insight on ways to address climate change. Their intention in creating the glossary is to "facilitate discussions between experts and the holders of traditional knowledge".[130]

In Canada, the Wapikoni Mobile project travels to indigenous communities and provides lessons in film making.
Program leaders travel across Canada with mobile audiovisual production units, aiming to provide indigenous youth with a way to connect with their culture through a film topic of their choosing. The Wapikoni project submits its films to events around the world as an attempt to spread knowledge of indigenous culture and language.[131]

Of the youth in Rapa Nui (Easter Island), ten percent learn their mother language. The rest of the community has adopted Spanish in order to communicate with the outside world and support its tourism industry. Through a collaboration between UNESCO and the Chilean Corporación Nacional de Desarrollo Indigena, the Department of Rapa Nui Language and Culture at the Lorenzo Baeza Vega School was created. Since 1990, the department has created primary-education texts in the Rapa Nui language. In 2017, Nid Rapa Nui, a non-governmental organization, was also created with the goal of establishing a school that teaches courses entirely in Rapa Nui.[132]

Language revitalisation has been linked to increased health outcomes for Indigenous communities involved in reclaiming traditional language. Benefits range from improved mental health for community members to increased connectedness to culture, identity, and a sense of wholeness. Indigenous languages are a core element in the formation of identity, providing pathways for cultural expression, agency, and spiritual and ancestral connection.[133] Connection to culture is considered to play an important role in childhood development,[134] and is a UN convention right.[135]

Colonisation, and the subsequent linguicide carried out through policies such as those that created Australia's Stolen Generations, have damaged this connection. It has been proposed that language revitalization may play an important role in countering the intergenerational trauma that has been caused.[136] Researchers at the University of Adelaide and the South Australian Health and Medical Research Institute have found that language revitalisation of Aboriginal languages is linked to better mental health.[137] One study in the Barngarla community in South Australia has been looking holistically at the positive benefits of language reclamation, healing mental and emotional scars, and building connections to community and country that underpin wellness and wholeness. The study identified the Barngarla people's connection to their language as a strong component of developing a strong cultural and personal identity; the people are as connected to language as they are to culture, and culture is key to their identity.[133] Some proponents claim that language reclamation is a form of empowerment and builds strong connections with community and wholeness.[138]

John McWhorter has argued that programs to revive indigenous languages will almost never be very effective because of the practical difficulties involved. He also argues that the death of a language does not necessarily mean the death of a culture. Indigenous expression is still possible even when the original language has disappeared, as with Native American groups and as evidenced by the vitality of black American culture in the United States, among people who speak not Yoruba but English.
He argues that language death is, ironically, a sign of hitherto isolated peoples migrating and sharing space: "To maintain distinct languages across generations happens only amidst unusually tenacious self-isolation—such as that of the Amish—or brutal segregation".[139]

Kenan Malik has also argued that it is "irrational" to try to preserve all the world's languages, as language death is natural and in many cases inevitable, even with intervention. He proposes that language death improves communication by ensuring more people speak the same language. This may benefit the economy and reduce conflict.[140][141]

The protection of minority languages from extinction is often not a concern for speakers of the dominant language. There is often prejudice and deliberate persecution of minority languages, in order to appropriate the cultural and economic capital of minority groups.[142] At other times governments deem that the cost of revitalization programs and creating linguistically diverse materials is too great to take on.[143]
https://en.wikipedia.org/wiki/Language_revitalization
The Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in printed sources published between 1500 and 2022[1][2][3][4] in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish.[1][2][5] There are also some specialized English corpora, such as American English, British English, and English Fiction.[6]

The program can search for a word or a phrase, including misspellings or gibberish.[5] The n-grams are matched with the text within the selected corpus and, if found in 40 or more books, are then displayed as a graph.[6] The Google Books Ngram Viewer supports searches for parts of speech and wildcards.[6] It is routinely used in research.[7][8]

In the development process, Google teamed up with two Harvard researchers, Jean-Baptiste Michel and Erez Lieberman Aiden, and quietly released the program on December 16, 2010.[2][9] Before the release, it was difficult to quantify the rate of linguistic change because of the absence of a database designed for this purpose, said Steven Pinker,[10] a well-known linguist who was one of the co-authors of the Science paper published on the same day.[1] The Google Books Ngram Viewer was developed in the hope of opening a new window onto quantitative research in the humanities, and the database contained 500 billion words from 5.2 million books publicly available from the very beginning.[2][3][9]

The intended audience was scholarly, but the Google Books Ngram Viewer made it possible for anyone with a computer to easily see a graph representing the diachronic change in the use of words and phrases. Lieberman said in response to the New York Times that the developers aimed to provide even children with the ability to browse cultural trends throughout history.[9] In the Science paper, Lieberman and his collaborators called this method of high-volume data analysis in digitalized texts "culturomics".[1][9]

Commas delimit user-entered search terms, where each comma-separated term is searched in the database as an n-gram (for example, "nursery school" is a 2-gram or bigram).[6] The Ngram Viewer then returns a plotted line chart.
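For readers unfamiliar with the underlying unit, an n-gram is simply a contiguous run of n tokens, and counting occurrences is straightforward. The sketch below is illustrative only and says nothing about how Google's actual pipeline is implemented; it counts bigrams in a toy string:

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-token windows; n=2 yields bigrams such as ("nursery", "school")
    return zip(*(tokens[i:] for i in range(n)))

text = "the nursery school opened and the nursery school grew"
counts = Counter(ngrams(text.split(), 2))
print(counts[("nursery", "school")])  # -> 2
```

The Ngram Viewer performs this kind of counting per publication year across its corpus, then normalizes the counts to plot relative frequencies.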
Note that due to limitations on the size of the Ngram database, only matches found in at least 40 books are indexed.[6]

The data sets of the Ngram Viewer have been criticized for their reliance upon inaccurate optical character recognition (OCR) and for including large numbers of incorrectly dated and categorized texts.[11] Because of these errors, and because they are uncontrolled for bias[12] (such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), care must be taken in using the corpora to study language or test theories.[13] Furthermore, the data sets may not reflect general linguistic or cultural change and can only hint at such an effect, because they do not involve any metadata such as date published, author, length, or genre, to avoid any potential copyright infringements.[14]

Systemic errors like the confusion of s and f in pre-19th-century texts (due to the use of ſ, the long s, which is similar in appearance to f) can cause systemic bias.[13] Although the Google Books team claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise.[15][16]

Guidelines for doing research with data from Google Ngram have been proposed that try to address some of the issues discussed above.[17]
https://en.wikipedia.org/wiki/Google_Books_Ngram_Viewer
Cloud storage is a model of computer data storage in which data, said to be on "the cloud", is stored remotely in logical pools and is accessible to users over a network, typically the Internet. The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a cloud computing provider. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data.

Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API), or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems.

Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time.[1] In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload.[2] In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be all web-based, and referenced in their commercials as, "you can think of our electronic meeting place as the cloud."[3] Amazon Web Services introduced their cloud storage service Amazon S3 in 2006, and it has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest. In 2005, Box announced an online file sharing and personal cloud content management service for businesses.[4]

Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be used from an off-premises service (Amazon S3) or deployed on-premises (ViON Capacity Services).[5]

There are three types of cloud storage: hosted object storage services, file storage, and block storage. Each of these cloud storage types offers its own unique advantages. Examples of object storage services that can be hosted and deployed with cloud storage characteristics include Amazon S3, Oracle Cloud Storage and Microsoft Azure Storage, object storage software like OpenStack Swift, object storage systems like EMC Atmos, EMC ECS and Hitachi Content Platform, and distributed storage research projects like OceanStore[6] and VISION Cloud.[7]

Examples of file storage services include Amazon Elastic File System (EFS) and Qumulo Core,[8] used for applications that need access to shared files and require a file system. This storage is often supported with a Network Attached Storage (NAS) server, used for large content repositories, development environments, media stores, or user home directories. A block storage service like Amazon Elastic Block Store (EBS) is used for other enterprise applications like databases and often requires dedicated, low-latency storage for each host. This is comparable in certain respects to direct-attached storage (DAS) or a storage area network (SAN).

Cloud storage is:[6]

Outsourcing data storage increases the attack surface area.[17] There are several options available to avoid security issues. One option is to use a private cloud instead of a public cloud.
Another option is to ingest data in an encrypted format, where the key is held within the on-premises infrastructure. To this end, access is often by way of on-premises cloud storage gateways that have options to encrypt the data prior to transfer.[21]

Companies are not permanent, and the services and products they provide can change. Outsourcing data storage to another company needs careful investigation, and nothing is ever certain. Contracts set in stone can be worthless when a company ceases to exist or its circumstances change. Companies can:[22][23][24]

Typically, cloud storage Service Level Agreements (SLAs) do not encompass all forms of service interruptions. Exclusions typically include planned maintenance, downtime resulting from external factors such as network issues, human errors like misconfigurations, natural disasters, force majeure events, or security breaches. Typically, customers bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. Customers should be aware of how deviations from SLAs are calculated, as these parameters may vary from service to service within the same provider. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across various services within the same provider, with some services lacking any SLA altogether. In cases of service interruptions due to hardware failures at the cloud provider, service providers typically do not offer monetary compensation. Instead, eligible users may receive credits as outlined in the corresponding SLA.[26][27][28][29]

Hybrid cloud storage is a term for a storage infrastructure that uses a combination of on-premises storage resources and cloud storage. The on-premises storage is usually managed by the organization, while the public cloud storage provider is responsible for the management and security of the data stored in the cloud.[37] Hybrid cloud storage can be implemented by an on-premises cloud storage gateway that presents a file system or object storage interface that users can access in the same way they would access a local storage system. The cloud storage gateway transparently transfers the data to and from the cloud storage service, providing low-latency access to the data through a local cache.[21]

Hybrid cloud storage can be used to supplement an organization's internal storage resources, or it can be used as the primary storage infrastructure. In either case, hybrid cloud storage can provide organizations with greater flexibility and scalability than traditional on-premises storage infrastructure.[37] There are several benefits to using hybrid cloud storage, including the ability to cache frequently used data on-site for quick access, while inactive cold data is stored off-site in the cloud. This can save space, reduce storage costs and improve performance. Additionally, hybrid cloud storage can provide organizations with greater redundancy and fault tolerance, as data is stored in both on-premises and cloud storage infrastructure.[37]
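As an illustration of the encrypt-before-transfer option described above, data can be encrypted with a key that never leaves the organization, so the provider only ever stores ciphertext. The sketch below uses the Python cryptography library together with boto3 against an S3-style API; the bucket and object names are hypothetical, and a real deployment would add key management, rotation, and error handling:

```python
import boto3
from cryptography.fernet import Fernet

# The key is generated and kept on-premises; the provider only ever sees ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"sensitive business records")

# Hypothetical bucket/object names; any S3-compatible endpoint behaves the same way.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-backups", Key="records.enc", Body=ciphertext)

# Later: fetch the ciphertext and decrypt it locally with the on-premises key.
obj = s3.get_object(Bucket="example-backups", Key="records.enc")
assert cipher.decrypt(obj["Body"].read()) == b"sensitive business records"
```

A cloud storage gateway performs essentially this transformation transparently, presenting a normal file or object interface while encrypting on the way out and decrypting on the way back in.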
https://en.wikipedia.org/wiki/Cloud_storage
The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics.[1][2] The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find the best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.[2][3]

Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, to utility theory, and to optimization. Statistical theory provides an underlying rationale and a consistent basis for the choice of methodology used in applied statistics.

Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.[4]

Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error.[5][6][7] Optimization of data collection reduces the cost of data while satisfying statistical goals,[8][9] while randomization allows reliable inferences. Statistical theory thus provides a basis for good data collection and the structuring of investigations.

The task of summarising statistical data in conventional forms (also known as descriptive statistics) is considered in theoretical statistics as a problem of defining what aspects of statistical samples need to be described and how well they can be described from a typically limited sample of data.

Besides the philosophy underlying statistical inference, statistical theory has the task of considering the types of questions that data analysts might want to ask about the problems they are studying and of providing data analytic techniques for answering them.

When a statistical procedure has been specified in the study protocol, statistical theory provides well-defined probability statements for the method when applied to all populations that could have arisen from the randomization used to generate the data. This provides an objective way of estimating parameters, estimating confidence intervals, testing hypotheses, and selecting the best procedure. Even for observational data, statistical theory provides a way of calculating a value that can be used to interpret a sample of data from a population; it can provide a means of indicating how well that value is determined by the sample, and thus a means of saying whether corresponding values derived for different populations are as different as they might seem. However, the reliability of inferences from post-hoc observational data is often worse than for planned, randomized generation of data.
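For instance, a confidence interval for a mean is one of the "well-defined probability statements" referred to above. The sketch below is purely illustrative and uses the large-sample normal approximation (a t-interval would be more appropriate for so few observations).

```python
# Illustrative 95% confidence interval for a population mean.
import math
import statistics

sample = [4.1, 3.9, 4.4, 4.0, 4.2, 3.8, 4.3]
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

lo, hi = mean - 1.96 * se, mean + 1.96 * se  # z = 1.96 for 95% coverage
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```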
Statistical theory provides the basis for a number of data-analytic approaches that are common across scientific and social research, and for interpreting the data they produce. Many of the standard methods for these approaches rely on certain statistical assumptions (made in the derivation of the methodology) actually holding in practice. Statistical theory studies the consequences of departures from these assumptions. In addition, it provides a range of robust statistical techniques that are less dependent on assumptions, and it provides methods for checking whether particular assumptions are reasonable for a given data set.
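A small numerical illustration of why such robust techniques matter: a single gross error moves the sample mean far more than it moves the median.

```python
# The median is robust to a single outlier; the mean is not.
import statistics

clean = [10.1, 9.9, 10.0, 10.2, 9.8]
contaminated = clean + [100.0]  # one gross recording error

print(statistics.mean(clean), statistics.mean(contaminated))      # 10.0 vs 25.0
print(statistics.median(clean), statistics.median(contaminated))  # 10.0 vs 10.05
```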
https://en.wikipedia.org/wiki/Statistical_theory
In Indian mathematics, a Vedic square is a variation on a typical 9 × 9 multiplication table where the entry in each cell is the digital root of the product of the column and row headings, i.e. the remainder when the product of the row and column headings is divided by 9 (with remainder 0 represented by 9). Numerous geometric patterns and symmetries can be observed in a Vedic square, some of which can be found in traditional Islamic art.

The Vedic square can be viewed as the multiplication table of the monoid $((\mathbb{Z}/9\mathbb{Z})^{\times}, \{1, \circ\})$, where $\mathbb{Z}/9\mathbb{Z}$ is the set of positive integers partitioned by the residue classes modulo nine (the operator $\circ$ refers to the abstract "multiplication" between the elements of this monoid). If $a, b$ are elements of $((\mathbb{Z}/9\mathbb{Z})^{\times}, \{1, \circ\})$, then $a \circ b$ can be defined as $(a \times b) \bmod 9$, where the element 9 represents the residue class of 0 rather than the traditional choice of 0.

This does not form a group, because not every non-zero element has a corresponding inverse element; for example $6 \circ 3 = 9$, but there is no $a \in \{1, \cdots, 9\}$ such that $9 \circ a = 6$.

The subset $\{1, 2, 4, 5, 7, 8\}$ forms a cyclic group with 2 as one choice of generator; this is the group of multiplicative units in the ring $\mathbb{Z}/9\mathbb{Z}$. Every column and row of the corresponding subtable includes all six numbers, so this subset forms a Latin square.

A Vedic cube is defined as the layout of each digital root in a three-dimensional multiplication table.[2]

Vedic squares with a higher radix (or number base) can be calculated to analyse the symmetric patterns that arise, using the calculation $(a \times b) \bmod (\text{base} - 1)$. The images in this section are color-coded so that the digital root of 1 is dark and the digital root of (base - 1) is light.
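The construction is easy to reproduce. The short sketch below (the helper names are ours) builds the 9 × 9 square from digital roots and checks the Latin-square property of the rows and columns headed by units modulo 9.

```python
# Build the 9x9 Vedic square: entry (r, c) is the digital root of r*c,
# i.e. (r*c) mod 9 with remainder 0 written as 9.
def digital_root(n, base=10):
    return (n - 1) % (base - 1) + 1  # valid for n >= 1

square = [[digital_root(r * c) for c in range(1, 10)] for r in range(1, 10)]
for row in square:
    print(*row)

# Rows and columns headed by the units {1,2,4,5,7,8} each contain
# exactly those six values, i.e. the subtable is a Latin square.
units = {1, 2, 4, 5, 7, 8}
assert all({square[r - 1][c - 1] for c in units} == units for r in units)
```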
https://en.wikipedia.org/wiki/Vedic_square
Accelerationism is a range of revolutionary and reactionary ideologies that call for the drastic intensification of capitalist growth, technological change, and other processes of social change to destabilize existing systems and create radical social transformations, referred to as "acceleration".[1][2][3][4][5] It has been regarded as an ideological spectrum divided into mutually contradictory left-wing and right-wing variants, both of which support dramatic changes to capitalism and its structures as well as the conditions for a technological singularity, a hypothetical point in time at which technological growth becomes uncontrollable and irreversible.[6][7][8][9] It aims to analyze and subsequently promote the social, economic, cultural, and libidinal forces that constitute the process of acceleration.[10][6]

Ideas such as Gilles Deleuze and Félix Guattari's concept of deterritorialization, Jean Baudrillard's proposals for "fatal strategies", and various ideas of Nick Land are crucial influences on accelerationism. Such ideas gave rise to the Cybernetic Culture Research Unit (CCRU), a philosophy collective at the University of Warwick, in the 1990s, promoting the use of capitalism to dissolve existing social structures and reach a singularity. In the late 2000s and early 2010s, the movement gained a resurgence, producing numerous variants and interpretations as well as a few published works.

The term has also, in a manner strongly distinguished from the original accelerationist theorists, increasingly been used by right-wing extremists such as neo-fascists, neo-Nazis, white nationalists and white supremacists to refer to an "acceleration" of racial conflict through assassinations, murders and terrorist attacks as a means to violently achieve a white ethnostate.[11][12][13][14]

The term "accelerationism" was first used in sci-fi author Roger Zelazny's third novel, 1967's Lord of Light.[1][15] It was later popularized by professor and author Benjamin Noys in his 2010 book The Persistence of the Negative to describe the trajectory of certain post-structuralists who embraced unorthodox Marxist and counter-Marxist overviews of capitalist growth, such as Gilles Deleuze and Félix Guattari in their 1972 book Anti-Oedipus, Jean-François Lyotard in his 1974 book Libidinal Economy and Jean Baudrillard in his 1976 book Symbolic Exchange and Death.[16]

English right-wing philosopher and writer Nick Land, commonly credited with creating and inspiring accelerationism's basic ideas and concepts,[1][17] cited a number of philosophers who expressed anticipatory accelerationist attitudes in his 2017 essay "A Quick-and-Dirty Introduction to Accelerationism".[18][19] Firstly, Friedrich Nietzsche argued in a fragment in The Will to Power that "the leveling process of European man is the great process which should not be checked: one should even accelerate it."[20] Taking inspiration from this notion for Anti-Oedipus, Deleuze and Guattari speculated further on an unprecedented "revolutionary path" to perpetuate capitalism's tendencies that would later become a central idea of accelerationism:

But which is the revolutionary path? Is there one?—To withdraw from the world market, as Samir Amin advises Third World countries to do, in a curious revival of the fascist "economic solution"? Or might it be to go in the opposite direction? To go still further, that is, in the movement of the market, of decoding and deterritorialization?
For perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and a practice of a highly schizophrenic character. Not to withdraw from the process, but to go further, to "accelerate the process," as Nietzsche put it: in this matter, the truth is that we haven't seen anything yet.

Land also cited Karl Marx, who, in his 1848 speech "On the Question of Free Trade", anticipated accelerationist principles a century before Deleuze and Guattari by describing free trade as socially destructive and fuelling class conflict, then effectively arguing for it:

But, in general, the protective system of our day is conservative, while the free trade system is destructive. It breaks up old nationalities and pushes the antagonism of the proletariat and the bourgeoisie to the extreme point. In a word, the free trade system hastens the social revolution. It is in this revolutionary sense alone, gentlemen, that I vote in favor of free trade.

Nick Srnicek and Alex Williams, prominent left accelerationists, additionally credit Vladimir Lenin with recognizing capitalist progress as important in the subsequent functioning of socialism:[7][23]

Socialism is inconceivable without large-scale capitalist engineering based on the latest discoveries of modern science. It is inconceivable without planned state organisation which keeps tens of millions of people to the strictest observance of a unified standard in production and distribution. We Marxists have always spoken of this, and it is not worth while wasting two seconds talking to people who do not understand even this (anarchists and a good half of the Left Socialist-Revolutionaries).

Robin Mackay, co-editor of #Accelerate: The Accelerationist Reader and a former CCRU member, additionally cites Russian cosmism, science fiction (particularly Terminator, Predator, and Blade Runner), cyberpunk, 90s cyberculture, and electronic music as influences on the movement.[6] Iain Hamilton Grant, another former CCRU member, stated "Neuromancer got into the philosophy department, and it went viral. You'd find worn-out paperbacks all over the common room."[1]

The Cybernetic Culture Research Unit (CCRU), a philosophy collective at the University of Warwick which included Land, Mackay, and Grant, was one of the most significant parts of the movement.[1][24][6][25] Mark Fisher, another former member, described the CCRU's accelerationism as "a kind of exuberant anti-politics, a 'technihilo' celebration of the irrelevance of human agency, partly inspired by the pro-markets, anti-capitalism line developed by Manuel DeLanda out of Braudel, and from the section of Anti-Oedipus that talks about marketization as the 'revolutionary path'."[26] Other significant members include Sadie Plant and Ray Brassier. The group stood in stark opposition to the University of Warwick and traditional left-wing academia,[1][26] with Mackay stating "I don't think Land has ever pretended to be left-wing! He's a serious philosopher and an intelligent thinker, but one who has always loved to bait the left by presenting the 'worst' possible scenario with great delight…!"[6] As Land became a stronger influence on the group and left the University of Warwick, they shifted to more unorthodox and occult ideas.
Land suffered a breakdown from his amphetamine abuse and disappeared in the early 2000s, with the CCRU vanishing along with him.[1]

The Guardian has referred to #Accelerate: The Accelerationist Reader, a 2014 anthology edited by Robin Mackay and Armen Avanessian, as "the only proper guide to the movement in existence." They also described Fanged Noumena, a 2011 anthology of Land's work, as "contain[ing] some of accelerationism's most darkly fascinating passages."[1] In 2015, Urbanomic and Time Spiral Press published Writings 1997-2003 as a complete collection of known texts published under the CCRU name, besides those that have been irrecoverably lost or attributed to a specific member. However, it is not actually complete, as some known works under the CCRU name are not included, such as those in #Accelerate: The Accelerationist Reader.[27][28]

In "A Quick-and-Dirty Introduction to Accelerationism", Land attributed the increasing speed of the modern world, along with the associated decrease in time available to think and make decisions about its events, to unregulated capitalism and its ability to exponentially grow and self-improve, describing capitalism as "a positive feedback circuit, within which commercialization and industrialization mutually excite each other in a runaway process." He argued that the best way to deal with capitalism is to participate more, to foster even greater exponential growth and self-improvement via creative destruction, accelerating technological progress along with it. Land also argued that such acceleration is intrinsic to capitalism but impossible for non-capitalist systems, stating that "capital revolutionizes itself more thoroughly than any extrinsic 'revolution' possibly could."[19] In an interview with Vox, he stated "Modernity has Capitalism (the self-escalating techno-commercial complex) as its motor. Our question was what 'the process' wants (i.e. spontaneously promotes) and what resistances it provokes." He also said that "the assumption" behind accelerationism was that "the general direction of [techno-capitalist] self-escalating change was toward decentralization."[25] Mackay summarized Land's position as "since capitalism tends to dissolve hereditary social forms and restrictions [...], it is seen as the engine of exploration into the unknown. So to be 'on the side of intelligence' is to totally abandon all caution with respect to the disintegrative processes of capital and whatever reprocessing of the human and of the planet they might involve."[6] This view has been referred to as "right-accelerationism".[6][19]

Vincent Le considers Land's philosophy to oppose anthropocentrism, citing his early critique of transcendental idealism and capitalism in "Kant, Capital, and the Prohibition of Incest". According to Le, Land opposes philosophies which deny a reality beyond humans' conceptual experience, instead viewing death as a way to grasp the Real. This would remain as Land's views on capitalism changed after reading Deleuze and Guattari, with Le stating "Although the mature Land abandons his left-wing critique of capitalism by immersing himself in the study of cybernetics, he will never shake his contempt for anthropocentrism, and his remedy that philosophers can only access the true at the edge of our humanity."[29]

In "Meltdown", a CCRU work and one of the writings compiled in Fanged Noumena, Land envisioned a technocapital singularity in China, resulting in revolutions in artificial intelligence, human enhancement, biotechnology, and nanotechnology.
This upends the previous status quo, and the former first world countries struggle to maintain control and stop the singularity, verging on collapse. He described new anti-authoritarian movements performing a bottom-up takeover of institutions through means like biological warfare enhanced with DNA computing. He claimed that capitalism's tendency towards optimization of itself and technology, in service of consumerism, will lead to the enhancement and eventually the replacement of humanity with technology, asserting that "nothing human makes it out of the near-future." Eventually, the self-development of technology will culminate in the "melting [of] Terra into a seething K-pulp (which unlike grey goo synthesizes microbial intelligence as it proliferates)." He also criticized traditional philosophy as tending towards despotism, instead praising Deleuzoguattarian schizoanalysis as "already engaging with nonlinear nano-engineering runaway in 1972."[30][31] Le states that Land embraces human extinction in the singularity, as the resulting hyperintelligent AI will come to fully comprehend and embody the Real of the body without organs, free of human distortions of reality.[29]

Land has continually praised China's economic policy as being accelerationist, moving to Shanghai and working as a journalist writing material that has been characterized as pro-government propaganda.[1][30][31][25] He has also spoken highly of Deng Xiaoping and Singapore's Lee Kuan Yew,[25] calling Lee an "autocratic enabler of freedom."[32] Yuk Hui stated "Land's celebration of Asian cities such as Shanghai, Hong Kong, and Singapore is simply a detached observation of these places that projects onto them a common will to sacrifice politics for productivity."[33]

Land's involvement in the neoreactionary movement has contributed to his views on accelerationism. In The Dark Enlightenment, he advocates for a form of capitalist monarchism, with states controlled by a CEO. He views democratic and egalitarian policies as only slowing down acceleration and the technocapital singularity, stating "Beside the speed machine, or industrial capitalism, there is an ever more perfectly weighted decelerator [...] comically, the fabrication of this braking mechanism is proclaimed as progress.
It is the Great Work of the Left."[25][34] He has advocated for accelerationists to support the neoreactionary movement, though many have distanced themselves from him in response to his views on race.[1]

Left-wing accelerationism (also referred to as left-accelerationism or L/Acc) is often attributed to Mark Fisher.[35] Left-wing accelerationism seeks to explore, in an orthodox and conventional manner, how modern society has the momentum to create futures that are equitable and liberatory.[36][failed verification] While both strands of accelerationist thinking remain rooted in a similar range of thinkers, left accelerationism appeared with the intent to use technology for the goal of achieving an egalitarian future.[35][34] Fisher, writing on his blog k-punk, had become increasingly disillusioned with capitalism as an accelerationist,[1] citing working in the public sector in Blairite Britain, being a teacher and trade union activist, and an encounter with Slovenian philosopher Slavoj Žižek, whom he considered to be using similar concepts to the CCRU but from a leftist perspective.[26] At the same time, he became frustrated with traditional left-wing politics, believing it was ignoring technology that it could exploit.[1]

In "Terminator vs Avatar", Fisher claimed that while Marxists criticized Libidinal Economy for asserting that workers enjoyed the upending of primitive social orders, nobody truly wants to return to those. Therefore, rather than reverting to pre-capitalism, society must move through and beyond capitalism. Fisher praised Land's attacks on the academic left, describing the academic left as "careerist sandbaggers" and "a ruthless protection of petit bourgeois interests dressed up as politics." He also critiqued Land's interpretation of Deleuze and Guattari, stating that while superior in many ways, "his deviation from their understanding of capitalism is fatal" in assuming no reterritorialization, resulting in not foreseeing that capitalism provides "a simulation of innovation and newness that cloaks inertia and stasis." Citing Fredric Jameson's interpretation of The Communist Manifesto as "see[ing] capitalism as the most productive moment of history and the most destructive at the same time", he argued for accelerationism as an anti-capitalist strategy, criticizing the left's moral critique of capitalism and its "tendencies towards Canutism" as only helping the narrative that capitalism is the only viable system.[37]

Nick Srnicek befriended Fisher, sharing similar views, and the 2008 financial crisis, along with dissatisfaction with the left's "ineffectual" response of the Occupy protests, led to Srnicek co-writing "#Accelerate: Manifesto for an Accelerationist Politics" with Alex Williams in 2013.[1][23] They posited that capitalism was the most advanced economic system of its time, but has since stagnated and is now constraining technology, with neoliberalism only worsening its crises. At the same time, they considered the modern left to be "unable to devise a new political ideological vision", as it is too focused on localism and direct action and cannot adapt to make meaningful change. They advocated using existing capitalist infrastructure as "a springboard to launch towards post-capitalism", taking advantage of capitalist technological and scientific advances to experiment with things like economic modeling in the style of Project Cybersyn.
They also advocated for "collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality" and for attaining resources and funding for political infrastructure, contrasting this with standard leftist political action, which they deem ineffective. Moving past the constraints of capitalism would result in a resumption of technological progress, not only creating a more rational society but also "recovering the dreams which transfixed many from the middle of the Nineteenth Century until the dawn of the neoliberal era, of the quest of Homo Sapiens towards expansion beyond the limitations of the earth and our immediate bodily forms."[1][7][23] They expanded further in Inventing the Future, which, while dropping the term "accelerationism", pushed for automation, reduction and distribution of working hours, universal basic income, and a diminished work ethic.[1][38]

Land rebuked its ideas in a 2017 interview with The Guardian, stating "the notion that self-propelling technology is separable from capitalism is a deep theoretical error."[1]

Effective accelerationism (abbreviated to e/acc) takes influence from effective altruism, a movement to maximize good by calculating what actions provide the greatest overall good and prioritizing those rather than focusing on personal interest or proximity. Proponents advocate for unrestricted technological progress "at all costs", believing that artificial general intelligence will solve universal human problems like poverty, war, and climate change, while deceleration and stagnation of technology pose a greater risk than any posed by AI. For example, James Brusseau advocates reconfiguring AI ethics to promote acceleration, arguing that problems caused by AI innovation are to be resolved by still more innovation as opposed to limiting or slowing the technology.[39] This contrasts with effective altruism (referred to as "longtermism" to distinguish it from e/acc), which tends to consider uncontrolled AI to be the greater existential risk and advocates for government regulation and careful alignment.[40][41]

In a critique, Italian Marxist Franco Berardi considered acceleration "the essential feature of capitalist growth" and characterized accelerationism as "point[ing] out the contradictory implications of the process of intensification, emphasizing in particular the instability that acceleration brings into the capitalist system." However, he also stated "my answer to the question of whether acceleration marks a final collapse of power is quite simply: no. Because the power of capital is not based on stability." He posited that the "accelerationist hypothesis" is based on two assumptions: that accelerating production cycles make capitalism unstable, and that potentialities within capitalism will necessarily deploy themselves.
He criticized the first by stating "capitalism is resilient because it does not need rational government, only automatic governance", and the second by arguing that while the possibility exists, it is not guaranteed to happen, as it can still be slowed or stopped.[42]

Benjamin Noys is a staunch critic of accelerationism, initially calling it "Deleuzian Thatcherism".[34] He accuses it of offering false solutions to technological and economic problems, considering those solutions "always promised and always just out of reach."[1][43] He has also said "Capitalism, for the accelerationist, bears down on us as accelerative liquid monstrosity, capable of absorbing us and, for Land, we must welcome this."[34][43]

In The Question Concerning Technology in China, Yuk Hui critiqued accelerationism, particularly Ray Brassier's "Prometheanism and its Critics" from #Accelerate: The Accelerationist Reader, stating "if such a response to technology and capitalism is applied globally, [...] it risks perpetuating a more subtle form of colonialism." He argues that accelerationism tries to universally apply a Western conception of technology based on Prometheus despite other cultures having different myths and relations to technology.[44] Further critiquing Westernization, globalization, and the loss of non-Western technological thought, he has also referred to Deng Xiaoping as "the world's greatest accelerationist" due to his economic reforms, considering them an acceleration of the modernization process which started in the aftermath of the Opium Wars and intensified with the Cultural Revolution.[33] In "A Politics of Intensity: Some Aspects of Acceleration in Simondon and Deleuze", Yuk Hui and Louis Morelle analyzed Deleuze and Simondon from an accelerationist perspective.[45]

Slavoj Žižek considers accelerationism to be "far too optimistic", critiquing it as retroactively deterministic and contrasting it with Freud's death drive and its lack of a final conclusion. He argues that accelerationism considers just one conclusion of the world's tendencies and fails to find other "coordinates" of the world order.[46]

Benjamin H. Bratton's book The Stack: On Software and Sovereignty has been described as concerning accelerationist ideas, focusing on how information technology infrastructures undermine modern political geographies and proposing an open-ended "design brief". Tiziana Terranova's "Red Stack Attack!" links Bratton's stack model and left-wing accelerationism.[47]

Laboria Cuboniks, a feminist group, advocated for the use of technology for gender abolition in "Xenofeminism: A Politics for Alienation", which has been described as "regrounding left accelerationism in its cyberfeminist antecedents."[48] Aria Dean, proposing an alternative to both right and left accelerationism, synthesized racial capitalism with accelerationism in "Notes on Blacceleration", arguing that the binary between humans and capital is already blurred by the scars of the Atlantic slave trade.[49]

Since "accelerationism" was coined in 2010, the term has taken on several new meanings. Several commentators have used the label accelerationist to describe a controversial political strategy articulated by Slavoj Žižek.[50][51] An often-cited example of this is Žižek's assertion in a November 2016 interview with Channel 4 News that were he an American citizen, he would vote for U.S.
president Donald Trump as the candidate more likely to disrupt the political status quo in that country.[52] Steven Shaviro described variants that "embrace the idea that the worse things get, the better the prospect for a revolution to overthrow everything", though he considers them very rare.[53] Mackay noted a misconception that accelerationism involves a Marxist "acceleration of contradictions" within capitalism and stated that no accelerationist authors have advocated such a thing.[6] Chinese dissidents have referred to Xi Jinping as "Accelerator-in-Chief" (referencing state media calling Deng Xiaoping "Architect-in-Chief of Reform and Opening"), believing that Xi's authoritarianism is hastening the demise of the Chinese Communist Party and that, because it is beyond saving, they should allow it to destroy itself in order to create a better future.[54]

Despite its originally Marxist philosophical and theoretical interests, since the late 2010s, international networks of neo-fascists, neo-Nazis, white nationalists, and white supremacists have increasingly used the term "accelerationism" to refer to right-wing extremist goals, and have been known to refer to an "acceleration" of racial conflict through violent means such as assassinations, murders, terrorist attacks and eventual societal collapse as a way to achieve the building of a white ethnostate.[12][13][14] Far-right accelerationism has been widely considered detrimental to public safety.[55] The inspiration for this distinct variation is occasionally cited as American Nazi Party and National Socialist Liberation Front member James Mason's newsletter Siege, where he argued for sabotage, mass killings, and assassinations of high-profile targets to destabilize and destroy the current society, seen as a system upholding a Jewish and multicultural New World Order.[12] His works were republished and popularized by the Iron March forum and Atomwaffen Division, right-wing extremist organizations strongly connected to various terrorist attacks, murders, and assaults.[12][56][57][58] Far-right accelerationists have also been known to attack critical infrastructure, particularly the power grid, attempting to cause a collapse of the system or believing that 5G was causing COVID-19, with some encouraging promotion of 5G conspiracy theories as easier than convincing potential recruits that the Holocaust never happened.[59][60] According to the Southern Poverty Law Center (SPLC), which tracks hate groups and files class action lawsuits against discriminatory organizations and entities, "on the case of white supremacists, the accelerationist set sees modern society as irredeemable and believe it should be pushed to collapse so a fascist society built on ethnonationalism can take its place. What defines white supremacist accelerationists is their belief that violence is the only way to pursue their political goals."[58]

Brenton Harrison Tarrant, the perpetrator of the 15 March 2019 Christchurch mosque shootings that killed 51 people and injured 49 others, strongly encouraged right-wing accelerationism in a section of his manifesto titled "Destabilization and Accelerationism: Tactics". Tarrant's manifesto influenced John Timothy Earnest, the perpetrator of both the 24 March 2019 Escondido mosque fire at Dar-ul-Arqam Mosque in Escondido, California, and the 27 April 2019 Poway synagogue shooting which resulted in one dead and three injured; and it also influenced Patrick Crusius, the perpetrator of the 3 August 2019 El Paso Walmart shooting that killed 23 people and injured 23 others.
Tarrant and Earnest, in turn, influenced Juraj Krajčík, the perpetrator of the 2022 Bratislava shooting that left two patrons of a gay bar dead.[61][12][25] Sich Battalion urged its members to buy a copy of Tarrant's manifesto, encouraging them to "get inspired" by it.[62]

Vox pointed to Land's shift towards neoreaction, along with the neoreactionary movement crossing paths with the alt-right as another fringe right-wing internet movement, as the likely connection point between far-right racial accelerationism and the term for Land's otherwise unrelated technocapitalist ideas. They cited a 2018 Southern Poverty Law Center investigation which found users on the neo-Nazi blog The Right Stuff who cited neoreaction as an influence.[25] Land himself became interested in the Atomwaffen-affiliated theistic Satanist organization Order of Nine Angles (ONA), which adheres to the ideology of neo-Nazi terrorist accelerationism, describing the ONA's works as "highly-recommended" in a blog post.[63] Since the 2010s, the political ideology and religious worldview of the Order of Nine Angles, founded by the British neo-Nazi leader David Myatt in 1974,[12] have increasingly influenced militant neo-fascist and neo-Nazi insurgent groups associated with right-wing extremist and white supremacist international networks,[12] most notably the Iron March forum.[12]
https://en.wikipedia.org/wiki/Accelerationism
A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, sub-classes, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, will highlight the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can then assert additional information based on the input; for example, it can add information about existing classes, create additional classes, etc. This differs from traditional inference engines that trigger off of IF-THEN conditions in rules. Classifiers are also similar to theorem provers in that they take input and produce output via first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form a part of the enabling technology of the Semantic Web. Modern classifiers leverage the Web Ontology Language. The models they analyze and generate are called ontologies.[1]

A classic problem in knowledge representation for artificial intelligence is the trade-off between the expressive power and the computational efficiency of the knowledge representation system. The most powerful form of knowledge representation is first-order logic. However, it is not possible to implement knowledge representation that provides the complete expressive power of first-order logic. Such a representation would include the capability to represent concepts such as the set of all integers, which is impossible to iterate through: implementing an assertion quantified over an infinite set by definition results in an undecidable, non-terminating program. The problem is deeper than not being able to implement infinite sets, however. As Levesque demonstrated, the closer a knowledge representation mechanism comes to first-order logic, the more likely it is to result in expressions that require infinite or unacceptably large resources to compute.[2]

As a result of this trade-off, a great deal of early work on knowledge representation for artificial intelligence involved experimenting with various compromises that provide a subset of first-order logic with acceptable computation speeds. One of the first and most successful compromises was to develop languages based predominantly on modus ponens, i.e. IF-THEN rules. Rule-based systems were the predominant knowledge representation mechanism for virtually all early expert systems. Rule-based systems provided acceptable computational efficiency while still providing powerful knowledge representation. Also, rules were highly intuitive to knowledge workers. Indeed, one of the data points that encouraged researchers to develop rule-based knowledge representation was psychological research showing that humans often represent complex logic via rules.[3]

However, after the early success of rule-based systems there arose more pervasive use of frame languages, instead of or more often combined with rules. Frames provided a more natural way to represent certain types of concepts, especially concepts in subpart or subclass hierarchies. This led to the development of a new kind of inference engine known as a classifier. A classifier could analyze a class hierarchy (also known as an ontology) and determine whether it was valid. If the hierarchy was invalid, the classifier would highlight the inconsistent declarations.
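A toy sketch of that behavior, with everything invented for illustration (real classifiers operate over description logics, not Python dictionaries): classes are declared with subclass links plus disjointness axioms, and the "classifier" flags any class subsumed by two disjoint classes.

```python
# Minimal consistency check over frame-style declarations.
subclass_of = {
    "Dog": {"Mammal"},
    "Mammal": {"Animal"},
    "Robot": {"Machine"},
    "RoboDog": {"Dog", "Robot"},   # deliberately inconsistent
}
disjoint = [("Animal", "Machine")]

def ancestors(cls):
    """All classes that (transitively) subsume cls."""
    seen, stack = set(), [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

for cls in subclass_of:
    ups = ancestors(cls) | {cls}
    for a, b in disjoint:
        if a in ups and b in ups:
            print(f"inconsistent: {cls} is subsumed by disjoint classes {a} and {b}")
```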
For a language to utilize a classifier, it requires a formal foundation. The first language to successfully demonstrate a classifier was the KL-ONE family of languages. The LOOM language from ISI was heavily influenced by KL-ONE. LOOM was also influenced by the rising popularity of object-oriented tools and environments, and provided a true object-oriented capability (e.g. message passing) in addition to frame language capabilities.

Classifiers play a significant role in the vision for the next-generation Internet known as the Semantic Web. The Web Ontology Language provides a formalism that can be validated and reasoned on via classifiers such as HermiT and FaCT++.[4]

The earliest versions of classifiers were logic theorem provers. The first classifier to work with a frame language was the KL-ONE classifier.[5][6] A later system built on Common Lisp was LOOM from the Information Sciences Institute, which provided true object-oriented capabilities leveraging the Common Lisp Object System, along with a frame language.[7] In the Semantic Web, the Protégé tool from Stanford provides classifiers (also known as reasoners) as part of the default environment.[8]
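For a concrete (if hedged) example of driving a real reasoner, the owlready2 Python package can define a small ontology and invoke its bundled HermiT classifier; the ontology IRI and class names below are invented, and an installed Java runtime is assumed.

```python
# Sketch: classify a tiny ontology with HermiT via owlready2.
from owlready2 import Thing, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/toy.owl")

with onto:
    class Animal(Thing): pass
    class Machine(Thing): pass
    class Dog(Animal): pass

sync_reasoner()           # runs HermiT and re-asserts the inferred hierarchy
print(Dog.ancestors())    # includes Animal and Thing
```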
https://en.wikipedia.org/wiki/Deductive_classifier
The Curta is a hand-held mechanical calculator designed by Curt Herzstark.[1] It is known for its extremely compact design: a small cylinder that fits in the palm of the hand. It was affectionately known as the "pepper grinder" or "peppermill" due to its shape and means of operation; its superficial resemblance to a certain type of hand grenade also earned it the nickname "math grenade".[2][failed verification]

Curtas were considered the best portable calculators available until they were displaced by electronic calculators in the 1970s.[1]

The Curta was conceived by Curt Herzstark in the 1930s in Vienna, Austria. By 1938, he had filed a key patent covering his complemented stepped drum.[3][4] This single drum replaced the multiple drums, typically around 10 or so, of contemporary calculators, and it enabled not only addition but also subtraction through nines' complement math, essentially subtracting by adding. The nines' complement breakthrough eliminated the significant mechanical complexity created when "borrowing" during subtraction. This drum was the key to miniaturizing the Curta.

His work on the pocket calculator stopped in 1938 when the Nazis forced him and his company to concentrate on manufacturing precision instruments for the German army.[5]

Herzstark, the son of a Catholic mother and Jewish father, was taken into custody in 1943 and eventually sent to Buchenwald concentration camp, where he was encouraged to continue his earlier research:

While I was imprisoned inside Buchenwald I had, after a few days, told the [people] in the work production scheduling department of my ideas. The head of the department, Mr. Munich said, 'See, Herzstark, I understand you've been working on a new thing, a small calculating machine. Do you know, I can give you a tip. We will allow you to make and draw everything. If it is really worth something, then we will give it to the Führer as a present after we win the war. Then, surely, you will be made an Aryan.' For me, that was the first time I thought to myself, my God, if you do this, you can extend your life. And then and there I started to draw the CURTA, the way I had imagined it.

In the camp, Herzstark was able to develop working drawings for a manufacturable device. Buchenwald was liberated by U.S. troops on 11 April 1945, and by November Herzstark had located a factory in Sommertal, near Weimar, whose machinists were skilled enough to produce three working prototypes.[6]

Soviet forces had arrived in July, and Herzstark feared being sent to Russia, so, later that same month, he fled to Austria. He began to look for financial backers, at the same time filing continuing patents as well as several additional patents to protect his work. Franz Joseph II, Prince of Liechtenstein, eventually showed interest in the manufacture of the device, and soon a newly formed company, Contina AG Mauren, began production in Liechtenstein. It was not long before Herzstark's financial backers, thinking they had got from him all they needed, conspired to force him out by reducing the value of all of the company's existing stock to zero, including his one-third interest.[1] These were the same people who had earlier elected not to have Herzstark transfer ownership of his patents to the company, so that, should anyone sue, they would be suing Herzstark, not the company, thereby protecting themselves at Herzstark's expense. This ploy now backfired: without the patent rights, they could manufacture nothing. Herzstark was able to negotiate a new agreement, and money continued to flow to him.
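The nines'-complement arithmetic mentioned above is easy to sketch numerically: on a fixed-width register, subtraction becomes addition of the complement plus one, with the overflow carry discarded. The register width below is arbitrary, and the mechanics of the actual drum are of course elided.

```python
# Subtract by adding, on a 6-digit register, via the nines' complement.
WIDTH = 6

def nines_complement(n):
    return (10**WIDTH - 1) - n   # e.g. 000256 -> 999743

def subtract_by_adding(a, b):
    # a - b == a + complement(b) + 1, dropping the carry out of the register
    return (a + nines_complement(b) + 1) % 10**WIDTH

print(subtract_by_adding(421, 256))   # 165
```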
The Curta lives on as a highly popular collectible, with thousands of machines working just as smoothly as they did at the time of their manufacture.[1][6][7]

An estimated 140,000 Curta calculators were made (80,000 Type I and 60,000 Type II). According to Curt Herzstark, the last Curta was produced in 1972.[6]

The Curta Type I was sold for $125 in the later years of production, and the Type II was sold for $175. While only 3% of Curtas were returned to the factory for warranty repair,[6] a small but significant number of buyers returned their Curtas in pieces, having attempted to disassemble them. Reassembling the machine was far more difficult, requiring intimate knowledge of the orientation of, and installation order for, each part and sub-assembly, plus special guides designed to hold the pieces in place during assembly. Many identical-looking parts, each with slightly different dimensions, required test fitting and selection, as well as special tools, to adjust to design tolerances.[8]

The machines have high curiosity value; in 2016 they sold for around US$1,000, but buyers paid as much as US$1,900 for models in pristine condition with notable serial numbers.[5]

The Curta's design is a descendant of Gottfried Leibniz's Stepped Reckoner and Charles Thomas's Arithmometer, accumulating values on cogs, which are added or complemented by a stepped drum mechanism. Numbers are entered using slides (one slide per digit) on the side of the device. The revolution counter and result counter reside around the shiftable carriage, at the top of the machine. A single turn of the crank adds the input number to the result counter, at any carriage position, and increments the corresponding digit of the revolution counter. Pulling the crank upwards slightly before turning performs a subtraction instead of an addition. Multiplication, division, and other functions require a series of crank and carriage-shifting operations.

The Type I Curta has eight digits for data entry (known as "setting sliders"), a six-digit revolution counter, and an eleven-digit result counter. According to the advertising literature, it weighs only 8 ounces (230 g); serial number 70154, produced in 1969, weighs 245 grams (8.6 oz). The larger Type II Curta, introduced in 1954, has eleven digits for data entry, an eight-digit revolution counter, and a fifteen-digit result counter.[9]

The Curta was popular among contestants in sports car rallies during the 1960s, 1970s and into the 1980s. Even after the introduction of the electronic calculator for other purposes, they were used in time-speed-distance (TSD) rallies to aid in computation of times to checkpoints, distances off-course and so on, since the early electronic calculators did not fare well with the bounces and jolts of rallying.[1]

The Curta was also favored by commercial and general-aviation pilots before the advent of electronic calculators because of its precision and the user's ability to confirm the accuracy of their manipulations via the revolution counter. Because calculations such as weight and balance are critical for safe flight, precise results free of pilot error are essential.

The Curta calculator is very popular among collectors and can be purchased on many platforms. The Swiss entrepreneur and collector Peter Regenass holds a large collection of mechanical calculators, among them over 100 Curta calculators.
A part of his collection is on display at the Enter Museum in Solothurn, Switzerland. In 2016 he donated a Curta calculator to the Yad Vashem Museum in Jerusalem.[10]

The Curta plays a role in William Gibson's Pattern Recognition (2003) as a piece of historic computing machinery as well as a crucial "trade" item.

In 2016 a Curta was designed by Marcus Wu that could be produced on a 3D printer.[11] The Curta's fine tolerances were beyond the ability of printer technology of 2017 to produce to scale, so the printed Curta was about the size of a coffee can and weighed about three pounds.[12]
https://en.wikipedia.org/wiki/Curta
In mathematics, differential algebra is, broadly speaking, the area of mathematics consisting in the study of differential equations and differential operators as algebraic objects, in view of deriving properties of differential equations and operators without computing the solutions, similarly to the way polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.

More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations.[1][2][3]

A natural example of a differential field is the field of rational functions in one variable over the complex numbers, $\mathbb{C}(t)$, where the derivation is differentiation with respect to $t$. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.

Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations.[4] His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra.[5][6][2] Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups.[1]

A derivation $\partial$ on a ring $R$ is a function $\partial : R \to R$ such that $\partial(r_1 + r_2) = \partial r_1 + \partial r_2$ and $\partial(r_1 r_2) = (\partial r_1)\, r_2 + r_1\, \partial r_2$ (the Leibniz product rule) for every $r_1$ and $r_2$ in $R$. A derivation is linear over the integers, since these identities imply $\partial(0) = \partial(1) = 0$ and $\partial(-r) = -\partial(r)$.

A differential ring is a commutative ring $R$ equipped with one or more derivations that commute pairwise; that is, $\partial_1(\partial_2(r)) = \partial_2(\partial_1(r))$ for every pair of derivations and every $r \in R$.[7] When there is only one derivation, one often talks of an ordinary differential ring; otherwise, one talks of a partial differential ring.

A differential field is a differential ring that is also a field. A differential algebra $A$ over a differential field $K$ is a differential ring that contains $K$ as a subring such that the restriction to $K$ of the derivations of $A$ equals the derivations of $K$. (A more general definition is given below, which covers the case where $K$ is not a field, and is essentially equivalent when $K$ is a field.)

A Witt algebra is a differential ring that contains the field $\mathbb{Q}$ of the rational numbers. Equivalently, this is a differential algebra over $\mathbb{Q}$, since $\mathbb{Q}$ can be considered as a differential field on which every derivation is the zero function.
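As a quick sanity check of the two defining identities of a derivation, additivity and the Leibniz rule, in the differential field $\mathbb{C}(t)$ (a sympy sketch, illustrative only):

```python
# Verify that d/dt is additive and satisfies the Leibniz product rule.
import sympy as sp

t = sp.symbols('t')
f = 1 / (t**2 + 1)
g = t**3 - 2*t
d = lambda h: sp.diff(h, t)

assert sp.simplify(d(f + g) - (d(f) + d(g))) == 0          # additivity
assert sp.simplify(d(f * g) - (d(f) * g + f * d(g))) == 0  # Leibniz rule
```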
The constants of a differential ring are the elements $r$ such that $\partial r = 0$ for every derivation $\partial$. The constants of a differential ring form a subring, and the constants of a differential field form a subfield.[8] This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant.

The following identities, in which $\delta$ is a derivation of a differential ring $R$, follow from the defining axioms:[9] $\delta(u^n) = n\, u^{n-1}\, \delta(u)$ for every positive integer $n$; if $u$ is invertible, $\delta(u^{-1}) = -u^{-2}\, \delta(u)$; and, more generally, the quotient rule $\delta(u/v) = (\delta(u)\, v - u\, \delta(v))/v^2$ holds whenever $v$ is invertible.

A derivation operator or higher-order derivation[citation needed] is the composition of several derivations. As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as $\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}$, where $\delta_1, \ldots, \delta_n$ are the derivations under consideration, $e_1, \ldots, e_n$ are nonnegative integers, and the exponent of a derivation denotes the number of times this derivation is composed in the operator. The sum $o = e_1 + \cdots + e_n$ is called the order of derivation. If $o = 1$, the derivation operator is one of the original derivations. If $o = 0$, one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration.

A derivative of an element $x$ of a differential ring is the application of a derivation operator to $x$, that is, with the above notation, $\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}(x)$. A proper derivative is a derivative of positive order.[7]

A differential ideal $I$ of a differential ring $R$ is an ideal of the ring $R$ that is closed (stable) under the derivations of the ring; that is, $\partial x \in I$ for every derivation $\partial$ and every $x \in I$. A differential ideal is said to be proper if it is not the whole ring. For avoiding confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal.

The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical.[10] A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal.

A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra.
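Returning to derivation operators: the free commutative monoid structure is easy to make concrete. Representing $\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}$ by its exponent tuple, composition is componentwise addition and the order is the sum of the exponents (a sketch; the encoding is ours).

```python
# Derivation operators as exponent tuples over two derivations d1, d2.
def compose(op1, op2):
    # Composition adds exponents, because the derivations commute pairwise.
    return tuple(a + b for a, b in zip(op1, op2))

def order(op):
    return sum(op)

d1, d2 = (1, 0), (0, 1)
op = compose(compose(d1, d1), d2)  # d1^2 composed with d2
print(op, order(op))               # (2, 1) 3
```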
The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal.[11] It follows that, given a subset $S$ of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it.[11][12]

The algebraic ideal generated by $S$ is the set of finite linear combinations of elements of $S$, and is commonly denoted as $(S)$ or $\langle S \rangle$.

The differential ideal generated by $S$ is the set of the finite linear combinations of elements of $S$ and of the derivatives of any order of these elements; it is commonly denoted as $[S]$. When $S$ is finite, $[S]$ is generally not finitely generated as an algebraic ideal.

The radical differential ideal generated by $S$ is commonly denoted as $\{S\}$. There is no known way to characterize its elements in a similar way as for the two other cases.

A differential polynomial over a differential field $K$ is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to $K$, and the indeterminates are symbols for the unknown functions. So, let $K$ be a differential field, which is typically (but not necessarily) a field of rational fractions $K(X) = K(x_1, \ldots, x_n)$ (fractions of multivariate polynomials), equipped with derivations $\partial_i$ such that $\partial_i x_i = 1$ and $\partial_i x_j = 0$ if $i \neq j$ (the usual partial derivatives).

For defining the ring $K\{Y\} = K\{y_1, \ldots, y_n\}$ of differential polynomials over $K$ with indeterminates in $Y = \{y_1, \ldots, y_n\}$ and derivations $\partial_1, \ldots, \partial_n$, one introduces an infinity of new indeterminates of the form $\Delta y_i$, where $\Delta$ is any derivation operator of positive order. With this notation, $K\{Y\}$ is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, if $n = 1$, one has $K\{y\} = K[y, \delta y, \delta^2 y, \ldots]$.

Even when $n = 1$, a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. However, two facts allow such a generalization. Firstly, a finite number of differential polynomials involves together a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain. The second fact is that, if the field $K$ contains the field of rational numbers, the rings of differential polynomials over $K$ satisfy the ascending chain condition on radical differential ideals.
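As a worked illustration of the remark above that $[S]$ need not be finitely generated as an algebraic ideal: in $K\{y\}$, the differential ideal $[y^2]$ contains every derivative of $y^2$, and each differentiation introduces a new indeterminate $\delta^k y$:

```latex
\delta(y^2)   = 2\,y\,\delta y, \qquad
\delta^2(y^2) = 2\,(\delta y)^2 + 2\,y\,\delta^2 y, \qquad
\delta^3(y^2) = 6\,\delta y\,\delta^2 y + 2\,y\,\delta^3 y, \quad \ldots
```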
The ascending chain condition above is Ritt's theorem; it is implied by its generalization, sometimes called the Ritt-Raudenbush basis theorem, which asserts that if $R$ is a Ritt algebra (that is, a differential ring containing the field of rational numbers)[13] satisfying the ascending chain condition on radical differential ideals, then the ring of differential polynomials $R\{y\}$ satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively).[14][15]

This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal $I$ is finitely generated as a radical differential ideal; this means that there exists a finite set $S$ of differential polynomials such that $I$ is the smallest radical differential ideal containing $S$.[16] This allows representing a radical differential ideal by such a finite set of generators and computing with these ideals. However, some usual computations of the algebraic case cannot be extended; in particular, no algorithm is known for testing membership of an element in a radical differential ideal, or the equality of two radical differential ideals.

Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called the essential prime components of the ideal.[17]

Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations. Categories of elimination methods include characteristic set methods, differential Gröbner bases methods and resultant-based methods.[1][18][19][20][21][22][23]

Common operations used in elimination algorithms include (1) ranking derivatives, polynomials, and polynomial sets, (2) identifying a polynomial's leading derivative, initial and separant, (3) polynomial reduction, and (4) creating special polynomial sets.

The ranking of derivatives is a total order and an admissible order.[24][25][26] Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate and the derivative's multi-index, and may identify the derivative's order.
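A minimal sketch of such a ranking (the tuple encoding is ours): encode the derivative $\delta_1^{e_1} \circ \cdots \circ \delta_n^{e_n}\, y_i$ as the tuple $(i, e_1, \ldots, e_n)$; Python's built-in tuple comparison is exactly lexicographic, so sorting yields a lex ranking of derivatives.

```python
# Rank derivatives by lexicographic order on their integer tuples.
derivs = [
    (1, 0, 2),  # d2^2 y1
    (1, 1, 0),  # d1   y1
    (2, 0, 1),  # d2   y2
    (1, 0, 0),  # y1
]
print(sorted(derivs))
# [(1, 0, 0), (1, 0, 2), (1, 1, 0), (2, 0, 1)]
```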
Types of ranking include orderly rankings, in which derivatives of higher order always rank higher, and elimination rankings, in which every derivative of one block of differential indeterminates ranks higher than every derivative of another block.[27] For example, the integer tuple may identify the differential indeterminate and the derivative's multi-index, with lexicographic monomial order $\geq_{\text{lex}}$ then determining the derivative's rank.[28]

A polynomial $p$, regarded as a polynomial in its leading derivative $u_p$, can be written in the standard polynomial form
$$p=a_d\cdot u_p^d+a_{d-1}\cdot u_p^{d-1}+\cdots+a_1\cdot u_p+a_0 .$$[24][28]
The initial of $p$ is $I_p=a_d$, and the separant is $S_p=\partial p/\partial u_p$. The separant set is $S_A=\{S_p\mid p\in A\}$, the initial set is $I_A=\{I_p\mid p\in A\}$, and the combined set is $H_A=S_A\cup I_A$.[29]

A polynomial $q$ is partially reduced (in partial normal form) with respect to a polynomial $p$ if both are non-ground field elements, $p,q\in\mathcal{K}\{Y\}\setminus\mathcal{K}$, and $q$ contains no proper derivative of $u_p$.[30][31][29] A polynomial $q$ that is partially reduced with respect to $p$ is reduced (in normal form) with respect to $p$ if, in addition, the degree of $u_p$ in $q$ is less than the degree of $u_p$ in $p$.[30][31][29]

An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular, meaning each polynomial element has a distinct leading derivative.[32][30]

Ritt's reduction algorithm identifies integers $i_{A_k},s_{A_k}$ and transforms a differential polynomial $f$ using pseudodivision to a lower or equally ranked remainder polynomial $f_{\text{red}}$ that is reduced with respect to the autoreduced polynomial set $A$. The algorithm's first step partially reduces the input polynomial and the algorithm's second step fully reduces the polynomial. The formula for reduction is:[30]
$$\Bigl(\prod_{k}I_{A_k}^{\,i_{A_k}}\,S_{A_k}^{\,s_{A_k}}\Bigr)\cdot f\ \equiv\ f_{\text{red}}\pmod{[A]} .$$

Set $A$ is a differential chain if the rank of the leading derivatives is $u_{A_1}<\cdots<u_{A_m}$ and, for all $i$, $A_i$ is reduced with respect to $A_{i+1}$.[33]

Autoreduced sets $A$ and $B$ each contain ranked polynomial elements. A comparison procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets.[34]

A characteristic set $C$ is the lowest ranked autoreduced subset among all the ideal's autoreduced subsets whose polynomial separants are non-members of the ideal $\mathcal{I}$.[35]

The delta polynomial applies to a polynomial pair $p,q$ whose leaders share a common derivative, $\theta_\alpha u_p=\theta_\beta u_q$.
The least common derivative operator for the polynomial pair's leading derivatives is $\theta_{pq}$, and the delta polynomial is:[36][37]
$$\Delta(p,q)=S_q\cdot\frac{\theta_{pq}}{\theta_\alpha}(p)\;-\;S_p\cdot\frac{\theta_{pq}}{\theta_\beta}(q),$$
where $\theta_{pq}/\theta_\alpha$ denotes the derivation operator carrying $\theta_\alpha u_p$ to $\theta_{pq}$, and similarly for $\theta_{pq}/\theta_\beta$.

A coherent set is a polynomial set that reduces its delta polynomial pairs to zero.[36][37]

A regular system $\Omega$ contains an autoreduced and coherent set of differential equations $A$ and an inequation set $H_\Omega\supseteq H_A$, with the set $H_\Omega$ reduced with respect to the equation set.[37]

Regular differential ideal $\mathcal{I}_{\text{dif}}$ and regular algebraic ideal $\mathcal{I}_{\text{alg}}$ are saturation ideals that arise from a regular system.[37] Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals.[38]

The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal.[39]

The membership problem is to determine if a differential polynomial $p$ is a member of an ideal generated from a set of differential polynomials $S$. The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases.[40]

The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations.[41]

Example 1: $(\operatorname{Mer}(f(y)),\partial_y)$ is the differential meromorphic function field with a single standard derivation.

Example 2: $(\mathbb{C}\{y\},\,p(y)\cdot\partial_y)$ is a differential field with a linear differential operator as the derivation, for any polynomial $p(y)$.

Define $E^a(p(y))=p(y+a)$ as the shift operator $E^a$ for polynomial $p(y)$. A shift-invariant operator $T$ commutes with the shift operator: $E^a\circ T=T\circ E^a$. The Pincherle derivative, a derivation of shift-invariant operator $T$, is $T'=T\circ y-y\circ T$.[42]

The ring of integers is $(\mathbb{Z},\delta)$, and every integer is a constant. The field of rational numbers is $(\mathbb{Q},\delta)$, and every rational number is a constant. Constants form the subring of constants $(\mathbb{C},\partial_y)\subset(\mathbb{C}\{y\},\partial_y)$.[43]

The element $\exp(y)$ simply generates the differential ideal $[\exp(y)]$ in the differential ring $(\mathbb{C}\{y,\exp(y)\},\partial_y)$.[44]

Any ring with identity is a $\mathbb{Z}$-algebra.[45] Thus a differential ring is a $\mathbb{Z}$-algebra. If a ring $\mathcal{R}$ is a subring of the center of a unital ring $\mathcal{M}$, then $\mathcal{M}$ is an $\mathcal{R}$-algebra.[45] Thus, a differential ring is an algebra over its differential subring.
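As a quick check of the Pincherle derivative defined above (a computation of ours, not from the cited sources), apply it to the shift operator itself:

$$(E^a)'(p)(y)=E^a\bigl(y\,p(y)\bigr)-y\,E^a\bigl(p(y)\bigr)=(y+a)\,p(y+a)-y\,p(y+a)=a\,p(y+a),$$
% hence (E^a)' = a E^a: the Pincherle derivative of the shift operator
% is a scalar multiple of the shift operator itself.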
A differential ring is thus naturally an algebra over its differential subring; this is the natural structure of an algebra over its subring.[30]

The ring $(\mathbb{Q}\{y,z\},\partial_y)$ contains irreducible differential polynomials of both kinds: normal (squarefree) polynomials $p$ and special polynomials $q$ (ideal generators). The ring $(\mathbb{Q}\{y_1,y_2\},\delta)$ has derivatives $\delta(y_1)=y_1'$ and $\delta(y_2)=y_2'$; under a chosen ranking, each differential polynomial of this ring has a leading derivative and an initial.

Symbolic integration uses algorithms involving polynomials and their derivatives, such as Hermite reduction, the Czichowski algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization, and splitting factorization into special and normal polynomials.[46]

Differential algebra can determine if a set of differential polynomial equations has a solution. A total order ranking may identify algebraic constraints. An elimination ranking may determine if one or a selected group of independent variables can express the differential equations. Using triangular decomposition and elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations.[47]

In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions.[48] Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions.[49][50] Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations.[51] Other applications include control theory, model theory, and algebraic geometry.[52][16][53] Differential algebra also applies to differential-difference equations.[54]

A $\mathbb{Z}$-graded vector space $V_\bullet$ is a collection of vector spaces $V_m$ with integer degree $|v|=m$ for $v\in V_m$.
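Returning to the symbolic integration algorithms mentioned above: Hermite reduction rewrites the integral of a rational function as a rational part plus an integral whose integrand has a squarefree denominator. A standard one-line example (our computation, not from the cited sources):

$$\int\frac{dx}{(x^2+1)^2}=\frac{x}{2(x^2+1)}+\frac{1}{2}\int\frac{dx}{x^2+1}.$$
% Check: d/dx [ x/(2(x^2+1)) ] = (1-x^2)/(2(x^2+1)^2), and adding
% (1/2)/(x^2+1) = (x^2+1)/(2(x^2+1)^2) recovers the integrand 1/(x^2+1)^2;
% the remaining integral has the squarefree denominator x^2+1.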
Such a graded vector space can be represented as a direct sum:[55]
$$V_\bullet=\bigoplus_{m\in\mathbb{Z}}V_m .$$

A differential graded vector space, or chain complex, is a graded vector space $V_\bullet$ with a differential map or boundary map $d_m:V_m\to V_{m-1}$ satisfying $d_m\circ d_{m+1}=0$.[56] A cochain complex is a graded vector space $V^\bullet$ with a differential map or coboundary map $d_m:V^m\to V^{m+1}$ satisfying $d_{m+1}\circ d_m=0$.[56]

A differential graded algebra is a graded algebra $A$ with a linear derivation $d:A\to A$ satisfying $d\circ d=0$ that follows the graded Leibniz product rule
$$d(a\cdot b)=d(a)\cdot b+(-1)^{|a|}\,a\cdot d(b).$$[57]

A Lie algebra is a finite-dimensional real or complex vector space $\mathfrak{g}$ with a bilinear bracket operator $[\,\cdot,\cdot\,]:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}$ with skew symmetry and the Jacobi identity property:[58]
$$[X,Y]=-[Y,X],\qquad [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0$$
for all $X,Y,Z\in\mathfrak{g}$.

The adjoint operator, $\operatorname{ad}_X(Y)=[Y,X]$, is a derivation of the bracket because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation. This is the inner derivation determined by $X$.[59][60]

The universal enveloping algebra $U(\mathfrak{g})$ of Lie algebra $\mathfrak{g}$ is a maximal associative algebra with identity, generated by Lie algebra elements $\mathfrak{g}$ and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator is a derivation following the Leibniz product rule
$$\operatorname{ad}_X(Y\cdot Z)=\operatorname{ad}_X(Y)\cdot Z+Y\cdot\operatorname{ad}_X(Z)$$
for all $X,Y,Z\in U(\mathfrak{g})$.[61]

The Weyl algebra is an algebra $A_n(K)$ over a ring $K[p_1,q_1,\ldots,p_n,q_n]$ with a specific noncommutative product:[62]
$$p_i\cdot q_i=q_i\cdot p_i+1 .$$
All other indeterminate products are commutative for $i,j\in\{1,\ldots,n\}$:
$$p_i\cdot q_j=q_j\cdot p_i\ (i\neq j),\qquad p_i\cdot p_j=p_j\cdot p_i,\qquad q_i\cdot q_j=q_j\cdot q_i .$$

A Weyl algebra can represent the derivations for a commutative ring's polynomials $f\in K[y_1,\ldots,y_n]$. The Weyl algebra's elements are endomorphisms, the elements $p_1,\ldots,p_n$ function as standard derivations, and map compositions generate linear differential operators. D-module is a related approach for understanding differential operators. The endomorphisms are:[62]
$$q_i(f)=y_i\cdot f,\qquad p_i(f)=\frac{\partial f}{\partial y_i}.$$

An associative, possibly noncommutative ring $A$ has derivation $d:A\to A$.[63] The pseudo-differential operator ring $A((\partial^{-1}))$ is a left $A$-module containing ring elements $L$ of the form[63][64][65]
$$L=\sum_{i\leq n}a_i\,\partial^{\,i},\qquad a_i\in A,\ n\in\mathbb{Z}.$$
The derivative operator is $d(a)=\partial\circ a-a\circ\partial$.[63] The binomial coefficient is $\binom{i}{k}$, defined also for negative upper index $i$; it appears in the multiplication rule below.
Pseudo-differential operator multiplication is determined by the rule:[63]
$$\partial^{\,i}\circ a=\sum_{k\geq 0}\binom{i}{k}\,d^{\,k}(a)\,\partial^{\,i-k}.$$

The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal, when characteristic sets identify both ideals.[66]

The Kolchin catenary conjecture states that, given a $d>0$ dimensional irreducible differential algebraic variety $V$ and an arbitrary point $p\in V$, a long gap chain of irreducible differential algebraic subvarieties occurs from $p$ to $V$.[67]

The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible components. The polynomials' orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound.[68]
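Returning to the pseudo-differential multiplication rule above, two instructive special cases (our computations, consistent with the stated rule):

$$\partial\circ a=a\,\partial+d(a),$$
% i = 1: binom(1,0) = binom(1,1) = 1 and higher terms vanish; this is the
% usual commutation rule between a derivation and multiplication by a.
$$\partial^{-1}\circ a=a\,\partial^{-1}-d(a)\,\partial^{-2}+d^{2}(a)\,\partial^{-3}-\cdots,$$
% i = -1: binom(-1,k) = (-1)^k, giving the familiar infinite expansion
% for the formal integration operator, which is why the ring is written
% A((\partial^{-1})) with formal Laurent series in \partial^{-1}.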
https://en.wikipedia.org/wiki/Differential_algebra
The International System of Units, internationally known by the abbreviation SI (from French Système international d'unités), is the modern form of the metric system and the world's most widely used system of measurement. It is the only system of measurement with official status in nearly every country in the world, employed in science, technology, industry, and everyday commerce. The SI system is coordinated by the International Bureau of Weights and Measures, which is abbreviated BIPM from French: Bureau international des poids et mesures.

The SI comprises a coherent system of units of measurement starting with seven base units, which are the second (symbol s, the unit of time), metre (m, length), kilogram (kg, mass), ampere (A, electric current), kelvin (K, thermodynamic temperature), mole (mol, amount of substance), and candela (cd, luminous intensity). The system can accommodate coherent units for an unlimited number of additional quantities. These are called coherent derived units, which can always be represented as products of powers of the base units. Twenty-two coherent derived units have been provided with special names and symbols.

The seven base units and the 22 coherent derived units with special names and symbols may be used in combination to express other coherent derived units. Since the sizes of coherent units will be convenient for only some applications and not for others, the SI provides twenty-four prefixes which, when added to the name and symbol of a coherent unit, produce twenty-four additional (non-coherent) SI units for the same quantity; these non-coherent units are always decimal (i.e. power-of-ten) multiples and sub-multiples of the coherent unit.

The current way of defining the SI is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology.

The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948, and is based on the metre–kilogram–second system of units (MKS) combined with ideas from the development of the CGS system.

The International System of Units consists of a set of seven defining constants with seven corresponding base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[1]: 125

The seven defining constants are the most fundamental feature of the definition of the system of units.[1]: 125 The magnitudes of all SI units are defined by declaring that seven constants have certain exact numerical values when expressed in terms of their SI units.
These defining constants are the speed of light in vacuum c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. The values assigned to these constants were fixed to ensure continuity with previous definitions of the base units.[1]: 128

The SI selects seven units to serve as base units, corresponding to seven base physical quantities. They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature); mole (mol, amount of substance); and candela (cd, luminous intensity).[1] The base units are defined in terms of the defining constants. For example, the kilogram is defined by taking the Planck constant h to be 6.62607015×10⁻³⁴ J⋅s, giving the expression in terms of the defining constants[1]: 131

1 kg = (h / 6.62607015×10⁻³⁴) m⁻² s.

All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units. The choice of which and even how many quantities to use as base quantities is not fundamental or even unique – it is a matter of convention.[1]: 126

The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit. For example, the coherent derived SI unit of velocity is the metre per second, with the symbol m/s.[1]: 139 The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units). A useful property of a coherent system is that when the numerical values of physical quantities are expressed in terms of the units of the system, then the equations between the numerical values have exactly the same form, including numerical factors, as the corresponding equations between the physical quantities.[3]: 6

Twenty-two coherent derived units have been provided with special names and symbols. The radian and steradian have no base units but are treated as derived units for historical reasons.[1]: 137

The derived units in the SI are formed by powers, products, or quotients of the base units and are unlimited in number.[1]: 138[4]: 14, 16 Derived units apply to some derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[b] Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI, such as acceleration, which has the SI unit m/s².[1]: 139

A combination of base and derived units may be used to express a derived unit.
For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa) – and the pascal can be defined as one newton per square metre (N/m²).[5]

Like all metric systems, the SI uses metric prefixes to systematically construct, for the same physical quantity, a set of units that are decimal multiples of each other over a wide range. For example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.

The SI provides twenty-four metric prefixes that signify decimal powers ranging from 10⁻³⁰ to 10³⁰, the most recent being adopted in 2022.[1]: 143–144[6][7][8] Most prefixes correspond to integer powers of 1000; the only ones that do not are those for 10, 1/10, 100, and 1/100. The conversion between different SI units for one and the same physical quantity is always through a power of ten. This is why the SI (and metric systems more generally) are called decimal systems of measurement units.[9]

The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power. It can also be combined with other unit symbols to form compound unit symbols.[1]: 143 For example, g/cm³ is an SI unit of density, where cm³ is to be interpreted as (cm)³.

Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[10]: 122[11]: 14 The BIPM specifies 24 prefixes for the International System of Units (SI).

The base units and the derived units formed as the product of powers of the base units with a numerical factor of one form a coherent system of units. Every physical quantity has exactly one coherent SI unit. For example, 1 m/s = (1 m) / (1 s) is the coherent derived unit for velocity.[1]: 139 With the exception of the kilogram (for which the prefix kilo- is required for a coherent unit), when prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[1]: 137 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[1]: 138

The kilogram is the only coherent SI unit whose name and symbol include a prefix. For historical reasons, the names and symbols for multiples and sub-multiples of the unit of mass are formed as if the gram were the base unit. Prefix names and symbols are attached to the unit name gram and the unit symbol g respectively. For example, 10⁻⁶ kg is written milligram and mg, not microkilogram and μkg.[1]: 144

Several different quantities may share the same coherent SI unit.
For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This illustrates why it is important not to use the unit alone to specify the quantity. As the SI Brochure states,[1]: 140 "this applies not only to technical texts, but also, for example, to measuring instruments (i.e. the instrument read-out needs to indicate both the unit and the quantity measured)".

Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is a base unit when it is a unit of electric current, but a coherent derived unit when it is a unit of magnetomotive force.[1]: 140

According to the SI Brochure,[1]: 148 unit names should be treated as common nouns of the context language. This means that they should be typeset in the same character set as other common nouns (e.g. Latin alphabet in English, Cyrillic script in Russian, etc.), following the usual grammatical and orthographical rules of the context language. For example, in English and French, even when the unit is named after a person and its symbol begins with a capital letter, the unit name in running text should start with a lowercase letter (e.g., newton, hertz, pascal) and is capitalised only at the beginning of a sentence and in headings and publication titles. As a nontrivial application of this rule, the SI Brochure notes[1]: 148 that the name of the unit with the symbol °C is correctly spelled as 'degree Celsius': the first letter of the name of the unit, 'd', is in lowercase, while the modifier 'Celsius' is capitalised because it is a proper name.[1]: 148

The English spelling and even names for certain SI units, prefixes and non-SI units depend on the variety of English used. US English uses the spelling deka-, meter, and liter, and International English uses deca-, metre, and litre. The name of the unit whose symbol is t and which is defined by 1 t = 10³ kg is 'metric ton' in US English and 'tonne' in International English.[4]: iii

Symbols of SI units are intended to be unique and universal, independent of the context language.[10]: 130–135 The SI Brochure has specific rules for writing them.[10]: 130–135 In addition, the SI Brochure provides style conventions covering, among other aspects of displaying quantities and units, the quantity symbols, formatting of numbers and the decimal marker, expressing measurement uncertainty, multiplication and division of quantity symbols, and the use of pure numbers and various angles.[1]: 147

In the United States, the guideline produced by the National Institute of Standards and Technology (NIST)[11]: 37 clarifies language-specific details for American English that were left unclear by the SI Brochure, but is otherwise identical to the SI Brochure.[14] For example, since 1979, the litre may exceptionally be written using either an uppercase "L" or a lowercase "l", a decision prompted by the similarity of the lowercase letter "l" to the numeral "1", especially with certain typefaces or English-style handwriting. NIST recommends that within the United States, "L" be used rather than "l".[11]

Metrologists carefully distinguish between the definition of a unit and its realisation. The SI units are defined by declaring that seven defining constants[1]: 125–129 have certain exact numerical values when expressed in terms of their SI units.
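To see how fixing the constants defines the units, here is the standard chain of substitutions, worked out by us but using only the exact values quoted in this article:

$$1\,\text{s}=\frac{9\,192\,631\,770}{\Delta\nu_{\text{Cs}}},\qquad
1\,\text{m}=\frac{c}{299\,792\,458}\cdot 1\,\text{s},\qquad
1\,\text{kg}=\frac{h}{6.62607015\times 10^{-34}}\,\text{m}^{-2}\,\text{s}.$$
% Fixing the caesium frequency defines the second; fixing c = 299 792 458 m/s
% then defines the metre from the second; fixing h = 6.62607015e-34 J s
% (where J s = kg m^2 s^-1) then defines the kilogram from the metre and second.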
The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit.[1]: 135

For each base unit the BIPM publishes a mise en pratique (French for 'putting into practice; implementation'),[16] describing the current best practical realisations of the unit.[17] The separation of the defining constants from the definitions of units means that improved measurements can be developed leading to changes in the mises en pratique as science and technology develop, without having to revise the definitions.

The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit".[10]: 111 Various consultative committees of the CIPM decided in 2016 that more than one mise en pratique would be developed for determining the value of each unit.[18]

The International System of Units, or SI,[1]: 123 is a decimal and metric system of units established in 1960 and periodically updated since then. The SI has an official status in most countries, including the United States, Canada, and the United Kingdom, although these three countries are among the handful of nations that, to various degrees, also continue to use their customary systems. Nevertheless, with this nearly universal level of acceptance, the SI "has been used around the world as the preferred system of units, the basic language for science, technology, industry, and trade."[1]: 123, 126

The only other types of measurement system that still have widespread use across the world are the imperial and US customary measurement systems. The international yard and pound are defined in terms of the SI.[22]

The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ). The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[23] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[24] and has largely been revised in 2019–2020.[25]

The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[c]),[26] the International Committee for Weights and Measures (CIPM[d]), and the International Bureau of Weights and Measures (BIPM[e]). All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI),[1] which is published in French and English by the BIPM and periodically updated. The writing and maintenance of the brochure is carried out by one of the committees of the CIPM. The definitions of the terms "quantity", "unit", "dimension", etc. that are used in the SI Brochure are those given in the international vocabulary of metrology.[27] The brochure leaves some scope for local variations, particularly regarding unit names and terms in different languages.
For example, the United States' National Institute of Standards and Technology (NIST) has produced a version of the CGPM document (NIST SP 330), which clarifies usage for English-language publications that use American English.[4]

The concept of a system of units emerged a hundred years before the SI. In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin), and others working under the auspices of the British Association for the Advancement of Science, building on previous work of Carl Gauss, developed the centimetre–gram–second system of units or cgs system in 1874. The system formalised the concept of a collection of related units called a coherent system of units. In a coherent system, base units combine to define derived units without extra factors.[4]: 2 For example, using metre per second is coherent in a system that uses metre for length and second for time, but kilometre per hour is not coherent. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[29]

A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called Treaty of the Metre, by 17 nations.[f][30]: 353–354 The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention,[29] brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.[31]: 37[32] Initially the convention only covered standards for the metre and the kilogram. This became the foundation of the MKS system of units.[4]: 2

At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system based on units defined by the Metre Convention[33] for electrical distribution systems. Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties – the dimensions depended on whether one used the ESU or EMU systems.[34] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[35]

Electric current, with named unit 'ampere', was chosen as the base unit, and the other electrical quantities derived from it according to the laws of physics. When combined with the MKS, the new system, known as MKSA, was approved in 1946.[4]

In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[36] This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units: the metre, kilogram, second, ampere, degree Kelvin, and candela.
The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system, when the basis of the rules as they are now known was laid down.[37] These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[10]: 104, 130

The 10th CGPM in 1954 resolved to create an international system of units[31]: 41 and in 1960, the 11th CGPM adopted the International System of Units, abbreviated SI from the French name Le Système international d'unités, which included a specification for units of measurement.[10]: 110

The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[10]: 95 In 1971 the mole became the seventh base unit of the SI.[4]: 2

After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[38] During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.[39] By avoiding the use of an artefact to define units, all issues with the loss, damage, and change of the artefact are avoided.[1]: 125

A proposal was made to redefine the SI by fixing the exact numerical values of a set of defining constants.[40] The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[41] The change was adopted by the European Union through Directive (EU) 2019/1258.[42]

Prior to its redefinition in 2019, the SI was defined through the seven base units from which the derived units were constructed as products of powers of the base units. After the redefinition, the SI is defined by fixing the numerical values of seven defining constants. This has the effect that the distinction between the base units and derived units is, in principle, not needed, since all units, base as well as derived, may be constructed directly from the defining constants. Nevertheless, the distinction is retained because "it is useful and historically well established", and also because the ISO/IEC 80000 series of standards, which define the International System of Quantities (ISQ), specifies base and derived quantities that necessarily have the corresponding SI units.[1]: 129

Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI,[10] including the hour, minute, degree of angle, litre, and decibel.
This is a list of units that are not defined as part of the International System of Units (SI) but are otherwise mentioned in the SI Brochure,[43] listed as being accepted for use alongside SI units, or for explanatory purposes. The SI prefixes can be used with several of these units, but not, for example, with the non-SI units of time. Others, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86 400 s); the degree (for measuring plane angles, 1° = (π/180) rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10⁻¹⁹ J).[43]

Although the term metric system is often used as an informal alternative name for the International System of Units,[46] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup and the darcy that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.

Sometimes, SI unit name variations are introduced, mixing information about the corresponding physical quantity or the conditions of its measurement; however, this practice is unacceptable with the SI. "Unacceptability of mixing information with units: When one gives the value of a quantity, any information concerning the quantity or its conditions of measurement must be presented in such a way as not to be associated with the unit."[10] Instances include: "watt-peak" and "watt RMS"; "geopotential metre" and "vertical metre"; "standard cubic metre"; "atomic second", "ephemeris second", and "sidereal second".

[1] This article incorporates text from this source, which is available under the CC BY 3.0 license.
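Because the conversion factors quoted above are exact fixed numbers, converting these accepted non-SI units to coherent SI units is simple arithmetic. The following sketch is our own illustration (the class and method names are invented, not part of any standard library):

// Illustrative only: converts a few non-SI units accepted for use with
// the SI into coherent SI units, using the exact factors quoted above.
public final class NonSiConversions {
    // Exact conversion factors from the SI Brochure.
    static final double SECONDS_PER_MINUTE = 60.0;
    static final double SECONDS_PER_HOUR   = 3600.0;
    static final double SECONDS_PER_DAY    = 86_400.0;
    static final double JOULES_PER_EV      = 1.602176634e-19;

    static double minutesToSeconds(double min)     { return min * SECONDS_PER_MINUTE; }
    static double degreesToRadians(double deg)     { return deg * Math.PI / 180.0; }
    static double electronvoltsToJoules(double ev) { return ev * JOULES_PER_EV; }

    public static void main(String[] args) {
        System.out.println(minutesToSeconds(90));        // 5400.0 s
        System.out.println(degreesToRadians(180));       // 3.141592653589793 rad
        System.out.println(electronvoltsToJoules(13.6)); // ~2.179e-18 J
    }
}

Note that every conversion is a single exact multiplication, in contrast with SI-internal conversions, which are always powers of ten.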
https://en.wikipedia.org/wiki/International_System_of_Units
Java Card is a software technology that allows Java-based applications (applets) to be run securely on smart cards and more generally on similar secure small memory footprint devices[1] which are called "secure elements" (SE). Today, a secure element is not limited to smart cards and other removable cryptographic token form factors; embedded SEs soldered onto a device board and new security designs embedded into general purpose chips are also widely used. Java Card addresses this hardware fragmentation and these specificities while retaining the code portability brought forward by Java.

Java Card is the tiniest of Java platforms targeted for embedded devices. Java Card gives the user the ability to program the devices and make them application specific. It is widely used in different markets: wireless telecommunications within SIM cards and embedded SIM, payment within banking cards[2] and NFC mobile payment, and for identity cards, healthcare cards, and passports. Several IoT products like gateways are also using Java Card based products to secure communications with a cloud service, for instance.

The first Java Card was introduced in 1996 by Schlumberger's card division, which later merged with Gemplus to form Gemalto. Java Card products are based on the specifications by Sun Microsystems (later a subsidiary of Oracle Corporation). Many Java Card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).

The main design goals of the Java Card technology are portability, security and backward compatibility.[3]

Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine), and a well-defined runtime library, which largely abstracts the applet from differences between smart cards. Portability remains mitigated by issues of memory size, performance, and runtime support (e.g. for communication protocols or cryptographic algorithms). Moreover, vendors often expose proprietary APIs specific to their ecosystem, further limiting portability for applets that rely on such calls. To address these limitations, Vasilios Mavroudis and Petr Svenda introduced JCMathLib, an open-source cryptographic wrapper library for Java Card, enabling low-level cryptographic computations not supported by the standard API.[4][5][6]

Java Card technology was originally developed for the purpose of securing sensitive information stored on smart cards. Security is determined by various aspects of this technology, including data encapsulation, the applet firewall, and cryptographic support.

At the language level, Java Card is a precise subset of Java: all language constructs of Java Card exist in Java and behave identically. This goes to the point that as part of a standard build cycle, a Java Card program is compiled into a Java class file by a Java compiler; the class file is post-processed by tools specific to the Java Card platform. However, many Java language features are not supported by Java Card (in particular types char, double, float and long; the transient qualifier; enums; arrays of more than one dimension; finalization; object cloning; threads). Further, some common features of Java are not provided at runtime by many actual smart cards (in particular type int, which is the default type of a Java expression; and garbage collection of objects).
Java Card bytecode run by the Java Card Virtual Machine is a functional subset of Java 2 bytecode run by a standard Java Virtual Machine, but with a different encoding to optimize for size. A Java Card applet thus typically uses less bytecode than the hypothetical Java applet obtained by compiling the same Java source code. This conserves memory, a necessity in resource constrained devices like smart cards. As a design tradeoff, there is no support for some Java language features (as mentioned above), and there are size limitations. Techniques exist for overcoming the size limitations, such as dividing the application's code into packages below the 64 KiB limit.

The standard Java Card class library and runtime support differ a lot from those in Java, and the common subset is minimal. For example, the Java Security Manager class is not supported in Java Card, where security policies are implemented by the Java Card Virtual Machine; and transients (non-persistent, fast RAM variables that can be class members) are supported via a Java Card class library, while they have native language support in Java. The Java Card runtime and virtual machine also support features that are specific to the Java Card platform, such as persistent objects and atomic transactions.

Coding techniques used in a practical Java Card program differ significantly from those used in a Java program. Still, the fact that Java Card uses a precise subset of the Java language speeds up the learning curve, and enables using a Java environment to develop and debug a Java Card program (caveat: even if debugging occurs with Java bytecode, make sure that the class file fits the limitations of the Java Card language by converting it to Java Card bytecode; and test on a real Java Card smart card early on to get an idea of the performance). Further, one can run and debug both the Java Card code for the application to be embedded in a smart card and a Java application that will be in the host using the smart card, all working jointly in the same environment.

Oracle has released several Java Card platform specifications and is providing SDK tools for application development. Usually smart card vendors implement just a subset of the algorithms specified in the Java Card platform target, and the only way to discover what subset of the specification is implemented is to test the card.[7]

Version 3.0 of the Java Card specification (draft released in March 2008) is separated into two editions: the Classic Edition and the Connected Edition.[10]

Java Card 3.1 was released in January 2019.
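To give a feel for the coding style described above, here is a minimal applet sketch of ours (the CounterApplet name and the 0x80 instruction byte are arbitrary illustrative choices); it uses only the javacard.framework API and the short/byte types available in the Java Card subset:

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;

// Minimal Java Card applet sketch: returns an incrementing counter.
public class CounterApplet extends Applet {
    private short counter; // fields are persistent by default (EEPROM/flash)

    private CounterApplet() {
        counter = 0;
        register(); // make the applet instance visible to the runtime
    }

    // Called by the runtime when the applet is installed on the card.
    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new CounterApplet();
    }

    // Called for every APDU sent to the applet after selection.
    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // let the SELECT command succeed with no payload
        }
        byte[] buf = apdu.getBuffer();
        if (buf[ISO7816.OFFSET_INS] != (byte) 0x80) { // arbitrary INS choice
            ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
        counter++; // on real cards, updates can be wrapped in a transaction
        Util.setShort(buf, (short) 0, counter);
        apdu.setOutgoingAndSend((short) 0, (short) 2);
    }
}

The install/process split reflects the card lifecycle: install runs once at load time, while process handles every subsequent command APDU; note the deliberate use of short rather than int, since int support is optional on many cards.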
https://en.wikipedia.org/wiki/Java_Card
In computer security, a chain of trust is established by validating each component of hardware and software from the end entity up to the root certificate. It is intended to ensure that only trusted software and hardware can be used while still retaining flexibility.

A chain of trust is designed to allow multiple users to create and use the software on the system, which would be more difficult if all the keys were stored directly in hardware. It starts with hardware that will only boot from software that is digitally signed. The signing authority will only sign boot programs that enforce security, such as only running programs that are themselves signed, or only allowing signed code to have access to certain features of the machine. This process may continue for several layers.

This process results in a chain of trust. The final software can be trusted to have certain properties because if it had been illegally modified its signature would be invalid, and the previous software would not have executed it. The previous software can be trusted, because it, in turn, would not have been loaded if its signature had been invalid. The trustworthiness of each layer is guaranteed by the one before, back to the trust anchor.

It would be possible to have the hardware check the suitability (signature) for every single piece of software. However, this would not produce the flexibility that a "chain" provides. In a chain, any given link can be replaced with a different version to provide different properties, without having to go all the way back to the trust anchor. This use of multiple layers is an application of a general technique to improve scalability and is analogous to the use of multiple certificates in a certificate chain.

In computer security, digital certificates are verified using a chain of trust.[1] The trust anchor for the digital certificate is the root certificate authority (CA). The certificate hierarchy is a structure of certificates that allows individuals to verify the validity of a certificate's issuer. Certificates are issued and signed by certificates that reside higher in the certificate hierarchy, so the validity and trustworthiness of a given certificate is determined by the corresponding validity of the certificate that signed it.

The chain of trust of a certificate chain is an ordered list of certificates, containing an end-user subscriber certificate and intermediate certificates (representing the intermediate CAs), that enables the receiver to verify that the sender and all intermediate certificates are trustworthy. This process is best described in the page Intermediate certificate authority. See also X.509 certificate chains for a description of these concepts in a widely used standard for digital certificates.
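As a concrete illustration of validating such a certificate chain, here is a sketch of ours using the standard Java PKIX API (the file names are placeholders, and revocation checking is deliberately omitted to keep the example short):

import java.io.FileInputStream;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.List;
import java.util.Set;

// Sketch: validate an end-entity certificate up to a trusted root CA.
public final class ChainOfTrustDemo {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Placeholder paths: end-entity, intermediate, and root certificates.
        X509Certificate leaf = load(cf, "leaf.pem");
        X509Certificate intermediate = load(cf, "intermediate.pem");
        X509Certificate root = load(cf, "root.pem");

        // The chain is ordered from the end entity toward, but excluding, the anchor.
        CertPath path = cf.generateCertPath(List.of(leaf, intermediate));

        // The root CA is the trust anchor; every signature below it must verify.
        PKIXParameters params = new PKIXParameters(Set.of(new TrustAnchor(root, null)));
        params.setRevocationEnabled(false); // revocation checking omitted in this sketch

        CertPathValidator.getInstance("PKIX").validate(path, params); // throws if invalid
        System.out.println("Chain validates up to the trust anchor.");
    }

    private static X509Certificate load(CertificateFactory cf, String file) throws Exception {
        try (FileInputStream in = new FileInputStream(file)) {
            return (X509Certificate) cf.generateCertificate(in);
        }
    }
}

The validator checks exactly the property described above: each certificate's signature is verified against its issuer, link by link, terminating at the configured trust anchor.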
https://en.wikipedia.org/wiki/Chain_of_trust
In algebraic geometry, a projective variety is an algebraic variety that is a closed subvariety of a projective space. That is, it is the zero-locus in $\mathbb{P}^n$ of some finite family of homogeneous polynomials that generate a prime ideal, the defining ideal of the variety.

A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial.

If $X$ is a projective variety defined by a homogeneous prime ideal $I$, then the quotient ring
$$k[x_0,\ldots,x_n]/I$$
is called the homogeneous coordinate ring of $X$. Basic invariants of $X$ such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring.

Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on $X$.

A salient feature of projective varieties is the finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties.[1] Hilbert schemes parametrize closed subschemes of $\mathbb{P}^n$ with prescribed Hilbert polynomial. Hilbert schemes, of which Grassmannians are special cases, are also projective schemes in their own right. Geometric invariant theory offers another approach. The classical approaches include the Teichmüller space and Chow varieties.

A particularly rich theory, reaching back to the classics, is available for complex projective varieties, i.e., when the polynomials defining $X$ have complex coefficients. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. For example, the theory of holomorphic vector bundles (more generally coherent analytic sheaves) on $X$ coincides with that of algebraic vector bundles. Chow's theorem says that a subset of projective space is the zero-locus of a family of holomorphic functions if and only if it is the zero-locus of homogeneous polynomials. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory.

Let $k$ be an algebraically closed field. The basis of the definition of projective varieties is projective space $\mathbb{P}^n$, which can be defined in different, but equivalent, ways: for example, as the set of lines through the origin in $k^{n+1}$, or as the quotient of $k^{n+1}\setminus\{0\}$ by the equivalence relation identifying points that differ by a nonzero scalar factor.

A projective variety is, by definition, a closed subvariety of $\mathbb{P}^n$, where closed refers to the Zariski topology.[2] In general, closed subsets of the Zariski topology are defined to be the common zero-locus of a finite collection of homogeneous polynomial functions. Given a polynomial $f\in k[x_0,\dots,x_n]$, the condition that $f$ vanish at a point of $\mathbb{P}^n$ with homogeneous coordinates $(a_0:\cdots:a_n)$ does not make sense for arbitrary polynomials, but only if $f$ is homogeneous, i.e., the degrees of all the monomials (whose sum is $f$) are the same.
In this case, the vanishing of
$$f(\lambda a_0,\ldots,\lambda a_n)=\lambda^{\deg f}\,f(a_0,\ldots,a_n)$$
is independent of the choice of $\lambda\neq 0$. Therefore, projective varieties arise from homogeneous prime ideals $I$ of $k[x_0,\dots,x_n]$, by setting
$$X=\{(a_0:\cdots:a_n)\in\mathbb{P}^n\mid f(a_0,\ldots,a_n)=0\ \text{for all}\ f\in I\}.$$

Moreover, the projective variety $X$ is an algebraic variety, meaning that it is covered by open affine subvarieties and satisfies the separation axiom. Thus, the local study of $X$ (e.g., singularity) reduces to that of an affine variety. The explicit structure is as follows. The projective space $\mathbb{P}^n$ is covered by the standard open affine charts
$$U_i=\{(a_0:\cdots:a_n)\mid a_i\neq 0\},$$
which themselves are affine $n$-spaces with the coordinate ring
$$k\bigl[y_1^{(i)},\ldots,y_n^{(i)}\bigr],\qquad y_j^{(i)}=x_j/x_i .$$
Say $i=0$ for notational simplicity and drop the superscript $(0)$. Then $X\cap U_0$ is a closed subvariety of $U_0\simeq\mathbb{A}^n$ defined by the ideal of $k[y_1,\dots,y_n]$ generated by
$$f(1,y_1,\ldots,y_n)$$
for all $f$ in $I$. Thus, $X$ is an algebraic variety covered by the $(n+1)$ open affine charts $X\cap U_i$.

Note that $X$ is the closure of the affine variety $X\cap U_0$ in $\mathbb{P}^n$. Conversely, starting from some closed (affine) variety $V\subset U_0\simeq\mathbb{A}^n$, the closure of $V$ in $\mathbb{P}^n$ is the projective variety called the projective completion of $V$. If $I\subset k[y_1,\dots,y_n]$ defines $V$, then the defining ideal of this closure is the homogeneous ideal[3] of $k[x_0,\dots,x_n]$ generated by the homogenizations
$$x_0^{\deg f}\,f\!\left(\frac{x_1}{x_0},\ldots,\frac{x_n}{x_0}\right)$$
for all $f$ in $I$. For example, if $V$ is an affine curve given by, say, $y^2=x^3+ax+b$ in the affine plane, then its projective completion in the projective plane is given by $y^2z=x^3+axz^2+bz^3$.

For various applications, it is necessary to consider more general algebro-geometric objects than projective varieties, namely projective schemes. The first step towards projective schemes is to endow projective space with a scheme structure, in a way refining the above description of projective space as an algebraic variety; i.e., $\mathbb{P}^n(k)$ is a scheme which is a union of $(n+1)$ copies of the affine $n$-space $k^n$. More generally,[4] projective space over a ring $A$ is the union of the affine schemes
$$U_i=\operatorname{Spec}A[x_0/x_i,\ldots,x_n/x_i],\qquad 0\leq i\leq n,$$
in such a way the variables match up as expected. The set of closed points of $\mathbb{P}^n_k$, for algebraically closed fields $k$, is then the projective space $\mathbb{P}^n(k)$ in the usual sense.

An equivalent but streamlined construction is given by the Proj construction, which is an analog of the spectrum of a ring, denoted "Spec", which defines an affine scheme.[5] For example, if $A$ is a ring, then
$$\mathbb{P}^n_A=\operatorname{Proj}A[x_0,\ldots,x_n].$$
If $R$ is a quotient of $k[x_0,\ldots,x_n]$ by a homogeneous ideal $I$, then the canonical surjection induces the closed immersion
$$\operatorname{Proj}R\hookrightarrow\mathbb{P}^n_k .$$

Compared to projective varieties, the condition that the ideal $I$ be a prime ideal was dropped. This leads to a much more flexible notion: on the one hand the topological space $X=\operatorname{Proj}R$ may have multiple irreducible components. Moreover, there may be nilpotent functions on $X$.
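A small example of ours illustrating the last point: take the non-radical homogeneous ideal $I=(x_0^2)$ in $k[x_0,x_1]$ and form

$$X=\operatorname{Proj}k[x_0,x_1]/(x_0^2).$$
% As a set, X is the single point (0:1), but on the chart x_1 != 0 the
% coordinate ring is k[t]/(t^2) with t = x_0/x_1, so X carries the
% nilpotent function t: a "double point" in P^1, which no variety can be.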
Closed subschemes of $\mathbb{P}^n_k$ correspond bijectively to the homogeneous ideals $I$ of $k[x_0,\ldots,x_n]$ that are saturated; i.e., $I:(x_0,\dots,x_n)=I$.[6] This fact may be considered as a refined version of the projective Nullstellensatz.

We can give a coordinate-free analog of the above. Namely, given a finite-dimensional vector space $V$ over $k$, we let
$$\mathbb{P}(V)=\operatorname{Proj}k[V],$$
where $k[V]=\operatorname{Sym}(V^{*})$ is the symmetric algebra of $V^{*}$.[7] It is the projectivization of $V$; i.e., it parametrizes lines in $V$. There is a canonical surjective map $\pi:V\setminus\{0\}\to\mathbb{P}(V)$, which is defined using the chart described above.[8] One important use of the construction is this (cf., § Duality and linear system). A divisor $D$ on a projective variety $X$ corresponds to a line bundle $L$. One then sets
$$|D|=\mathbb{P}(\Gamma(X,L));$$
it is called the complete linear system of $D$.

Projective space over any scheme $S$ can be defined as a fiber product of schemes
$$\mathbb{P}^n_S=\mathbb{P}^n_{\mathbb{Z}}\times_{\operatorname{Spec}\mathbb{Z}}S .$$
If $\mathcal{O}(1)$ is the twisting sheaf of Serre on $\mathbb{P}^n_{\mathbb{Z}}$, we let $\mathcal{O}(1)$ denote the pullback of $\mathcal{O}(1)$ to $\mathbb{P}^n_S$; that is, $\mathcal{O}(1)=g^{*}(\mathcal{O}(1))$ for the canonical map $g:\mathbb{P}^n_S\to\mathbb{P}^n_{\mathbb{Z}}$.

A scheme $X\to S$ is called projective over $S$ if it factors as a closed immersion into some $\mathbb{P}^n_S$ followed by the projection to $S$.

A line bundle (or invertible sheaf) $\mathcal{L}$ on a scheme $X$ over $S$ is said to be very ample relative to $S$ if there is an immersion (i.e., an open immersion followed by a closed immersion)
$$i:X\to\mathbb{P}^n_S$$
for some $n$ so that $\mathcal{O}(1)$ pulls back to $\mathcal{L}$. Then an $S$-scheme $X$ is projective if and only if it is proper and there exists a very ample sheaf on $X$ relative to $S$. Indeed, if $X$ is proper, then an immersion corresponding to the very ample line bundle is necessarily closed. Conversely, if $X$ is projective, then the pullback of $\mathcal{O}(1)$ under the closed immersion of $X$ into a projective space is very ample. That "projective" implies "proper" is deeper: the main theorem of elimination theory.

By definition, a variety is complete, if it is proper over $k$. The valuative criterion of properness expresses the intuition that in a proper variety, there are no points "missing". There is a close relation between complete and projective varieties: on the one hand, projective space and therefore any projective variety is complete. The converse is not true in general. However, Chow's lemma states that every complete variety is dominated by a projective variety via a surjective birational morphism.

Some properties of a projective variety follow from completeness. For example,
$$\Gamma(X,\mathcal{O}_X)=k$$
for any projective variety $X$ over $k$.[10] This fact is an algebraic analogue of Liouville's theorem (any holomorphic function on a connected compact complex manifold is constant). In fact, the similarity between complex analytic geometry and algebraic geometry on complex projective varieties goes much further than this, as is explained below.

Quasi-projective varieties are, by definition, those which are open subvarieties of projective varieties. This class of varieties includes affine varieties. Affine varieties are almost never complete (or projective). In fact, a projective subvariety of an affine variety must have dimension zero. This is because only the constants are globally regular functions on a projective variety.
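To illustrate the saturation condition from the beginning of this discussion (an example of ours): in $k[x_0,x_1]$, consider $I=(x_0^2,\,x_0x_1)=x_0\cdot(x_0,x_1)$.

$$I:(x_0,x_1)^{\infty}=(x_0).$$
% Both I and its saturation (x_0) cut out the same closed subscheme of
% P^1, namely the reduced point (0:1); among all ideals defining this
% subscheme, only the saturated one, (x_0), corresponds to it under the
% bijection stated above.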
By definition, any homogeneous ideal in a polynomial ring yields a projective scheme (the ideal is required to be prime to give a variety). In this sense, examples of projective varieties abound. The following list mentions various classes of projective varieties which are noteworthy since they have been studied particularly intensely. The important class of complex projective varieties, i.e., the case $k = \mathbb{C}$, is discussed further below.

The product of two projective spaces is projective. In fact, there is the explicit immersion (called the Segre embedding)

$\mathbb{P}^n \times \mathbb{P}^m \to \mathbb{P}^{(n+1)(m+1)-1}, \qquad ([x_i], [y_j]) \mapsto [x_i y_j].$

As a consequence, the product of projective varieties over $k$ is again projective. The Plücker embedding exhibits a Grassmannian as a projective variety. Flag varieties, such as the quotient of the general linear group $\mathrm{GL}_n(k)$ modulo the subgroup of upper triangular matrices, are also projective, which is an important fact in the theory of algebraic groups.[11]

As the prime ideal $P$ defining a projective variety $X$ is homogeneous, the homogeneous coordinate ring

$R = k[x_0, \dots, x_n]/P$

is a graded ring, i.e., can be expressed as the direct sum of its graded components:

$R = \bigoplus_{n \in \mathbb{N}} R_n.$

There exists a polynomial $P$ such that $\dim R_n = P(n)$ for all sufficiently large $n$; it is called the Hilbert polynomial of $X$. It is a numerical invariant encoding some extrinsic geometry of $X$. The degree of $P$ is the dimension $r$ of $X$, and its leading coefficient times $r!$ is the degree of the variety $X$. The arithmetic genus of $X$ is $(-1)^r (P(0) - 1)$ when $X$ is smooth.

For example, the homogeneous coordinate ring of $\mathbb{P}^n$ is $k[x_0, \dots, x_n]$ and its Hilbert polynomial is $P(z) = \binom{z+n}{n}$; its arithmetic genus is zero.

If the homogeneous coordinate ring $R$ is an integrally closed domain, then the projective variety $X$ is said to be projectively normal. Note that, unlike normality, projective normality depends on $R$, i.e., on the embedding of $X$ into a projective space. The normalization of a projective variety is projective; in fact, it is the Proj of the integral closure of some homogeneous coordinate ring of $X$.

Let $X \subset \mathbb{P}^N$ be a projective variety. There are at least two equivalent ways to define the degree of $X$ relative to its embedding. The first way is to define it as the cardinality of the finite set

$\#(X \cap H_1 \cap \dots \cap H_d),$

where $d$ is the dimension of $X$ and the $H_i$'s are hyperplanes in "general position". This definition corresponds to an intuitive idea of a degree. Indeed, if $X$ is a hypersurface, then the degree of $X$ is the degree of the homogeneous polynomial defining $X$. The "general position" can be made precise, for example, by intersection theory; one requires that the intersection is proper and that the multiplicities of irreducible components are all one.

The other definition, which is mentioned in the previous section, is that the degree of $X$ is the leading coefficient of the Hilbert polynomial of $X$ times $(\dim X)!$. Geometrically, this definition means that the degree of $X$ is the multiplicity of the vertex of the affine cone over $X$.[12]

Let $V_1, \dots, V_r \subset \mathbb{P}^N$ be closed subschemes of pure dimensions that intersect properly (they are in general position).
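As a worked example combining these definitions (a standard computation, included here for concreteness), let $X = V(f) \subset \mathbb{P}^2$ be a smooth plane curve with $\deg f = d$. From the graded exact sequence $0 \to S(-d) \xrightarrow{\cdot f} S \to S/(f) \to 0$ with $S = k[x_0, x_1, x_2]$, the Hilbert polynomial is

$P(z) = \binom{z+2}{2} - \binom{z-d+2}{2} = dz - \frac{d(d-3)}{2},$

so $\deg P = 1 = \dim X$; the degree of $X$ is the leading coefficient $d$ times $1! $, i.e., $d$; and the arithmetic genus is

$(-1)^1 (P(0) - 1) = 1 + \frac{d(d-3)}{2} = \frac{(d-1)(d-2)}{2},$

in agreement with the genus formula quoted later in the article.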
If $m_i$ denotes the multiplicity of an irreducible component $Z_i$ in the intersection (i.e., the intersection multiplicity), then the generalization of Bézout's theorem says:[13]

$\sum_i m_i \deg Z_i = \prod_{j=1}^r \deg V_j$

(a worked example appears at the end of this passage). The intersection multiplicity $m_i$ can be defined as the coefficient of $Z_i$ in the intersection product $V_1 \cdot \dots \cdot V_r$ in the Chow ring of $\mathbb{P}^N$. In particular, if $H \subset \mathbb{P}^N$ is a hypersurface not containing $X$, then

$\deg(X) \deg(H) = \sum_i m_i \deg(Z_i),$

where the $Z_i$ are the irreducible components of the scheme-theoretic intersection of $X$ and $H$, with multiplicity (the length of the local ring) $m_i$.

A complex projective variety can be viewed as a compact complex manifold; the degree of the variety (relative to the embedding) is then the volume of the variety as a manifold with respect to the metric inherited from the ambient complex projective space. A complex projective variety can be characterized as a minimizer of the volume (in a sense).

Let $X$ be a projective variety and $L$ a line bundle on it. Then the graded ring

$R(X, L) = \bigoplus_{n=0}^{\infty} \Gamma(X, L^{\otimes n})$

is called the ring of sections of $L$. If $L$ is ample, then the Proj of this ring is $X$. Moreover, if $X$ is normal and $L$ is very ample, then $R(X, L)$ is the integral closure of the homogeneous coordinate ring of $X$ determined by $L$; i.e., $X \hookrightarrow \mathbb{P}^N$ so that $\mathcal{O}_{\mathbb{P}^N}(1)$ pulls back to $L$.[14]

For applications, it is useful to allow for divisors (or $\mathbb{Q}$-divisors), not just line bundles; assuming $X$ is normal, the resulting ring is then called a generalized ring of sections. If $K_X$ is a canonical divisor on $X$, then the generalized ring of sections of $K_X$ is called the canonical ring of $X$. If the canonical ring is finitely generated, then the Proj of the ring is called the canonical model of $X$. The canonical ring or model can then be used to define the Kodaira dimension of $X$.

Projective schemes of dimension one are called projective curves. Much of the theory of projective curves is about smooth projective curves, since the singularities of curves can be resolved by normalization, which consists in taking locally the integral closure of the ring of regular functions. Smooth projective curves are isomorphic if and only if their function fields are isomorphic. The study of finite extensions of $\mathbb{F}_p(t)$, or equivalently of smooth projective curves over $\mathbb{F}_p$, is an important branch in algebraic number theory.[15]

A smooth projective curve of genus one is called an elliptic curve. As a consequence of the Riemann–Roch theorem, such a curve can be embedded as a closed subvariety in $\mathbb{P}^2$. In general, any (smooth) projective curve can be embedded in $\mathbb{P}^3$ (for a proof, see Secant variety § Examples). Conversely, any smooth closed curve in $\mathbb{P}^2$ of degree three has genus one by the genus formula and is thus an elliptic curve. A smooth complete curve of genus greater than or equal to two is called a hyperelliptic curve if there is a finite morphism $C \to \mathbb{P}^1$ of degree two.[16]

Every irreducible closed subset of $\mathbb{P}^n$ of codimension one is a hypersurface; i.e., the zero set of some homogeneous irreducible polynomial.[17]
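The following standard computation illustrates Bézout's theorem from the beginning of this passage, including the role of multiplicities. Take the conic $C = V(x^2 + y^2 - z^2) \subset \mathbb{P}^2$ and a line $\ell$, so the predicted weighted count is $\deg C \cdot \deg \ell = 2$. The line $\ell = V(y)$ meets $C$ in the two points $(\pm 1 : 0 : 1)$, each with multiplicity one. The tangent line $\ell' = V(y - z)$ meets $C$ only at $(0 : 1 : 1)$: substituting $y = z$ into the equation of $C$ gives $x^2 = 0$, a single component with multiplicity $m = 2$, so the weighted count is again $2$.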
Another important invariant of a projective variety $X$ is the Picard group $\operatorname{Pic}(X)$ of $X$, the set of isomorphism classes of line bundles on $X$. It is isomorphic to $H^1(X, \mathcal{O}_X^*)$ and is therefore an intrinsic notion (independent of the embedding). For example, the Picard group of $\mathbb{P}^n$ is isomorphic to $\mathbb{Z}$ via the degree map. The kernel of $\deg : \operatorname{Pic}(X) \to \mathbb{Z}$ is not only an abstract abelian group, but there is a variety called the Jacobian variety of $X$, $\operatorname{Jac}(X)$, whose points equal this group. The Jacobian of a (smooth) curve plays an important role in the study of the curve. For example, the Jacobian of an elliptic curve $E$ is $E$ itself. For a curve $X$ of genus $g$, $\operatorname{Jac}(X)$ has dimension $g$.

Varieties, such as the Jacobian variety, which are complete and have a group structure are known as abelian varieties, in honor of Niels Abel. In marked contrast to affine algebraic groups such as $\mathrm{GL}_n(k)$, such groups are always commutative, whence the name. Moreover, they admit an ample line bundle and are thus projective. On the other hand, an abelian scheme may not be projective. Examples of abelian varieties are elliptic curves and Jacobian varieties.

Let $E \subset \mathbb{P}^n$ be a linear subspace; i.e., $E = \{s_0 = s_1 = \dots = s_r = 0\}$ for some linearly independent linear functionals $s_i$. Then the projection from $E$ is the (well-defined) morphism

$\phi : \mathbb{P}^n \setminus E \to \mathbb{P}^r, \qquad x \mapsto [s_0(x) : \dots : s_r(x)].$

The geometric description of this map is as follows:[18] the projection sends a point $x$ not in $E$ to the point where the linear span of $E$ and $x$ meets a fixed linear subspace disjoint from $E$.

Projections can be used to cut down the dimension in which a projective variety is embedded, up to finite morphisms. Start with some projective variety $X \subset \mathbb{P}^n$. If $n > \dim X$, the projection from a point not on $X$ gives $\phi : X \to \mathbb{P}^{n-1}$. Moreover, $\phi$ is a finite map to its image. Thus, iterating the procedure, one sees there is a finite map

$X \to \mathbb{P}^d, \qquad d = \dim X.$

This result is the projective analog of Noether's normalization lemma. (In fact, it yields a geometric proof of the normalization lemma.)

The same procedure can be used to show the following slightly more precise result: given a projective variety $X$ over a perfect field, there is a finite birational morphism from $X$ to a hypersurface $H$ in $\mathbb{P}^{d+1}$.[20] In particular, if $X$ is normal, then it is the normalization of $H$.

While a projective $n$-space $\mathbb{P}^n$ parameterizes the lines in an affine $(n+1)$-space, the dual of it parametrizes the hyperplanes on the projective space, as follows. Fix a field $k$. By $\breve{\mathbb{P}}^n_k$, we mean a projective $n$-space equipped with the construction

$f \mapsto H_f = \{\, \alpha_0 x_0 + \dots + \alpha_n x_n = 0 \,\} \subset \mathbb{P}^n_L,$

where $f : \operatorname{Spec} L \to \breve{\mathbb{P}}^n_k$ is an $L$-point of $\breve{\mathbb{P}}^n_k$ for a field extension $L$ of $k$, and $\alpha_i = f^*(u_i) \in L$ for homogeneous coordinates $u_i$ on $\breve{\mathbb{P}}^n_k$. For each $L$, the construction is a bijection between the set of $L$-points of $\breve{\mathbb{P}}^n_k$ and the set of hyperplanes on $\mathbb{P}^n_L$. Because of this, the dual projective space $\breve{\mathbb{P}}^n_k$ is said to be the moduli space of hyperplanes on $\mathbb{P}^n_k$.

A line in $\breve{\mathbb{P}}^n_k$ is called a pencil: it is a family of hyperplanes on $\mathbb{P}^n_k$ parametrized by $\mathbb{P}^1_k$.
If $V$ is a finite-dimensional vector space over $k$, then, for the same reason as above, $\mathbb{P}(V^*) = \operatorname{Proj}(\operatorname{Sym}(V))$ is the space of hyperplanes on $\mathbb{P}(V)$. An important case is when $V$ consists of sections of a line bundle. Namely, let $X$ be an algebraic variety, $L$ a line bundle on $X$, and $V \subset \Gamma(X, L)$ a vector subspace of finite positive dimension. Then there is a map[21]

$\phi_V : X \setminus B \to \mathbb{P}(V^*)$

determined by the linear system $V$, where $B$, called the base locus, is the intersection of the divisors of zero of the nonzero sections in $V$ (see Linear system of divisors § A map determined by a linear system for the construction of the map).

Let $X$ be a projective scheme over a field (or, more generally, over a Noetherian ring $A$). Cohomology of coherent sheaves $\mathcal{F}$ on $X$ satisfies the following important theorems due to Serre:

1. $H^p(X, \mathcal{F})$ is a finitely generated $A$-module for any $p$.
2. There is an integer $n_0$ (depending on $\mathcal{F}$) such that $H^p(X, \mathcal{F}(n)) = 0$ for all $n \geq n_0$ and $p \geq 1$.

These results are proven by reducing to the case $X = \mathbb{P}^r$ using the isomorphism

$H^p(X, \mathcal{F}) = H^p(\mathbb{P}^r, \mathcal{F}),$

where on the right-hand side $\mathcal{F}$ is viewed as a sheaf on the projective space by extension by zero.[22] The result then follows by a direct computation for $\mathcal{F} = \mathcal{O}_{\mathbb{P}^r}(n)$, $n$ any integer; the case of arbitrary $\mathcal{F}$ reduces to this case without much difficulty.[23]

As a corollary to 1. above, if $f$ is a projective morphism of noetherian schemes, then the higher direct images $R^p f_* \mathcal{F}$ are coherent. The same result holds for proper morphisms $f$, as can be shown with the aid of Chow's lemma.

Sheaf cohomology groups $H^i$ on a noetherian topological space vanish for $i$ strictly greater than the dimension of the space. Thus the quantity

$\chi(\mathcal{F}) = \sum_{i=0}^{\infty} (-1)^i \dim_k H^i(X, \mathcal{F}),$

called the Euler characteristic of $\mathcal{F}$, is a well-defined integer (for $X$ projective). One can then show $\chi(\mathcal{F}(n)) = P(n)$ for some polynomial $P$ over the rational numbers.[24] Applying this procedure to the structure sheaf $\mathcal{O}_X$, one recovers the Hilbert polynomial of $X$. In particular, if $X$ is irreducible and has dimension $r$, the arithmetic genus of $X$ is given by

$p_a = (-1)^r (\chi(\mathcal{O}_X) - 1),$

which is manifestly intrinsic; i.e., independent of the embedding.

The arithmetic genus of a hypersurface of degree $d$ in $\mathbb{P}^n$ is $\binom{d-1}{n}$. In particular, a smooth curve of degree $d$ in $\mathbb{P}^2$ has arithmetic genus $(d-1)(d-2)/2$. This is the genus formula.
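For $X = \mathbb{P}^r$ and $\mathcal{F} = \mathcal{O}(n)$, the polynomial behavior of $\chi$ can be checked directly. The snippet below (a sanity check of standard facts, not part of the article's source) evaluates the Hilbert polynomial $\binom{n+r}{r}$ as a polynomial in $n$, so that negative $n$ is allowed, and confirms that $\chi$ matches $h^0$ for $n \geq 0$, vanishes for $-r \leq n \leq -1$, and equals $(-1)^r \binom{-n-1}{r}$ (coming from $h^r$ via Serre duality, stated just below) for $n \leq -r-1$:

from math import comb, factorial

def chi(r: int, n: int) -> int:
    # Hilbert polynomial binom(n+r, r) = (n+1)(n+2)...(n+r) / r!,
    # evaluated as a polynomial so negative n is meaningful.
    prod = 1
    for i in range(1, r + 1):
        prod *= n + i
    return prod // factorial(r)   # exact: the product is divisible by r!

r = 3
for n in range(0, 10):            # n >= 0: chi = h^0 = dim of degree-n forms
    assert chi(r, n) == comb(n + r, r)
for n in range(-r, 0):            # -r <= n <= -1: all cohomology vanishes
    assert chi(r, n) == 0
for n in range(-12, -r):          # n <= -r-1: chi = (-1)^r * h^r
    assert chi(r, n) == (-1) ** r * comb(-n - 1, r)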
Let $X$ be a smooth projective variety all of whose irreducible components have dimension $n$. In this situation, the canonical sheaf $\omega_X$, defined as the sheaf of Kähler differentials of top degree (i.e., algebraic $n$-forms), is a line bundle. Serre duality states that for any locally free sheaf $\mathcal{F}$ on $X$,

$H^i(X, \mathcal{F}) \simeq H^{n-i}(X, \mathcal{F}^{\vee} \otimes \omega_X)',$

where the superscript prime refers to the dual space and $\mathcal{F}^{\vee}$ is the dual sheaf of $\mathcal{F}$. A generalization to projective, but not necessarily smooth, schemes is known as Verdier duality.

For a (smooth projective) curve $X$, $H^2$ and higher vanish for dimensional reasons, and the space of global sections of the structure sheaf is one-dimensional. Thus the arithmetic genus of $X$ is the dimension of $H^1(X, \mathcal{O}_X)$. By definition, the geometric genus of $X$ is the dimension of $H^0(X, \omega_X)$. Serre duality thus implies that the arithmetic genus and the geometric genus coincide. They will simply be called the genus of $X$.

Serre duality is also a key ingredient in the proof of the Riemann–Roch theorem. Since $X$ is smooth, there is an isomorphism of groups

$\operatorname{Cl}(X) \to \operatorname{Pic}(X), \qquad D \mapsto \mathcal{O}(D)$

from the group of (Weil) divisors modulo principal divisors to the group of isomorphism classes of line bundles. A divisor corresponding to $\omega_X$ is called the canonical divisor and is denoted by $K$. Let $l(D)$ be the dimension of $H^0(X, \mathcal{O}(D))$. Then the Riemann–Roch theorem states: if $g$ is the genus of $X$, then

$l(D) - l(K - D) = \deg D + 1 - g$

for any divisor $D$ on $X$. By Serre duality, this is the same as

$\chi(\mathcal{O}(D)) = \deg D + 1 - g,$

which can be readily proved.[25] A generalization of the Riemann–Roch theorem to higher dimension is the Hirzebruch–Riemann–Roch theorem, as well as the far-reaching Grothendieck–Riemann–Roch theorem.

Hilbert schemes parametrize all closed subvarieties of a projective scheme $X$ in the sense that the points (in the functorial sense) of $H$ correspond to the closed subschemes of $X$. As such, the Hilbert scheme is an example of a moduli space, i.e., a geometric object whose points parametrize other geometric objects. More precisely, the Hilbert scheme parametrizes closed subvarieties whose Hilbert polynomial equals a prescribed polynomial $P$.[26] It is a deep theorem of Grothendieck that there is a scheme[27] $H_X^P$ over $k$ such that, for any $k$-scheme $T$, there is a bijection

$\{\, \text{morphisms } T \to H_X^P \,\} \;\leftrightarrow\; \{\, \text{closed subschemes of } X \times_k T \text{ flat over } T \text{ with Hilbert polynomial } P \,\}.$

The closed subscheme of $X \times H_X^P$ that corresponds to the identity map $H_X^P \to H_X^P$ is called the universal family.

For $P(z) = \binom{z+r}{r}$, the Hilbert scheme $H_{\mathbb{P}^n}^P$ is called the Grassmannian of $r$-planes in $\mathbb{P}^n$ and, if $X$ is a projective scheme, $H_X^P$ is called the Fano scheme of $r$-planes on $X$.[28]

In this section, all algebraic varieties are complex algebraic varieties. A key feature of the theory of complex projective varieties is the combination of algebraic and analytic methods. The transition between these theories is provided by the following link: since any complex polynomial is also a holomorphic function, any complex variety $X$ yields a complex analytic space, denoted $X(\mathbb{C})$. Moreover, geometric properties of $X$ are reflected by those of $X(\mathbb{C})$. For example, the latter is a complex manifold if and only if $X$ is smooth; it is compact if and only if $X$ is proper over $\mathbb{C}$.

Complex projective space is a Kähler manifold. This implies that, for any projective algebraic variety $X$, $X(\mathbb{C})$ is a compact Kähler manifold. The converse is not in general true, but the Kodaira embedding theorem gives a criterion for a Kähler manifold to be projective. In low dimensions there are results in this direction: for example, every compact Riemann surface (i.e., every one-dimensional compact complex manifold) is projective.

Chow's theorem provides a striking way to go the other way, from analytic to algebraic geometry. It states that every analytic subvariety of a complex projective space is algebraic. The theorem may be interpreted as saying that a holomorphic function satisfying a certain growth condition is necessarily algebraic: "projective" provides this growth condition. One can deduce from the theorem, for example, that meromorphic functions on a projective variety are rational. Chow's theorem can be shown via Serre's GAGA principle, whose main theorem states that, for a projective variety $X$ over $\mathbb{C}$, the natural functor from coherent sheaves on $X$ to coherent analytic sheaves on $X(\mathbb{C})$ is an equivalence of categories, compatible with cohomology.

The complex manifold associated to an abelian variety $A$ over $\mathbb{C}$ is a compact complex Lie group.
These can be shown to be of the form

$\mathbb{C}^g / L$

and are also referred to as complex tori. Here, $g$ is the dimension of the torus and $L$ is a lattice (also referred to as a period lattice).

According to the uniformization theorem already mentioned above, any torus of dimension 1 arises from an abelian variety of dimension 1, i.e., from an elliptic curve. In fact, the Weierstrass elliptic function $\wp$ attached to $L$ satisfies a certain differential equation, and as a consequence it defines a closed immersion[33]

$\mathbb{C}/L \to \mathbb{P}^2, \qquad z \mapsto (\wp(z) : \wp'(z) : 1).$

There is a $p$-adic analog, the $p$-adic uniformization theorem.

For higher dimensions, the notions of complex abelian varieties and complex tori differ: only polarized complex tori come from abelian varieties.

The fundamental Kodaira vanishing theorem states that for an ample line bundle $\mathcal{L}$ on a smooth projective variety $X$ over a field of characteristic zero,

$H^i(X, \mathcal{L} \otimes \omega_X) = 0$

for $i > 0$, or, equivalently by Serre duality, $H^i(X, \mathcal{L}^{-1}) = 0$ for $i < n$.[34] The first proof of this theorem used analytic methods of Kähler geometry, but a purely algebraic proof was found later. Kodaira vanishing in general fails for a smooth projective variety in positive characteristic. Kodaira's theorem is one of various vanishing theorems, which give criteria for higher sheaf cohomologies to vanish. Since the Euler characteristic of a sheaf (see above) is often more manageable than individual cohomology groups, such theorems often have important consequences for the geometry of projective varieties.[35]
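A one-line illustration of how a vanishing statement converts the Euler characteristic into actual dimensions (a standard consequence of Serre duality on curves, not Kodaira's theorem itself): for a line bundle $L$ on a smooth projective curve $X$ of genus $g$,

$\deg L > 2g - 2 \;\Longrightarrow\; H^1(X, L) \cong H^0(X, \omega_X \otimes L^{-1})' = 0,$

since $\deg(\omega_X \otimes L^{-1}) < 0$; Riemann–Roch then gives the exact dimension $h^0(X, L) = \chi(L) = \deg L + 1 - g$.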
https://en.wikipedia.org/wiki/Projective_scheme
In linguistic morphology, inflection (less commonly, inflexion) is a process of word formation[1] in which a word is modified to express different grammatical categories such as tense, case, voice, aspect, person, number, gender, mood, animacy, and definiteness.[2] The inflection of verbs is called conjugation, while the inflection of nouns, adjectives, adverbs, etc.[a] can be called declension.

An inflection expresses grammatical categories with affixation (such as prefix, suffix, infix, circumfix, and transfix), apophony (as in Indo-European ablaut), or other modifications.[3] For example, the Latin verb ducam, meaning "I will lead", includes the suffix -am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive). The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the word lead is not inflected for any of person, number, or tense; it is simply the bare form of a verb.

The inflected form of a word often contains both one or more free morphemes (a unit of meaning which can stand by itself as a word) and one or more bound morphemes (a unit of meaning which cannot stand alone as a word). For example, the English word cars is a noun that is inflected for number, specifically to express the plural; the content morpheme car is unbound because it could stand alone as a word, while the suffix -s is bound because it cannot stand alone as a word. These two morphemes together form the inflected word cars.

Words that are never subject to inflection are said to be invariant; for example, the English verb must is an invariant item: it never takes a suffix or changes form to signify a different grammatical category. Its categories can be determined only from its context. Languages that seldom make use of inflection, such as English, are said to be analytic. Analytic languages that do not make use of derivational morphemes, such as Standard Chinese, are said to be isolating.

Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known as concord or agreement. For example, in "the man jumps", "man" is a singular noun, so "jump" is constrained in the present tense to use the third person singular suffix "s".

Languages that have some degree of inflection are synthetic languages. They can be highly inflected (such as Georgian or Kichwa), moderately inflected (such as Russian or Latin), or weakly inflected (such as English), but not uninflected (such as Chinese). Languages that are so inflected that a sentence can consist of a single highly inflected word (such as many Native American languages) are called polysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such as Finnish, are known as agglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin and German) are called fusional.

In English, most nouns are inflected for number with the inflectional plural affix -s (as in "dog" → "dog-s"), and most English verbs are inflected for tense with the inflectional past tense affix -ed (as in "call" → "call-ed"). English also inflects verbs by affixation to mark the third person singular in the present tense (with -s) and the present participle (with -ing). English short adjectives are inflected to mark comparative and superlative forms (with -er and -est respectively).
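The regular English affixes just described are purely concatenative, so the simplest possible model of them is string suffixation. The following naive sketch (ours, and deliberately ignoring real English spelling rules such as consonant doubling, e-deletion, and irregular forms) only illustrates the idea:

REGULAR_SUFFIXES = {
    "plural": "s",                 # dog -> dogs
    "past": "ed",                  # call -> called
    "third_singular": "s",         # jump -> jumps
    "present_participle": "ing",   # call -> calling
    "comparative": "er",           # tall -> taller
    "superlative": "est",          # tall -> tallest
}

def inflect(stem: str, category: str) -> str:
    # Pure concatenation; real orthography (e.g. stop -> stopped) needs more rules.
    return stem + REGULAR_SUFFIXES[category]

assert inflect("dog", "plural") == "dogs"
assert inflect("call", "past") == "called"
assert inflect("tall", "superlative") == "tallest"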
There are eight regular inflectional affixes in the English language.[4][5] Despite the march toward regularization, modern English retains traces of its ancestry, with a minority of its words still using inflection by ablaut (sound change, mostly in verbs) and umlaut (a particular type of sound change, mostly in nouns), as well as long-short vowel alternation. For example: sing/sang/sung (ablaut), foot/feet (umlaut), and keep/kept (vowel alternation). For details, see English plural, English verbs, and English irregular verbs.

When a given word class is subject to inflection in a particular language, there are generally one or more standard patterns of inflection (the paradigms described below) that words in that class may follow. Words which follow such a standard pattern are said to be regular; those that inflect differently are called irregular. For instance, many languages that feature verb inflection have both regular verbs and irregular verbs. In English, regular verbs form their past tense and past participle with the ending -[e]d. Therefore, verbs like play, arrive and enter are regular, while verbs like sing, keep and go are irregular. Irregular verbs often preserve patterns that were regular in past forms of the language, but which have now become anomalous; in rare cases, there are regular verbs that were irregular in past forms of the language. (For more details see English verbs and English irregular verbs.)

Other types of irregular inflected form include irregular plural nouns, such as the English mice, children and women (see English plural) and the French yeux (the plural of œil, "eye"); and irregular comparative and superlative forms of adjectives or adverbs, such as the English better and best (which correspond to the positive form good or well). Irregularities can have four basic causes. For more details on some of the considerations that apply to regularly and irregularly inflected forms, see the article on regular and irregular verbs.

Two traditional grammatical terms refer to inflections of specific word classes: an organized list of the inflected forms of a given lexeme or root word is called its declension if it is a noun, or its conjugation if it is a verb.

Below is the declension of the English pronoun I, which is inflected for case and number:

                        singular    plural
subject                 I           we
object                  me          us
possessive determiner   my          our
possessive pronoun      mine        ours
reflexive               myself      ourselves

The pronoun who is also inflected according to case. Its declension is defective, in the sense that it lacks a reflexive form.

The verb to arrive is conjugated in the indicative mood by suffixes that inflect it for person, number, and tense (present: I/you/we/they arrive, he/she/it arrives; past: arrived for all persons). The non-finite forms arrive (bare infinitive), arrived (past participle) and arriving (gerund/present participle), although not inflected for person or number, can also be regarded as part of the conjugation of the verb to arrive. Compound verb forms, such as I have arrived, I had arrived, or I will arrive, can also be included in the conjugation of the verb for didactic purposes, but they are not overt inflections of arrive; in such covert forms the relevant inflections do not occur in the main verb but are carried by the auxiliaries.

An inflectional paradigm refers to a pattern (usually a set of inflectional endings) where a class of words follow the same pattern. Nominal inflectional paradigms are called declensions, and verbal inflectional paradigms are termed conjugations. For instance, there are five types of Latin declension. Words that belong to the first declension usually end in -a and are usually feminine. These words share a common inflectional framework.
In Old English, nouns are divided into two major categories of declension, the strong and the weak ones. The terms "strong declension" and "weak declension" are primarily relevant to well-known dependent-marking languages (such as the Indo-European languages, or Japanese). In dependent-marking languages, nouns in adpositional (prepositional or postpositional) phrases can carry inflectional morphemes. In head-marking languages, the adpositions can carry the inflection in adpositional phrases. This means that these languages will have inflected adpositions. In Western Apache (San Carlos dialect), the postposition -ká’ 'on' is inflected for person and number with prefixes. Traditional grammars have specific terms for inflections of nouns and verbs, but not for those of adpositions.

Inflection is the process of adding inflectional morphemes that modify a verb's tense, mood, aspect, voice, person, or number, or a noun's case, gender, or number, rarely affecting the word's meaning or class. Examples of applying inflectional morphemes to words are adding -s to the root dog to form dogs and adding -ed to wait to form waited. In contrast, derivation is the process of adding derivational morphemes, which create a new word from existing words and change the semantic meaning or the part of speech of the affected word, such as by changing a noun to a verb.[6]

Distinctions between verbal moods are mainly indicated by derivational morphemes. Words are rarely listed in dictionaries on the basis of their inflectional morphemes (in which case they would be lexical items). However, they often are listed on the basis of their derivational morphemes. For instance, English dictionaries list readable and readability, words with derivational suffixes, along with their root read. However, no traditional English dictionary lists book as one entry and books as a separate entry; the same goes for jump and jumped.

Languages that add inflectional morphemes to words are sometimes called inflectional languages, which is a synonym for inflected languages. Morphemes may be added in several different ways. Reduplication is a morphological process where a constituent is repeated. The direct repetition of a word or root is called total reduplication (or full reduplication). The repetition of a segment is referred to as partial reduplication. Reduplication can serve both derivational and inflectional functions.

Palancar and Léonard provided an example from Tlatepuzco Chinantec (an Oto-Manguean language spoken in southern Mexico), where tones are able to distinguish mood, person, and number.[12][13] Case can be distinguished with tone as well, as in the Maasai language (a Nilo-Saharan language spoken in Kenya and Tanzania) (Hyman, 2016).[14]

Because the Proto-Indo-European language was highly inflected, all of its descendant Indo-European languages, such as Albanian, Armenian, English, German, Ukrainian, Russian, Persian, Kurdish, Italian, Irish, Spanish, French, Hindi, Marathi, Urdu, Bengali, and Nepali, are inflected to a greater or lesser extent. In general, older Indo-European languages such as Latin, Ancient Greek, Old English, Old Norse, Old Church Slavonic and Sanskrit are extensively inflected because of their temporal proximity to Proto-Indo-European. Deflexion has caused modern versions of some Indo-European languages that were previously highly inflected to be much less so; an example is Modern English, as compared to Old English.
In general, languages where deflexion occurs replace inflectional complexity with a more rigorous word order, which supplies the lost inflectional details. Most Slavic languages and some Indo-Aryan languages are an exception to the general Indo-European deflexion trend, continuing to be highly inflected (in some cases acquiring additional inflectional complexity and grammatical genders, as in Czech and Marathi).

Old English was a moderately inflected language, using an extensive case system similar to that of modern Icelandic, Faroese or German. Middle and Modern English lost progressively more of the Old English inflectional system. Modern English is considered a weakly inflected language, since its nouns have only vestiges of inflection (plurals, the pronouns), and its regular verbs have only four forms: an inflected form for the past indicative and subjunctive (looked), an inflected form for the third-person-singular present indicative (looks), an inflected form for the present participle (looking), and an uninflected form for everything else (look). While the English possessive indicator 's (as in "Jennifer's book") is a remnant of the Old English genitive case suffix, it is now considered by syntacticians not to be a suffix but a clitic,[15] although some linguists argue that it has properties of both.[16]

Old Norse was inflected, but modern Swedish, Norwegian, and Danish have lost much of their inflection. Grammatical case has largely died out, with the exception of pronouns, just as in English. However, adjectives, nouns, determiners and articles still have different forms according to grammatical number and grammatical gender. Danish and Swedish only inflect for two different genders, while Norwegian has to some degree retained the feminine forms and inflects for three grammatical genders like Icelandic. However, in comparison to Icelandic, there are considerably fewer feminine forms left in the language.

In comparison, Icelandic preserves almost all of the inflections of Old Norse and remains heavily inflected. It retains all the grammatical cases from Old Norse and is inflected for number and three different grammatical genders. The dual number forms are, however, almost completely lost in comparison to Old Norse. Unlike other Germanic languages, nouns are inflected for definiteness in all Scandinavian languages (for example, Norwegian (Nynorsk) bil 'car' versus bilen 'the car'). Adjectives and participles are also inflected for definiteness in all Scandinavian languages, as in Proto-Germanic.

Modern German remains moderately inflected, retaining four noun cases, although the genitive started falling into disuse in all but formal writing in Early New High German. The case system of Dutch, simpler than that of German, is also simplified in common usage. Afrikaans, recognized as a distinct language in its own right rather than a Dutch dialect only in the early 20th century, has lost almost all inflection.

The Romance languages, such as Spanish, Italian, French, Portuguese and especially – with its many cases – Romanian, have more overt inflection than English, especially in verb conjugation. Adjectives, nouns and articles are considerably less inflected than verbs, but they still have different forms according to number and grammatical gender.

Latin, the mother tongue of the Romance languages, was highly inflected; nouns and adjectives had different forms according to seven grammatical cases (including five major ones), with five major patterns of declension, and three genders instead of the two found in most Romance tongues.
There were four patterns of conjugation in six tenses, three moods (indicative, subjunctive, imperative, plus the infinitive, participle, gerund, gerundive, and supine) and two voices (passive and active), all overtly expressed by affixes (passive voice forms were periphrastic in three tenses).

The Baltic languages are highly inflected. Nouns and adjectives are declined in up to seven overt cases. Additional cases are defined in various covert ways. For example, an inessive case, an illative case, an adessive case and an allative case are borrowed from Finnic. Latvian has only one overt locative case, but it syncretizes the above four cases to the locative, marking them by differences in the use of prepositions.[17] Lithuanian breaks them out of the genitive case, accusative case and locative case by using different postpositions.[18]

The dual form is obsolete in standard Latvian, and nowadays it is also considered nearly obsolete in standard Lithuanian. For instance, in standard Lithuanian it is normal to say "dvi varnos (plural) – two crows" instead of "dvi varni (dual)". Adjectives, pronouns, and numerals are declined for number, gender, and case to agree with the noun they modify or for which they substitute. Baltic verbs are inflected for tense, mood, aspect, and voice. They agree with the subject in person and number (not in all forms in modern Latvian).

All Slavic languages make use of a high degree of inflection, typically having six or seven cases and three genders for nouns and adjectives. However, the overt case system has disappeared almost completely in modern Bulgarian and Macedonian. Most verb tenses and moods are also formed by inflection (however, some are periphrastic, typically the future and conditional). Inflection is also present in adjective comparison and word derivation. Declensional endings depend on case (nominative, genitive, dative, accusative, locative, instrumental, vocative), number (singular, dual or plural), gender (masculine, feminine, neuter) and animacy (animate vs. inanimate). Unusually among language families, declension in most Slavic languages also depends on whether the word is a noun or an adjective.

Slovene and the Sorbian languages use a rare third number (in addition to singular and plural), known as the dual (in the case of some words the dual has survived also in Polish and other Slavic languages). Modern Russian, Serbian and Czech also use a more complex form of the dual, but this misnomer applies instead to the numbers 2, 3, 4, and larger numbers ending in 2, 3, or 4 (with the exception of the teens, which are handled as plural; thus, 102 is dual, but 12 or 127 are not). In addition, in some Slavic languages, such as Polish, word stems are frequently modified by the addition or absence of endings, resulting in consonant and vowel alternation.

Modern Standard Arabic (also called Literary Arabic) is an inflected language. It uses a system of independent and suffix pronouns classified by person and number, and verbal inflections marking person and number. Suffix pronouns are used as markers of possession and as objects of verbs and prepositions. The tatweel (ـــ) marks where the verb stem, verb form, noun, or preposition is placed.[19]

Arabic regional dialects (e.g. Moroccan Arabic, Egyptian Arabic, Gulf Arabic), used for everyday communication, tend to have less inflection than the more formal Literary Arabic.
For example, in Jordanian Arabic, the second- and third-person feminine plurals (أنتنّ antunna and هنّ hunna) and their respective unique conjugations are lost and replaced by the masculine (أنتم antum and هم hum), whereas in Lebanese and Syrian Arabic, هم hum is replaced by هنّ hunna. In addition, the system known as ʾIʿrāb places vowel suffixes on each verb, noun, adjective, and adverb, according to its function within a sentence and its relation to surrounding words.[19]

The Uralic languages are agglutinative, following from the agglutination in Proto-Uralic. The largest languages are Hungarian, Finnish, and Estonian—all European Union official languages. Uralic inflection is, or is developed from, affixing. Grammatical markers directly added to the word perform the same function as prepositions in English. Almost all words are inflected according to their roles in the sentence: verbs, nouns, pronouns, numerals, adjectives, and some particles. Hungarian and Finnish, in particular, often simply concatenate suffixes. For example, Finnish talossanikinko "in my house, too?" consists of talo-ssa-ni-kin-ko. However, in the Finnic languages (Finnish, Estonian, etc.) and the Sami languages, there are processes which affect the root, particularly consonant gradation. The original suffixes may disappear (and appear only by liaison), leaving behind the modification of the root. This process is extensively developed in Estonian and Sami, and makes them also inflected, not only agglutinating, languages. The Estonian illative case, for example, is expressed by a modified root: maja → majja (historical form *maja-han).

Though Altaic is widely considered to be a sprachbund by linguists, the three language families united by a small subset of linguists as the Altaic language family—Turkic, Mongolic, and Manchu-Tungus—are agglutinative. The largest languages are Turkish, Azerbaijani and Uzbek—all Turkic languages. Altaic inflection is, or is developed from, affixing. Grammatical markers directly added to the word perform the same function as prepositions in English. Almost all words are inflected according to their roles in the sentence: verbs, nouns, pronouns, numerals, adjectives, and some particles.

Basque, a language isolate, is a highly inflected language, heavily inflecting both nouns and verbs. Noun phrase morphology is agglutinative and consists of suffixes which simply attach to the end of a stem. These suffixes are in many cases fused with the article (-a for singular and -ak for plural), which in general is required to close a noun phrase in Basque if no other determiner is present; unlike an article in many languages, it can only partially be correlated with the concept of definiteness. Proper nouns do not take an article, and indefinite nouns without the article (called mugagabe in Basque grammar) are highly restricted syntactically.

Basque is an ergative language, meaning that inflectionally the single argument (subject) of an intransitive verb is marked in the same way as the direct object of a transitive verb. This is called the absolutive case and in Basque, as in most ergative languages, it is realized with a zero morph; in other words, it receives no special inflection. The subject of a transitive verb receives a special case suffix, called the ergative case.[20]

There is no case-marking concord in Basque, and case suffixes, including those fused with the article, are added only to the last word in a noun phrase. Plurality is not marked on the noun and is identified only in the article or other determiner, possibly fused with a case marker.
The examples below are in the absolutive case with zero case marking, and include the article only.[20] The noun phrase is declined for 11 cases: absolutive, ergative, dative, possessive-genitive, benefactive, comitative, instrumental, inessive, allative, ablative, and local-genitive. These are signaled by suffixes that vary according to the categories of singular, plural, indefinite, and proper noun, and many vary depending on whether the stem ends in a consonant or vowel. The singular and plural categories are fused with the article, and these endings are used when the noun phrase is not closed by any other determiner. This gives a potential 88 different forms, but the indefinite and proper noun categories are identical in all but the local cases (inessive, allative, ablative, local-genitive), and many other variations in the endings can be accounted for by phonological rules operating to avoid impermissible consonant clusters. Local case endings are not normally added to animate proper nouns. The precise meaning of the local cases can be further specified by additional suffixes added after the local case suffixes.[20]

Verb forms are extremely complex, agreeing with the subject, direct object, and indirect object; and they include forms that agree with a "dative of interest" for intransitive verbs, as well as allocutive forms where the verb form is altered if one is speaking to a close acquaintance. These allocutive forms also have different forms depending on whether the addressee is male or female. This is the only area in Basque grammar where gender plays any role at all.[20] Subordination could also plausibly be considered an inflectional category of the Basque verb, since subordination is signaled by prefixes and suffixes on the conjugated verb, further multiplying the number of potential forms.[21]

Transitivity is a thoroughgoing division of Basque verbs, and it is necessary to know the transitivity of a particular verb in order to conjugate it successfully. In the spoken language only a handful of commonly used verbs are fully conjugated in the present and simple past, most verbs being conjugated by means of an auxiliary which differs according to transitivity. The literary language includes a few more such verbs, but the number is still very small. Even these few verbs require an auxiliary to conjugate other tenses besides the present and simple past.[20]

The most common intransitive auxiliary is izan, which is also the verb for "to be". The most common transitive auxiliary is ukan, which is also the verb for "to have". (Other auxiliaries can be used in some of the tenses and may vary by dialect.) The compound tenses use an invariable form of the main verb (which appears in different forms according to the "tense group") and a conjugated form of the auxiliary. Pronouns are normally omitted if recoverable from the verb form. A couple of examples will have to suffice to demonstrate the complexity of the Basque verb:[20]

Liburu-ak saldu dizkiegu.
Book-PL.the sell AUX.3PL/ABS.3PL/DAT.1PL/ERG
"We sold the books to them."

Kafe-a gusta-tzen zaidak.
Coffee-the please-HAB AUX.ALLOC/M.3SG/ABS.1SG/DAT
"I like coffee." ("Coffee pleases me."; used when speaking to a male friend.)
The morphs that represent the various tense/person/case/mood categories of Basque verbs, especially in the auxiliaries, are so highly fused that segmenting them into individual meaningful units is nearly impossible, if not pointless. Considering the multitude of forms that a particular Basque verb can take, it seems unlikely that an individual speaker would have an opportunity to utter them all in his or her lifetime.[22]

Most languages in the Mainland Southeast Asia linguistic area (such as the varieties of Chinese, Vietnamese, and Thai) are not overtly inflected, or show very little overt inflection, and are therefore considered analytic languages (also known as isolating languages).

Standard Chinese does not possess overt inflectional morphology. While some languages indicate grammatical relations with inflectional morphemes, Chinese utilizes word order and particles. Consider the Latin sentences "Puer puellam videt" and "Puellam puer videt": both mean 'The boy sees the girl', because puer (boy) is singular nominative and puellam (girl) is singular accusative. Since the roles of puer and puellam have been marked with case endings, the change in position does not matter.

The situation is very different in Chinese. Since Modern Chinese makes no use of inflection, the meanings of wǒ ('I' or 'me') and tā ('he' or 'him') must be determined by their position. In Classical Chinese, pronouns were overtly inflected to mark case. However, these overt case forms are no longer used; most of the alternative pronouns are considered archaic in modern Mandarin Chinese. Classically, 我 (wǒ) was used solely as the first person accusative. 吾 (wú) was generally used as the first person nominative.[23]

Certain varieties of Chinese are known to express meaning by means of tone change, although further investigation is required. Note that tone change must be distinguished from tone sandhi. Tone sandhi is a compulsory change that occurs when certain tones are juxtaposed. Tone change, however, is a morphologically conditioned alternation and is used as an inflectional or a derivational strategy. Examples have been described from Taishan and Zhongshan (both Yue dialects spoken in Guangdong Province).[24]

The personal pronouns of the Sixian dialect (a dialect of Taiwanese Hakka)[25] have been compared with those of Zaiwa and Jingpho[26] (both Tibeto-Burman languages spoken in Yunnan and Burma); in such comparisons, superscripted numbers indicate the Chao tone numerals. In Shanghainese, the third-person singular pronoun is overtly inflected for case, and the first- and second-person singular pronouns exhibit a change in tone depending on case.

Japanese shows a high degree of overt inflection of verbs, less so of adjectives, and very little of nouns, but it is mostly strictly agglutinative and extremely regular. Fusion of morphemes also happens in colloquial speech, for example: the causative-passive 〜せられ〜 (-serare-) fuses into 〜され〜 (-sare-), as in 行かされる (ikasareru, "is made to go"), and the non-past progressive 〜ている (-teiru) fuses into 〜てる (-teru), as in 食べてる (tabeteru, "is eating"). Formally, every noun phrase must be marked for case, but this is done by invariable particles (clitic postpositions). Some grammarians consider Japanese particles to be separate words, and therefore not an inflection, while others consider agglutination a type of overt inflection, and therefore consider Japanese nouns to be overtly inflected.

Some auxiliary languages, such as Lingua Franca Nova, Glosa, and Frater, have no inflection.
Other auxiliary languages, such as Esperanto, Ido, and Interlingua, have comparatively simple inflectional systems.

In Esperanto, an agglutinative language, nouns and adjectives are inflected for case (nominative, accusative) and number (singular, plural), according to a simple paradigm without irregularities. Verbs are not inflected for person or number, but they are inflected for tense (past, present, future) and mood (indicative, infinitive, conditional, jussive). They also form active and passive participles, which may be past, present or future. All verbs are regular.

Ido has a different form for each verbal tense (past, present, future, volitive and imperative) plus an infinitive, and both a present and past participle. There are, though, no verbal inflections for person or number, and all verbs are regular. Nouns are marked for number (singular and plural), and the accusative case may be shown in certain situations, typically when the direct object of a sentence precedes its verb. On the other hand, adjectives are unmarked for gender, number or case (unless they stand on their own, without a noun, in which case they take on the same desinences as the missing noun would have taken). The definite article la ("the") remains unaltered regardless of gender or case, and also of number, except when there is no other word to show plurality. Pronouns are identical in all cases, though exceptionally the accusative case may be marked, as for nouns.

Interlingua, in contrast with the Romance languages, has almost no irregular verb conjugations, and its verb forms are the same for all persons and numbers. It does, however, have compound verb tenses similar to those in the Romance, Germanic, and Slavic languages: ille ha vivite, "he has lived"; illa habeva vivite, "she had lived". Nouns are inflected by number, taking a plural -s, but rarely by gender: only when referring to a male or female being. Interlingua has no noun-adjective agreement by gender, number, or case. As a result, adjectives ordinarily have no inflections. They may take the plural form if they are being used in place of a noun: le povres, "the poor".
https://en.wikipedia.org/wiki/Inflectional_morphology
Security testing is a process intended to detect flaws in the security mechanisms of an information system, and as such to help enable it to protect data and maintain functionality as intended.[1] Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements.

Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation.[2] The actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a security taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.

Integrity of information refers to protecting information from being modified by unauthorized parties. Authentication, by contrast, might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims to be, or assuring that a computer program is a trusted one.

A number of common terms are used for the delivery of security testing, including vulnerability assessment, security assessment, and penetration testing.
https://en.wikipedia.org/wiki/Security_testing
Mathematical induction is a method for proving that a statement $P(n)$ is true for every natural number $n$; that is, that the infinitely many cases $P(0), P(1), P(2), P(3), \dots$ all hold. This is done by first proving a simple case, then also showing that if we assume the claim is true for a given case, then the next case is also true. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder: mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).

A proof by induction consists of two cases. The first, the base case, proves the statement for $n = 0$ without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case $n = k$, then it must also hold for the next case $n = k + 1$. These two steps establish that the statement holds for every natural number $n$. The base case does not necessarily begin with $n = 0$, but often with $n = 1$, and possibly with any fixed natural number $n = N$, establishing the truth of the statement for all natural numbers $n \geq N$.

The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.[3]

Despite its name, mathematical induction differs fundamentally from inductive reasoning as used in philosophy, in which the examination of many cases results in a probable conclusion. The mathematical method examines infinitely many cases to prove a general statement, but it does so by a finite chain of deductive reasoning involving the variable $n$, which can take infinitely many values. The result is a rigorous proof of the statement, not an assertion of its probability.[4]

In 370 BC, Plato's Parmenides may have contained traces of an early example of an implicit inductive proof;[5] however, the earliest implicit proof by mathematical induction was written by al-Karaji around 1000 AD, who applied it to arithmetic sequences to prove the binomial theorem and properties of Pascal's triangle. Whilst the original work was lost, it was later referenced by Al-Samawal al-Maghribi in his treatise al-Bahir fi'l-jabr (The Brilliant in Algebra) in around 1150 AD.[6][7][8]

Katz says in his history of mathematics:

Another important idea introduced by al-Karaji and continued by al-Samaw'al and others was that of an inductive argument for dealing with certain arithmetic sequences. Thus al-Karaji used such an argument to prove the result on the sums of integral cubes already known to Aryabhata [...] Al-Karaji did not, however, state a general result for arbitrary n. He stated his theorem for the particular integer 10 [...] His proof, nevertheless, was clearly designed to be extendable to any other integer. [...] Al-Karaji's argument includes in essence the two basic components of a modern argument by induction, namely the truth of the statement for n = 1 (1 = 1³) and the deriving of the truth for n = k from that of n = k − 1.
Of course, this second component is not explicit since, in some sense, al-Karaji's argument is in reverse; that is, he starts from n = 10 and goes down to 1 rather than proceeding upward. Nevertheless, his argument in al-Fakhri is the earliest extant proof of the sum formula for integral cubes.[9]

In India, early implicit proofs by mathematical induction appear in Bhaskara's "cyclic method".[10] None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed)[11] was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first $n$ odd integers is $n^2$.

The earliest rigorous use of induction was by Gersonides (1288–1344).[12][13] The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent. The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became well known. The modern formal treatment of the principle came only in the 19th century, with George Boole,[14] Augustus De Morgan, Charles Sanders Peirce,[15][16] Giuseppe Peano, and Richard Dedekind.[10]

The simplest and most common form of mathematical induction infers that a statement involving a natural number $n$ (that is, an integer $n \geq 0$ or 1) holds for all values of $n$. The proof consists of two steps:

1. The base case: prove that the statement holds for the initial value.
2. The induction step: prove that if the statement holds for some value $n$, then it holds for $n + 1$.

The hypothesis in the induction step, that the statement holds for a particular $n$, is called the induction hypothesis or inductive hypothesis. To prove the induction step, one assumes the induction hypothesis for $n$ and then uses this assumption to prove that the statement holds for $n + 1$. Authors who prefer to define natural numbers to begin at 0 use that value in the base case; those who define natural numbers to begin at 1 use that value.

Mathematical induction can be used to prove the following statement $P(n)$ for all natural numbers $n$:

$P(n): \quad 0 + 1 + 2 + \dots + n = \frac{n(n+1)}{2}.$

This states a general formula for the sum of the natural numbers less than or equal to a given number; in fact an infinite sequence of statements: $0 = \tfrac{(0)(0+1)}{2}$, $0 + 1 = \tfrac{(1)(1+1)}{2}$, $0 + 1 + 2 = \tfrac{(2)(2+1)}{2}$, etc.

Proposition. For every $n \in \mathbb{N}$, $0 + 1 + 2 + \dots + n = \tfrac{n(n+1)}{2}$.

Proof. Let $P(n)$ be the statement $0 + 1 + 2 + \dots + n = \tfrac{n(n+1)}{2}$. We give a proof by induction on $n$.

Base case: Show that the statement holds for the smallest natural number $n = 0$. $P(0)$ is clearly true: $0 = \tfrac{0(0+1)}{2}$.

Induction step: Show that for every $k \geq 0$, if $P(k)$ holds, then $P(k+1)$ also holds.
Assume the induction hypothesis that for a particular $k$, the single case $n = k$ holds, meaning $P(k)$ is true:

$0 + 1 + \dots + k = \frac{k(k+1)}{2}.$

It follows that:

$(0 + 1 + 2 + \dots + k) + (k+1) = \frac{k(k+1)}{2} + (k+1).$

Algebraically, the right hand side simplifies as:

$\frac{k(k+1)}{2} + (k+1) = \frac{k(k+1) + 2(k+1)}{2} = \frac{(k+1)(k+2)}{2} = \frac{(k+1)((k+1)+1)}{2}.$

Equating the extreme left hand and right hand sides, we deduce that:

$0 + 1 + 2 + \dots + k + (k+1) = \frac{(k+1)((k+1)+1)}{2}.$

That is, the statement $P(k+1)$ also holds true, establishing the induction step.

Conclusion: Since both the base case and the induction step have been proved as true, by mathematical induction the statement $P(n)$ holds for every natural number $n$. Q.E.D.

Induction is often used to prove inequalities. As an example, we prove that $|\sin nx| \leq n|\sin x|$ for any real number $x$ and natural number $n$. At first glance, it may appear that a more general version, $|\sin nx| \leq n|\sin x|$ for any real numbers $n, x$, could be proven without induction; but the case $n = \tfrac{1}{2}$, $x = \pi$ shows it may be false for non-integer values of $n$. This suggests we examine the statement specifically for natural values of $n$, and induction is the readiest tool.

Proposition. For any $x \in \mathbb{R}$ and $n \in \mathbb{N}$, $|\sin nx| \leq n|\sin x|$.

Proof. Fix an arbitrary real number $x$, and let $P(n)$ be the statement $|\sin nx| \leq n|\sin x|$. We proceed by induction on $n$.

Base case: The calculation $|\sin 0x| = 0 \leq 0 = 0|\sin x|$ verifies $P(0)$.

Induction step: We show the implication $P(k) \implies P(k+1)$ for any natural number $k$. Assume the induction hypothesis: for a given value $n = k \geq 0$, the single case $P(k)$ is true. Using the angle addition formula and the triangle inequality, we deduce:

$|\sin(k+1)x| = |\sin kx \cos x + \sin x \cos kx|$   (angle addition)
$\leq |\sin kx \cos x| + |\sin x \cos kx|$   (triangle inequality)
$= |\sin kx|\,|\cos x| + |\sin x|\,|\cos kx|$
$\leq |\sin kx| + |\sin x|$   ($|\cos t| \leq 1$)
$\leq k|\sin x| + |\sin x|$   (induction hypothesis)
$= (k+1)|\sin x|.$

The inequality between the extreme left-hand and right-hand quantities shows that $P(k+1)$ is true, which completes the induction step.

Conclusion: The proposition $P(n)$ holds for all natural numbers $n$. Q.E.D.
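A quick numeric spot-check of the inequality (not a proof, of course — it samples finitely many cases, exactly what induction avoids):

import math
import random

for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    n = random.randrange(0, 20)
    # small tolerance guards against floating-point rounding
    assert abs(math.sin(n * x)) <= n * abs(math.sin(x)) + 1e-12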
In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven. All variants of induction are special cases of transfinite induction; see below.

If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following:

1. Showing that the statement holds when n = b.
2. Showing that if the statement holds for an arbitrary n ≥ b, then the same statement also holds for n + 1.

This can be used, for example, to show that $2^n \geq n + 5$ for n ≥ 3. In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5. This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n) then proving it with these two rules is equivalent with proving P(n + b) for all natural numbers n with an induction base case 0.[17]

Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows:

Base case: Showing that S(k) holds for k = 12 is simple: take three 4-dollar coins.

Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too. Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true.

Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete. In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars with any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m.

It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an induction step for n, and in each of those proves a base case and an induction step for m. See, for example, the proof of commutativity accompanying addition of natural numbers. More complicated arguments involving three or more counters are also possible.

The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n.

The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.
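The induction step in the coin example is constructive, so it can be unwound into a procedure that actually produces a combination for any k ≥ 12. A minimal sketch, assuming nothing beyond the argument above (the function name is ours):

```python
def coin_combination(k: int) -> tuple[int, int]:
    """Return (fours, fives) with 4*fours + 5*fives == k, for k >= 12.

    Unwinds the induction: start from the base case k = 12 (three
    4-dollar coins) and apply the step k -> k + 1 repeatedly.
    """
    if k < 12:
        raise ValueError("the claim only covers k >= 12")
    fours, fives = 3, 0  # base case: 12 = 4*3
    for _ in range(12, k):
        if fours > 0:      # swap a 4 for a 5: total grows by 1
            fours, fives = fours - 1, fives + 1
        else:              # only 5s in hand: swap three 5s for four 4s
            fours, fives = fours + 4, fives - 3
    assert 4 * fours + 5 * fives == k
    return fours, fives

print(coin_combination(16))  # (4, 0): four 4-dollar coins
```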
If one wishes to prove that a property P holds for all natural numbers less than or equal to a fixed N, proving that P satisfies the following conditions suffices:[18]

1. P holds for 0, and
2. for any natural number n < N, if P holds for n, then P holds for n + 1.

The most common form of proof by mathematical induction requires proving in the induction step that

$\forall k\,(P(k) \to P(k+1))$

whereupon the induction principle "automates" n applications of this step in getting from P(0) to P(n). This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor.

A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the induction step:

$\forall k\,(P(k) \to P(2k) \land P(2k+1))$

or equivalently

$\forall k\,\left(P\!\left(\left\lfloor \tfrac{k}{2} \right\rfloor\right) \to P(k)\right)$

The induction principle then "automates" log₂ n applications of this inference in getting from P(0) to P(n). In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number, as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation.

If traditional predecessor induction is interpreted computationally as an n-step loop, then prefix induction would correspond to a log-n-step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction.

Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement.[19]

One can take the idea a step further: one must prove

$\forall k\,\left(P\!\left(\left\lfloor \sqrt{k} \right\rfloor\right) \to P(k)\right)$

whereupon the induction principle "automates" log log n applications of this inference in getting from P(0) to P(n). This form of induction has been used, analogously, to study log-time parallel computation.[citation needed]

Another variant, called complete induction, course-of-values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the induction step easier to prove by using a stronger hypothesis: one proves the statement P(m + 1) under the assumption that P(n) holds for all natural numbers n less than m + 1; by contrast, the basic form only assumes P(m). The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the induction step. In fact, it can be shown that the two methods are actually equivalent, as explained below. In this form of complete induction, one still has to prove the base case, P(0), and it may even be necessary to prove extra base cases such as P(1) before the general argument applies, as in the example below of the Fibonacci number $F_n$.
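The computational contrast between the two schemes can be made concrete by counting loop iterations. A hedged sketch (the function names are ours): the first loop unwinds predecessor induction and takes n steps, the second unwinds prefix induction and takes about log₂ n steps.

```python
def steps_predecessor(n: int) -> int:
    """Iterations of an n-step loop unwinding predecessor induction:
    from P(0), apply P(k) -> P(k+1) until P(n) is reached."""
    steps, k = 0, 0
    while k < n:
        k += 1      # one use of the step P(k) -> P(k+1)
        steps += 1
    return steps    # equals n

def steps_prefix(n: int) -> int:
    """Iterations of a loop unwinding prefix induction: P(k) follows
    from P(floor(k/2)), so k is repeatedly halved (its binary
    representation loses its low bit) until 0 is reached."""
    steps, k = 0, n
    while k > 0:
        k //= 2     # one use of the step P(floor(k/2)) -> P(k)
        steps += 1
    return steps    # roughly log2(n) + 1 for n >= 1

assert steps_predecessor(1024) == 1024
assert steps_prefix(1024) == 11
```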
Although the form just described requires one to prove the base case, this is unnecessary if one can prove P(m) (assuming P(n) for all lower n) for all m ≥ 0. This is a special case of transfinite induction as described below, although it is no longer equivalent to ordinary induction. In this form the base case is subsumed by the case m = 0, where P(0) is proved with no other P(n) assumed; this case may need to be handled separately, but sometimes the same argument applies for m = 0 and m > 0, making the proof simpler and more elegant. In this method, however, it is vital to ensure that the proof of P(m) does not implicitly assume that m > 0, e.g. by saying "choose an arbitrary n < m", or by assuming that a set of m elements has an element.

Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. Suppose there is a proof of P(n) by complete induction. Then, this proof can be transformed into an ordinary induction proof by assuming a stronger inductive hypothesis. Let Q(n) be the statement "P(m) holds for all m such that 0 ≤ m ≤ n"; this becomes the inductive hypothesis for ordinary induction. We can then show Q(0) and Q(n + 1) for n ∈ ℕ assuming only Q(n), and show that Q(n) implies P(n).[20]

If, on the other hand, P(n) had been proven by ordinary induction, the proof would already effectively be one by complete induction: P(0) is proved in the base case, using no assumptions, and P(n + 1) is proved in the induction step, in which one may assume all earlier cases but need only use the case P(n).

Complete induction is most useful when several instances of the inductive hypothesis are required for each induction step. For example, complete induction can be used to show that

$F_n = \frac{\varphi^n - \psi^n}{\varphi - \psi}$

where $F_n$ is the n-th Fibonacci number, and $\varphi = \frac{1}{2}(1 + \sqrt{5})$ (the golden ratio) and $\psi = \frac{1}{2}(1 - \sqrt{5})$ are the roots of the polynomial $x^2 - x - 1$. By using the fact that $F_{n+2} = F_{n+1} + F_n$ for each $n \in \mathbb{N}$, the identity above can be verified by direct calculation for $F_{n+2}$ if one assumes that it already holds for both $F_{n+1}$ and $F_n$. To complete the proof, the identity must be verified in the two base cases: n = 0 and n = 1.

Another proof by complete induction uses the hypothesis that the statement holds for all smaller n more thoroughly. Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic. For proving the induction step, the induction hypothesis is that for a given m > 1 the statement holds for all smaller n > 1.
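As a numerical companion to the Fibonacci example, one can check the closed form against the recurrence for small indices; floating-point arithmetic suffices there, while the complete-induction proof is what guarantees the identity for all n. A sketch (the names are ours):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, root of x^2 - x - 1
PSI = (1 - math.sqrt(5)) / 2  # the other root

def fib(n: int) -> int:
    """Fibonacci numbers via the recurrence F(n+2) = F(n+1) + F(n)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Confirm F_n = (phi^n - psi^n) / (phi - psi) for n = 0..29.
for n in range(30):
    closed = (PHI**n - PSI**n) / (PHI - PSI)
    assert round(closed) == fib(n)
print("Closed form matches the recurrence for n = 0..29.")
```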
If m is prime then it is certainly a product of primes, and if not, then by definition it is a product: m = n₁n₂, where neither of the factors is equal to 1; hence neither is equal to m, and so both are greater than 1 and smaller than m. The induction hypothesis now applies to n₁ and n₂, so each one is a product of primes. Thus m is a product of products of primes, and hence by extension a product of primes itself.

We shall look to prove the same example as above, this time with strong induction. The statement remains the same:

$S(n):\ n \geq 12 \implies \exists\, a, b \in \mathbb{N}.\ n = 4a + 5b$

However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case.

Proof.

Base case: Show that S(k) holds for k = 12, 13, 14, 15:

$4 \cdot 3 + 5 \cdot 0 = 12$
$4 \cdot 2 + 5 \cdot 1 = 13$
$4 \cdot 1 + 5 \cdot 2 = 14$
$4 \cdot 0 + 5 \cdot 3 = 15$

The base case holds.

Induction step: Given some j > 15, assume S(m) holds for all m with 12 ≤ m < j. Prove that S(j) holds. Choosing m = j − 4, and observing that $15 < j \implies 12 \leq j - 4 < j$, shows that S(j − 4) holds by the inductive hypothesis. That is, the sum j − 4 can be formed by some combination of 4- and 5-dollar coins. Then, simply adding a 4-dollar coin to that combination yields the sum j. That is, S(j) holds.[21] Q.E.D.

Sometimes, it is more convenient to deduce backwards, proving the statement for n − 1, given its validity for n. However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the inequality of arithmetic and geometric means for all powers of 2, and then used backwards induction to show it for all natural numbers.[22][23]

The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:[24]

Base case: in a set of only one horse, there is only one color.

Induction step: assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, …, n, n + 1. Consider the sets {1, 2, 3, …, n} and {2, 3, 4, …, n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.

The base case n = 1 is trivial, and the induction step is correct in all cases n > 1.
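The complete-induction argument for the existence of prime factorizations is also constructive: either m is prime, or it splits into two smaller factors to which the hypothesis applies. The recursive sketch below mirrors that case split (the function name is ours):

```python
def prime_factorization(m: int) -> list[int]:
    """Factor m > 1 into primes, mirroring the complete-induction
    argument: if m is not prime, split m = n1 * n2 with both factors
    smaller than m, and apply the 'induction hypothesis' recursively."""
    if m < 2:
        raise ValueError("only defined for m > 1")
    for d in range(2, int(m**0.5) + 1):
        if m % d == 0:
            # m = d * (m // d): recurse on both smaller factors.
            return prime_factorization(d) + prime_factorization(m // d)
    return [m]  # no proper divisor found: m is prime

assert sorted(prime_factorization(360)) == [2, 2, 2, 3, 3, 5]
```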
However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for {1} and {2}.

In second-order logic, one can write down the "axiom of induction" as follows:

$\forall P\,\bigl(P(0) \land \forall k\,(P(k) \to P(k+1)) \to \forall n\,P(n)\bigr),$

where P(·) is a variable for predicates involving one natural number and k and n are variables for natural numbers. In words, the base case P(0) and the induction step (namely, that the induction hypothesis P(k) implies P(k + 1)) together imply that P(n) for any natural number n. The axiom of induction asserts the validity of inferring that P(n) holds for any natural number n from the base case and the induction step.

The first quantifier in the axiom ranges over predicates rather than over individual numbers. This is a second-order quantifier, which means that this axiom is stated in second-order logic. Axiomatizing arithmetic induction in first-order logic requires an axiom schema containing a separate axiom for each possible predicate. The article Peano axioms contains further discussion of this issue.

The axiom of structural induction for the natural numbers was first formulated by Peano, who used it to specify the natural numbers together with the following four other axioms:

1. 0 is a natural number.
2. The successor function s of every natural number yields a natural number, s(x) = x + 1.
3. The successor function is injective.
4. 0 is not in the range of s.

In first-order ZFC set theory, quantification over predicates is not allowed, but one can still express induction by quantification over sets:

$\forall A\,\bigl(0 \in A \land \forall k \in \mathbb{N}\,(k \in A \to (k+1) \in A) \to \mathbb{N} \subseteq A\bigr)$

A may be read as a set representing a proposition, and containing natural numbers, for which the proposition holds. This is not an axiom, but a theorem, given that natural numbers are defined in the language of ZFC set theory by axioms, analogous to Peano's. See construction of the natural numbers using the axiom of infinity and axiom schema of specification.

One variation of the principle of complete induction can be generalized for statements about elements of any well-founded set, that is, a set with an irreflexive relation < that contains no infinite descending chains. Every set representing an ordinal number is well-founded; the set of natural numbers is one of them.

Applied to a well-founded set, transfinite induction can be formulated as a single step. To prove that a statement P(n) holds for each ordinal number:

1. Show, for each ordinal number n, that if P(m) holds for all m < n, then P(n) also holds.

This form of induction, when applied to a set of ordinal numbers (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields.

Proofs by transfinite induction typically distinguish three cases:

1. when n is a minimal element, i.e. there is no element smaller than n;
2. when n has a direct predecessor, i.e. the set of elements which are smaller than n has a largest element;
3. when n has no direct predecessor, i.e. n is a so-called limit ordinal.

Strictly speaking, it is not necessary in transfinite induction to prove a base case, because it is a vacuous special case of the proposition that if P is true of all n < m, then P is true of m. It is vacuously true precisely because there are no values of n < m that could serve as counterexamples. So the special cases are special cases of the general case.

The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. It is strictly stronger than the well-ordering principle in the context of the other Peano axioms. Suppose the following:

1. The trichotomy axiom: for any natural numbers n and m, n ≤ m if and only if m is not less than n.
2. For any natural number n, n + 1 is greater than n.
3. For any natural number n, no natural number lies between n and n + 1.
4. No natural number is less than zero.

It can then be proved that induction, given the above-listed axioms, implies the well-ordering principle.
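Proof assistants build this axiom in as the recursor for the natural numbers. As a rough illustration in Lean 4 (the theorem name is ours), the second-order statement above can be stated over predicates P : Nat → Prop and proved directly:

```lean
-- A sketch: the induction axiom as a theorem about any predicate P.
theorem induction_axiom (P : Nat → Prop)
    (base : P 0) (step : ∀ k, P k → P (k + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base            -- the base case P(0)
  | succ k ih => exact step k ih  -- the step P(k) → P(k+1)
```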
The following proof uses complete induction and the first and fourth axioms.

Proof. Suppose there exists a non-empty set, S, of natural numbers that has no least element. Let P(n) be the assertion that n is not in S. Then P(0) is true, for if it were false then 0 is the least element of S. Furthermore, let n be a natural number, and suppose P(m) is true for all natural numbers m less than n + 1. Then if P(n + 1) is false, n + 1 is in S, thus being a minimal element in S, a contradiction. Thus P(n + 1) is true. Therefore, by the complete induction principle, P(n) holds for all natural numbers n; so S is empty, a contradiction. Q.E.D.

On the other hand, the set $\{(0, n) : n \in \mathbb{N}\} \cup \{(1, n) : n \in \mathbb{N}\}$, shown in the picture, is well-ordered[25] by the lexicographic order. Moreover, except for the induction axiom, it satisfies all Peano axioms, where Peano's constant 0 is interpreted as the pair (0, 0), and Peano's successor function is defined on pairs by succ(x, n) = (x, n + 1) for all $x \in \{0, 1\}$ and $n \in \mathbb{N}$.

As an example for the violation of the induction axiom, define the predicate P(x, n) as (x, n) = (0, 0) or (x, n) = succ(y, m) for some $y \in \{0, 1\}$ and $m \in \mathbb{N}$. Then the base case P(0, 0) is trivially true, and so is the induction step: if P(x, n), then P(succ(x, n)). However, P is not true for all pairs in the set, since P(1, 0) is false.

Peano's axioms with the induction principle uniquely model the natural numbers. Replacing the induction principle with the well-ordering principle allows for more exotic models that fulfill all the axioms.[25]

It is mistakenly printed in several books[25] and sources that the well-ordering principle is equivalent to the induction axiom. In the context of the other Peano axioms, this is not the case, but in the context of other axioms, they are equivalent;[25] specifically, the well-ordering principle implies the induction axiom in the context of the first two above-listed axioms and the axiom that every natural number is either 0 or n + 1 for some natural number n.

A common mistake in many erroneous proofs is to assume that n − 1 is a unique and well-defined natural number, a property which is not implied by the other Peano axioms.[25]
https://en.wikipedia.org/wiki/Mathematical_induction
Meta-reference (or metareference) is a category of self-references occurring in many media or media artifacts like published texts/documents, films, paintings, TV series, comic strips, or video games. It includes all references to, or comments on, a specific medium, medial artifact, or the media in general. These references and comments originate from a logically higher level (a "meta-level") within any given artifact, and draw attention to, or invite reflection about, media-related issues (e.g. the production, performance, or reception) of said artifact, specific other artifacts (as in parody), or parts, or the entirety, of the medial system. It is, therefore, the recipient's awareness of an artifact's medial quality that distinguishes meta-reference from more general forms of self-reference. Thus, meta-reference triggers media-awareness within the recipient, who, in turn, "becomes conscious of both the medial (or 'fictional' in the sense of artificial and, sometimes in addition, 'invented') status of the work" as well as "the fact that media-related phenomena are at issue, rather than (hetero-)references to the world outside the media."[1] Although certain devices, such as mise-en-abîme, may be conducive to meta-reference, they are not necessarily meta-referential themselves.[2] However, innately meta-referential devices (e.g. metalepsis) constitute a category of meta-references.

While meta-reference as a concept is not a new phenomenon and can be observed in very early works of art and media not tied to specific purposes (e.g. Homer's invocation of the muses at the beginning of the Odyssey in order to deliver the epic better), the term itself is relatively new.[3] Earlier discussions of meta-referential issues often opt for more specific terminology tied to the respective discipline. Notable discussions of meta-reference include, but are not limited to, William H. Gass's[4] and Robert Scholes's[5] exploration of metafiction, Victor Stoichita's examination of early modern meta-painting,[6] and Lionel Abel's[7] investigation of metatheatre. In the context of drama, meta-reference has also become colloquially known as the breaking of the fourth wall. The first study to underscore the problem resulting from the lack of cohesive terminology, as well as the necessity to acknowledge meta-reference as a transmedial and trans-generic phenomenon, was published in 2007 by Hauthal et al.[8] Publications by Nöth and Bishara[9] as well as Wolf[10] followed suit, raised similar concerns, included case studies from various media, and coined and helped establish the more uniform umbrella term meta-reference as defined above.

While every medium has the potential for meta-reference, some media can transport meta-reference more easily than others. Media that can easily realise their meta-referential potential include, for instance, literature, painting, and film. Although music can be meta-referential even outside the confines of lyrics, meta-reference in music is much harder to create or detect.[11][12] Music, therefore, would be a less typical medium for the occurrence of meta-reference. Nöth argues in this context that although non-verbal media can be the home of meta-reference, the contained meta-reference can only be implicit, because non-verbal media can only show similarities but never point directly (or explicitly) to meta-referential elements.[13] Others, however, argue that meta-reference is explicit as long as it is clear.
John Fowles begins chapter 13 of his novel The French Lieutenant's Woman with the words:

This story I am telling is all imagination. These characters I create never existed outside my own mind. If I have pretended until now to know my characters' mind and innermost thoughts, it is because I am writing in [...] a convention universally accepted at the time of my story: that the novelist stands next to God.[14] [emphases added]

This is an example of explicit meta-reference because the text draws attention to the fact that the novel the recipient is reading is merely a fiction created by the author. It also foregrounds the convention that readers of realist fiction accept the presence of an all-knowing narrator, and breaks it by allowing the narrator to take centre stage, which invites meta-reflections by the recipient.

In American comic books published by Marvel Comics, the character Deadpool is aware that he is a fictional comic book character. He commonly breaks the fourth wall, to humorous effect. To other non-aware characters in the story, Deadpool's self-awareness as a comic book character appears to be a form of psychosis. When other characters question whether Deadpool's real name is even Wade Wilson, he jokes that his true identity depends on which writer the reader prefers.[15]

The Truman Show is a movie that contains a high degree of meta-reference. Truman, the protagonist, is unaware that he is part of a reality TV show, but the audience knows about the artificiality of both Truman's life and, by extension, the movie that is being watched. This is underscored by putting emphasis on the production process of the fictional reality TV show, which makes the audience aware of the same features being used in the movie at the time of watching. Further examples of meta-reference in the movie include spotlights falling from the sky seemingly out of the blue, or a raincloud which is curiously only raining on Truman, following him around on Seahaven Beach. Both instances point to the artificiality of Truman's life as well as the film itself.

Other examples include films by Mel Brooks, such as Blazing Saddles, which becomes a story about the production of the film, and Silent Movie, a silent movie about producing a silent movie. Additionally, The Muppet Movie and its sequels frequently showed characters referring to the movie script to see what should happen next.

An example of meta-reference in painting is Manet's Balcony by René Magritte. It comments on another painting, The Balcony by Édouard Manet, by mimicking both the setting of the balcony as well as the poses of the depicted people, but places them in coffins. Thus, the recipient's attention is drawn to the fact that not only are the people in the painting long dead and only still "alive" in the representation, but arguably also that the artist (Manet) and the impressionist painting style are just as dead as the portrayed individuals. Furthermore, it is foregrounded that the impressionist painting style is just a style that may be copied, which further emphasises the fact that both works are only paintings created in a specific way.
https://en.wikipedia.org/wiki/Meta-reference
A cypherpunk is one who advocates the widespread use of strong cryptography and privacy-enhancing technologies as a means of effecting social and political change. The cypherpunk movement originated in the late 1980s and gained traction with the establishment of the "Cypherpunks" electronic mailing list in 1992, where informal groups of activists, technologists, and cryptographers discussed strategies to enhance individual privacy and resist state or corporate surveillance. Deeply libertarian in philosophy, the movement is rooted in principles of decentralization, individual autonomy, and freedom from centralized authority.[1][2] Its influence on society extends to the development of technologies that have reshaped global finance, communication, and privacy practices, such as the creation of Bitcoin and other cryptocurrencies, which embody cypherpunk ideals of decentralized and censorship-resistant money. The movement has also contributed to the mainstreaming of encryption in everyday technologies, such as secure messaging apps and privacy-focused web browsers.

Until about the 1970s, cryptography was mainly practiced in secret by military or spy agencies. However, that changed when two publications brought it into public awareness: the first publicly available work on public-key cryptography, by Whitfield Diffie and Martin Hellman,[3] and the US government publication of the Data Encryption Standard (DES), a block cipher which became very widely used.

The technical roots of cypherpunk ideas have been traced back to work by cryptographer David Chaum on topics such as anonymous digital cash and pseudonymous reputation systems, described in his paper "Security without Identification: Transaction Systems to Make Big Brother Obsolete" (1985).[4] In the late 1980s, these ideas coalesced into something like a movement.[4]

In late 1992, Eric Hughes, Timothy C. May, and John Gilmore founded a small group that met monthly at Gilmore's company Cygnus Solutions in the San Francisco Bay Area and was humorously termed cypherpunks by Jude Milhon at one of the first meetings, a term derived from cipher and cyberpunk.[5] In November 2006, the word was added to the Oxford English Dictionary.[6]

The Cypherpunks mailing list was started in 1992, and by 1994 had 700 subscribers.[5] At its peak, it was a very active forum with technical discussions ranging over mathematics, cryptography, computer science, political and philosophical discussion, personal arguments and attacks, etc., with some spam thrown in. An email from John Gilmore reports an average of 30 messages a day from December 1, 1996, to March 1, 1999, and suggests that the number was probably higher earlier.[7] The number of subscribers is estimated to have reached 2,000 in the year 1997.[5]

In early 1997, Jim Choate and Igor Chudov set up the Cypherpunks Distributed Remailer,[8] a network of independent mailing list nodes intended to eliminate the single point of failure inherent in a centralized list architecture.
At its peak, the Cypherpunks Distributed Remailer included at least seven nodes.[9] By mid-2005, al-qaeda.net ran the only remaining node.[10] In mid-2013, following a brief outage, the al-qaeda.net node's list software was changed from Majordomo to GNU Mailman,[11] and subsequently the node was renamed to cpunks.org.[12] The CDR architecture is now defunct, though the list administrator stated in 2013 that he was exploring a way to integrate this functionality with the new mailing list software.[11]

For a time, the cypherpunks mailing list was a popular tool with mailbombers,[13] who would subscribe a victim to the mailing list in order to cause a deluge of messages to be sent to him or her. (This was usually done as a prank, in contrast to the style of terrorist referred to as a mailbomber.) This prompted the mailing list sysop(s) to institute a reply-to-subscribe system. Approximately two hundred messages a day was typical for the mailing list, divided between personal arguments and attacks, political discussion, technical discussion, and early spam.[14][15]

The cypherpunks mailing list had extensive discussions of the public policy issues related to cryptography and of the politics and philosophy of concepts such as anonymity, pseudonyms, reputation, and privacy. These discussions continue both on the remaining node and elsewhere as the list has become increasingly moribund.[citation needed]

Events such as the GURPS Cyberpunk raid[16] lent weight to the idea that private individuals needed to take steps to protect their privacy. In its heyday, the list discussed public policy issues related to cryptography, as well as more practical nuts-and-bolts mathematical, computational, technological, and cryptographic matters. The list had a range of viewpoints and there was probably no completely unanimous agreement on anything. The general attitude, though, definitely put personal privacy and personal liberty above all other considerations.[17]

The list was discussing questions about privacy, government monitoring, corporate control of information, and related issues in the early 1990s that did not become major topics for broader discussion until at least ten years later. Some list participants were highly radical on these issues.[citation needed]

Those wishing to understand the context of the list might refer to the history of cryptography; in the early 1990s, the US government considered cryptography software a munition for export purposes (PGP source code was published as a paper book to bypass these regulations and demonstrate their futility). In 1992, a deal between NSA and SPA allowed export of cryptography based on 40-bit RC2 and RC4, which was considered relatively weak (and especially after SSL was created, there were many contests to break it). The US government had also tried to subvert cryptography through schemes such as Skipjack and key escrow.
It was also not widely known that all communications were logged by government agencies (which would later be revealed during the NSA and AT&T scandals), though this was taken as an obvious axiom by list members.[citation needed][18]

The original cypherpunk mailing list, and the first list spin-off, coderpunks, were originally hosted on John Gilmore's toad.com, but after a falling out with the sysop over moderation, the list was migrated to several cross-linked mail-servers in what was called the "distributed mailing list."[19][20] The coderpunks list, open by invitation only, existed for a time. Coderpunks took up more technical matters and had less discussion of public policy implications. There are several lists today that can trace their lineage directly to the original Cypherpunks list: the cryptography list (cryptography@metzdowd.com), the financial cryptography list (fc-announce@ifca.ai), and a small group of closed (invitation-only) lists as well.[citation needed]

Toad.com continued to run with the existing subscriber list, those that didn't unsubscribe, and was mirrored on the new distributed mailing list, but messages from the distributed list didn't appear on toad.com.[21] As the list faded in popularity, so too did it fade in the number of cross-linked subscription nodes.[citation needed]

To some extent, the cryptography list[22] acts as a successor to cypherpunks; it has many of the same people and continues some of the same discussions. However, it is a moderated list, considerably less zany and somewhat more technical.

A number of current systems in use trace to the mailing list, including Pretty Good Privacy, /dev/random in the Linux kernel (the actual code has been completely reimplemented several times since then) and today's anonymous remailers.[citation needed]

The basic ideas can be found in A Cypherpunk's Manifesto (Eric Hughes, 1993): "Privacy is necessary for an open society in the electronic age. ... We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy ... We must defend our own privacy if we expect to have any. ... Cypherpunks write code. We know that someone has to write software to defend privacy, and ... we're going to write it."[23]

Some are or were senior people at major hi-tech companies and others are well-known researchers (see list with affiliations below). The first mass media discussion of cypherpunks was in a 1993 Wired article by Steven Levy titled Crypto Rebels:

The people in this room hope for a world where an individual's informational footprints—everything from an opinion on abortion to the medical record of an actual abortion—can be traced only if the individual involved chooses to reveal them; a world where coherent messages shoot around the globe by network and microwave, but intruders and feds trying to pluck them out of the vapor find only gibberish; a world where the tools of prying are transformed into the instruments of privacy. There is only one way this vision will materialize, and that is by widespread use of cryptography. Is this technologically possible? Definitely. The obstacles are political—some of the most powerful forces in government are devoted to the control of these tools. In short, there is a war going on between those who would liberate crypto and those who would suppress it. The seemingly innocuous bunch strewn around this conference room represents the vanguard of the pro-crypto forces.
Though the battleground seems remote, the stakes are not: The outcome of this struggle may determine the amount of freedom our society will grant us in the 21st century. To the Cypherpunks, freedom is an issue worth some risk.[24]

The three masked men on the cover of that edition of Wired were prominent cypherpunks Tim May, Eric Hughes and John Gilmore. Later, Levy wrote a book, Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age,[25] covering the crypto wars of the 1990s in detail. "Code Rebels" in the title is almost synonymous with cypherpunks.

The term cypherpunk is mildly ambiguous. In most contexts it means anyone advocating cryptography as a tool for social change, social impact and expression. However, it can also be used to mean a participant in the Cypherpunks electronic mailing list described below. The two meanings obviously overlap, but they are by no means synonymous. Documents exemplifying cypherpunk ideas include Timothy C. May's The Crypto Anarchist Manifesto (1992),[26] The Cyphernomicon (1994),[27] and A Cypherpunk's Manifesto.[23]

A very basic cypherpunk issue is privacy in communications and data retention. John Gilmore said he wanted "a guarantee -- with physics and mathematics, not with laws -- that we can give ourselves real privacy of personal communications."[28] Such guarantees require strong cryptography, so cypherpunks are fundamentally opposed to government policies attempting to control the usage or export of cryptography, which remained an issue throughout the late 1990s. The Cypherpunk Manifesto stated "Cypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act."[23] This was a central issue for many cypherpunks. Most were passionately opposed to various government attempts to limit cryptography: export laws, promotion of limited key length ciphers, and especially escrowed encryption.

The questions of anonymity, pseudonymity and reputation were also extensively discussed. Arguably, the possibility of anonymous speech and publication is vital for an open society and genuine freedom of speech; this is the position of most cypherpunks.[29]

In general, cypherpunks opposed censorship and monitoring by government and police. In particular, the US government's Clipper chip scheme for escrowed encryption of telephone conversations (encryption supposedly secure against most attackers, but breakable by government) was seen as anathema by many on the list. This was an issue that provoked strong opposition and brought many new recruits to the cypherpunk ranks. List participant Matt Blaze found a serious flaw[30] in the scheme, helping to hasten its demise. Steven Schear first suggested the warrant canary in 2002 to thwart the secrecy provisions of court orders and national security letters.[31] As of 2013, warrant canaries were gaining commercial acceptance.[32]

An important set of discussions concerns the use of cryptography in the presence of oppressive authorities. As a result, cypherpunks have discussed and improved steganographic methods that hide the use of crypto itself, or that allow interrogators to believe that they have forcibly extracted hidden information from a subject. For instance, Rubberhose was a tool that partitioned and intermixed secret data on a drive with fake secret data, each of which was accessed via a different password. Interrogators, having extracted a password, are led to believe that they have indeed unlocked the desired secrets, whereas in reality the actual data is still hidden.
In other words, even its presence is hidden. Likewise, cypherpunks have also discussed under what conditions encryption may be used without being noticed by network monitoring systems installed by oppressive regimes.

As the Manifesto says, "Cypherpunks write code";[23] the notion that good ideas need to be implemented, not just discussed, is very much part of the culture of the mailing list. John Gilmore, whose site hosted the original cypherpunks mailing list, wrote: "We are literally in a race between our ability to build and deploy technology, and their ability to build and deploy laws and treaties. Neither side is likely to back down or wise up until it has definitively lost the race."[33]

Anonymous remailers such as the Mixmaster Remailer were almost entirely a cypherpunk development.[34] Other cypherpunk-related projects include PGP for email privacy,[35] FreeS/WAN for opportunistic encryption of the whole net, Off-the-record messaging for privacy in Internet chat, and the Tor project for anonymous web surfing.

In 1998, the Electronic Frontier Foundation, with assistance from the mailing list, built a $200,000 machine that could brute-force a Data Encryption Standard key in a few days.[36] The project demonstrated that DES was, without question, insecure and obsolete, in sharp contrast to the US government's recommendation of the algorithm.

Cypherpunks also participated, along with other experts, in several reports on cryptographic matters. One such paper was "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security".[37] It suggested 75 bits was the minimum key size to allow an existing cipher to be considered secure and kept in service. At the time, the Data Encryption Standard with 56-bit keys was still a US government standard, mandatory for some applications. Other papers were critical analyses of government schemes: "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption"[38] evaluated escrowed encryption proposals, and "Comments on the Carnivore System Technical Review"[39] looked at an FBI scheme for monitoring email.

Cypherpunks provided significant input to the 1996 National Research Council report on encryption policy, Cryptography's Role In Securing the Information Society (CRISIS).[40] This report, commissioned by the U.S. Congress in 1993, was developed via extensive hearings across the nation from all interested stakeholders, by a committee of talented people. It recommended a gradual relaxation of the existing U.S. government restrictions on encryption. Like many such study reports, its conclusions were largely ignored by policy-makers. Later events such as the final rulings in the cypherpunks lawsuits forced a more complete relaxation of the unconstitutional controls on encryption software.

Cypherpunks have filed a number of lawsuits, mostly suits against the US government alleging that some government action is unconstitutional. Phil Karn sued the State Department in 1994 over cryptography export controls[41] after they ruled that, while the book Applied Cryptography[42] could legally be exported, a floppy disk containing a verbatim copy of code printed in the book was legally a munition and required an export permit, which they refused to grant. Karn also appeared before both House and Senate committees looking at cryptography issues. Daniel J. Bernstein, supported by the EFF, also sued over the export restrictions, arguing that preventing publication of cryptographic source code is an unconstitutional restriction on freedom of speech.
He won, effectively overturning the export law. See Bernstein v. United States for details. Peter Junger also sued on similar grounds, and won.[citation needed][43]

Cypherpunks encouraged civil disobedience, in particular against US law on the export of cryptography.[citation needed] Until 1997, cryptographic code was legally a munition and fell under ITAR, and the key length restrictions in the EAR were not removed until 2000.[44]

In 1995 Adam Back wrote a version of the RSA algorithm for public-key cryptography in three lines of Perl[45][46] and suggested people use it as an email signature file:

Vince Cate put up a web page that invited anyone to become an international arms trafficker; every time someone clicked on the form, an export-restricted item, originally PGP, later a copy of Back's program, would be mailed from a US server to one in Anguilla.[47][48][49]

In Neal Stephenson's novel Cryptonomicon many characters are on the "Secret Admirers" mailing list. This is fairly obviously based on the cypherpunks list, and several well-known cypherpunks are mentioned in the acknowledgements. Much of the plot revolves around cypherpunk ideas; the leading characters are building a data haven which will allow anonymous financial transactions, and the book is full of cryptography. But, according to the author,[50] the book's title is, in spite of its similarity, not based on the Cyphernomicon,[27] an online cypherpunk FAQ document.

Cypherpunk achievements would later also be used on the Canadian e-wallet, the MintChip, and the creation of bitcoin. It was an inspiration for CryptoParty decades later, to such an extent that A Cypherpunk's Manifesto is quoted at the header of its Wiki,[51] and Eric Hughes delivered the keynote address at the Amsterdam CryptoParty on 27 August 2012.

Cypherpunks list participants included many notable computer industry figures. Most were list regulars, although not all would call themselves "cypherpunks".[52] The following is a list of noteworthy cypherpunks and their achievements:

* indicates someone mentioned in the acknowledgements of Stephenson's Cryptonomicon.
https://en.wikipedia.org/wiki/Cypherpunk
An identity provider (abbreviated IdP or IDP) is a system entity that creates, maintains, and manages identity information for principals and also provides authentication services to relying applications within a federation or distributed network.[1] Identity providers offer user authentication as a service. Relying party applications, such as web applications, outsource the user authentication step to a trusted identity provider. Such a relying party application is said to be federated, that is, it consumes federated identity.

An identity provider is "a trusted provider that lets you use single sign-on (SSO) to access other websites." SSO enhances usability by reducing password fatigue. It also provides better security by decreasing the potential attack surface. Identity providers can facilitate connections between cloud computing resources and users, thus decreasing the need for users to re-authenticate when using mobile and roaming applications.[citation needed]

OpenID Connect (OIDC) is an identity layer on top of OAuth. In the domain model associated with OIDC, an identity provider is a special type of OAuth 2.0 authorization server. Specifically, a system entity called an OpenID Provider issues JSON-formatted identity tokens to OIDC relying parties via a RESTful HTTP API.

The Security Assertion Markup Language (SAML) is a set of profiles for exchanging authentication and authorization data across security domains. In the SAML domain model, an identity provider is a special type of authentication authority. Specifically, a SAML identity provider is a system entity that issues authentication assertions in conjunction with an SSO profile of SAML. A relying party that consumes these authentication assertions is called a SAML service provider.[citation needed]
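To make the OIDC relationship concrete, the sketch below shows the shape of the JSON identity-token payload an OpenID Provider might issue and the basic claim checks a relying party performs. It is illustrative only: the issuer URL, client ID, and field values are hypothetical, and a real deployment must first verify the token's cryptographic signature with a JOSE/JWT library before trusting any claims.

```python
import json
import time

# A hypothetical, already signature-verified OIDC ID-token payload.
# All values are made up for illustration.
id_token_payload = json.loads("""{
    "iss": "https://idp.example.com",
    "sub": "user-12345",
    "aud": "my-client-id",
    "exp": 9999999999,
    "iat": 1700000000
}""")

def validate_claims(payload: dict, issuer: str, client_id: str) -> bool:
    """Minimal relying-party checks: expected issuer, intended
    audience, and expiry time. Signature checking is out of scope."""
    return (
        payload.get("iss") == issuer
        and payload.get("aud") == client_id
        and payload.get("exp", 0) > time.time()
    )

assert validate_claims(id_token_payload,
                       "https://idp.example.com", "my-client-id")
```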
https://en.wikipedia.org/wiki/Identity_provider
A graphoid is a set of statements of the form "X is irrelevant to Y given that we know Z", where X, Y and Z are sets of variables. The notions of "irrelevance" and "given that we know" may obtain different interpretations, including probabilistic, relational and correlational, depending on the application. These interpretations share common properties that can be captured by paths in graphs (hence the name "graphoid"). The theory of graphoids characterizes these properties in a finite set of axioms that are common to informational irrelevance and its graphical representations.

Judea Pearl and Azaria Paz[1] coined the term "graphoids" after discovering that a set of axioms that govern conditional independence in probability theory is shared by undirected graphs. Variables are represented as nodes in a graph in such a way that variable sets X and Y are independent conditioned on Z in the distribution whenever node set Z separates X from Y in the graph. Axioms for conditional independence in probability were derived earlier by A. Philip Dawid[2] and Wolfgang Spohn.[3] The correspondence between dependence and graphs was later extended to directed acyclic graphs (DAGs)[4][5][6] and to other models of dependency.[1][7]

A dependency model M is a subset of triplets (X, Z, Y) for which the predicate I(X, Z, Y), "X is independent of Y given Z", is true. A graphoid is defined as a dependency model that is closed under the following five axioms:

1. Symmetry: $I(X,Z,Y) \Rightarrow I(Y,Z,X)$
2. Decomposition: $I(X,Z,Y \cup W) \Rightarrow I(X,Z,Y)$
3. Weak union: $I(X,Z,Y \cup W) \Rightarrow I(X,Z \cup W,Y)$
4. Contraction: $I(X,Z,Y)$ and $I(X,Z \cup Y,W) \Rightarrow I(X,Z,Y \cup W)$
5. Intersection: $I(X,Z \cup W,Y)$ and $I(X,Z \cup Y,W) \Rightarrow I(X,Z,Y \cup W)$

A semi-graphoid is a dependency model closed under axioms 1–4. These five axioms together are known as the graphoid axioms.[8] Intuitively, the weak union and contraction properties mean that irrelevant information should not alter the relevance status of other propositions in the system; what was relevant remains relevant and what was irrelevant remains irrelevant.[8]

Conditional independence, defined as

$I(X,Z,Y) \iff P(x \mid z, y) = P(x \mid z)$ whenever $P(z, y) > 0$,

is a semi-graphoid which becomes a full graphoid when P is strictly positive.[1][7]

A dependency model is a correlational graphoid if in some probability function we have

$I_c(X,Z,Y) \iff \rho_{xy.z} = 0$ for every $x \in X$ and $y \in Y$,

where $\rho_{xy.z}$ is the partial correlation between x and y given set Z. In other words, the linear estimation error of the variables in X using measurements on Z would not be reduced by adding measurements of the variables in Y, thus making Y irrelevant to the estimation of X. Correlational and probabilistic dependency models coincide for normal distributions.[1][7]

A dependency model is a relational graphoid if it satisfies the condition that whenever the value combinations (x, z) and (y, z) are each possible, the joint combination (x, y, z) is possible as well. In words, the range of values permitted for X is not restricted by the choice of Y, once Z is fixed. Independence statements belonging to this model are similar to embedded multi-valued dependencies (EMVDs) in databases.[1][7]

If there exists an undirected graph G such that $I(X,Z,Y) \Leftrightarrow \langle X,Z,Y \rangle_G$, where $\langle X,Z,Y \rangle_G$ means that the node set Z separates X from Y in G, then the graphoid is called graph-induced. In other words, there exists an undirected graph G such that every independence statement in M is reflected as a vertex separation in G and vice versa. A necessary and sufficient condition for a dependency model to be a graph-induced graphoid is that it satisfies the following axioms: symmetry, decomposition, intersection, strong union and transitivity.
Strong union states that

$I(X,Z,Y) \Rightarrow I(X,Z \cup W,Y)$

Transitivity states that

$I(X,Z,Y) \Rightarrow I(X,Z,\gamma)$ or $I(\gamma,Z,Y)$

for every single variable $\gamma$ not in $X \cup Y \cup Z$. The axioms symmetry, decomposition, intersection, strong union and transitivity constitute a complete characterization of undirected graphs.[9]

A graphoid is termed DAG-induced if there exists a directed acyclic graph D such that $I(X,Z,Y) \Leftrightarrow \langle X,Z,Y \rangle_D$, where $\langle X,Z,Y \rangle_D$ stands for d-separation in D. d-separation (the d connotes "directional") extends the notion of vertex separation from undirected graphs to directed acyclic graphs. It permits the reading of conditional independencies from the structure of Bayesian networks. However, conditional independencies in a DAG cannot be completely characterized by a finite set of axioms.[10]

Graph-induced and DAG-induced graphoids are both contained in probabilistic graphoids.[11] This means that for every graph G there exists a probability distribution P such that every conditional independence in P is represented in G, and vice versa. The same is true for DAGs. However, there are probabilistic distributions that are not graphoids and, moreover, there is no finite axiomatization for probabilistic conditional dependencies.[12]

Thomas Verma showed that every semi-graphoid has a recursive way of constructing a DAG in which every d-separation is valid.[13] The construction is similar to that used in Bayes networks and goes as follows:

1. Arrange the variables in some arbitrary order 1, 2, ..., n.
2. For each variable i, choose a minimal set of its predecessors that renders i independent of all its remaining predecessors.
3. Draw arrows to i from each variable in that minimal set, which becomes the parent set of i.

The DAG created by this construction will represent all the conditional independencies that follow from those used in the construction. Furthermore, every d-separation shown in the DAG will be a valid conditional independence in the graphoid used in the construction.
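As a rough illustration of the semi-graphoid axioms acting as inference rules, the sketch below computes the closure of a set of independence triplets under symmetry, decomposition, weak union, and contraction. All names are ours; variable sets are modelled as frozensets, and decomposition and weak union are applied one element at a time, which generates the same closure as the set-valued forms.

```python
from itertools import product

def semigraphoid_closure(triplets):
    """Close a set of triplets I(X, Z, Y), each given as a tuple
    (frozenset, frozenset, frozenset), under the four semi-graphoid
    axioms. A brute-force sketch; exponential in general."""
    closed = set(triplets)
    changed = True
    while changed:
        changed = False
        new = set()
        for (x, z, y) in closed:
            new.add((y, z, x))                     # symmetry
            for w in y:
                rest = y - {w}
                if rest:
                    new.add((x, z, rest))          # decomposition
                    new.add((x, z | {w}, rest))    # weak union
        for (x1, z1, y1), (x2, z2, w2) in product(closed, repeat=2):
            # contraction: I(X,Z,Y) and I(X,Z∪Y,W) imply I(X,Z,Y∪W)
            if x1 == x2 and z2 == z1 | y1:
                new.add((x1, z1, y1 | w2))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

A, B, C, D = (frozenset(s) for s in ("a", "b", "c", "d"))
closure = semigraphoid_closure({(A, C, B | D)})
assert (B, C, A) in closure          # via decomposition and symmetry
assert (A, C | D, B) in closure      # via weak union
```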
https://en.wikipedia.org/wiki/Graphoid
In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for." Some such legal rights already exist, while the scope of a general "right to explanation" is a matter of ongoing debate.

There have been arguments made that a "social right to explanation" is a crucial foundation for an information society, particularly as the institutions of that society will need to use digital technologies, artificial intelligence, and machine learning.[1] In other words, the related automated decision-making systems that use explainability would be more trustworthy and transparent. Without this right, which could be constituted both legally and through professional standards, the public will be left without much recourse to challenge the decisions of automated systems.

Under the Equal Credit Opportunity Act (Regulation B of the Code of Federal Regulations), Title 12, Chapter X, Part 1002, §1002.9, creditors are required to notify applicants who are denied credit with specific reasons for the denial. As detailed in §1002.9(b)(2):[2]

(2) Statement of specific reasons. The statement of reasons for adverse action required by paragraph (a)(2)(i) of this section must be specific and indicate the principal reason(s) for the adverse action. Statements that the adverse action was based on the creditor's internal standards or policies or that the applicant, joint applicant, or similar party failed to achieve a qualifying score on the creditor's credit scoring system are insufficient.

The official interpretation of this section details what types of statements are acceptable. Creditors comply with this regulation by providing a list of reasons (generally at most 4, per interpretation of regulations), consisting of a numeric reason code (as identifier) and an associated explanation, identifying the main factors affecting a credit score.[3] An example might be:[4]

The European Union General Data Protection Regulation (enacted 2016, taking effect 2018) extends the automated decision-making rights in the 1995 Data Protection Directive to provide a legally disputed form of a right to an explanation, stated as such in Recital 71: "[the data subject should have] the right ... to obtain an explanation of the decision reached". In full:

The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention. ...
In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.

However, the extent to which the regulations themselves provide a "right to explanation" is heavily debated.[5][6][7] There are two main strands of criticism. First, there are significant legal issues with the right as found in Article 22: recitals are not binding, and the right to an explanation is not mentioned in the binding articles of the text, having been removed during the legislative process.[6] In addition, there are significant restrictions on the types of automated decisions that are covered, which must be both "solely" based on automated processing and have legal or similarly significant effects; this significantly limits the range of automated systems and decisions to which the right would apply.[6] In particular, the right is unlikely to apply in many of the cases of algorithmic controversy that have been picked up in the media.[8]

A second potential source of such a right has been pointed to in Article 15, the "right of access by the data subject". This restates a similar provision from the 1995 Data Protection Directive, allowing the data subject access to "meaningful information about the logic involved" in the same significant, solely automated decision-making found in Article 22. Yet this too suffers from alleged challenges that relate to the timing of when this right can be drawn upon, as well as practical challenges that mean it may not be binding in many cases of public concern.[6]

Other EU legislative instruments contain explanation rights. The European Union's Artificial Intelligence Act provides in Article 86 a "[r]ight to explanation of individual decision-making" for certain high-risk systems which produce significant, adverse effects on an individual's health, safety or fundamental rights.[9] The right provides for "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken", although it only applies to the extent other law does not provide such a right.

The Digital Services Act in Article 27, and the Platform to Business Regulation in Article 5,[10] both contain rights to have the main parameters of certain recommender systems made clear, although these provisions have been criticised as not matching the way that such systems work.[11] The Platform Work Directive, which provides for regulation of automation in gig economy work as an extension of data protection law, further contains explanation provisions in Article 11,[12] using the specific language of "explanation" in a binding article rather than a recital, as is the case in the GDPR. Scholars note that uncertainty remains as to whether these provisions imply sufficiently tailored explanation in practice, which will need to be resolved by courts.[13]

In France, the 2016 Loi pour une République numérique (Digital Republic Act or loi numérique) amends the country's administrative code to introduce a new provision for the explanation of decisions made by public sector bodies about individuals.[14] It notes that where there is "a decision taken on the basis of an algorithmic treatment", the rules that define that treatment and its "principal characteristics" must be communicated to the citizen upon request, where there is not an exclusion (e.g.
for national security or defence). These should include the degree and mode of contribution of the algorithmic processing to the decision; the data processed and their sources; the treatment parameters and, where relevant, their weighting, as applied to the individual's situation; and the operations carried out by the treatment. Scholars have noted that this right, while limited to administrative decisions, goes beyond the GDPR right in explicitly applying to decision support rather than only to decisions "solely" based on automated processing, as well as in providing a framework for explaining specific decisions.[14] Indeed, the GDPR automated decision-making rights in the European Union, one of the jurisdictions in which a "right to an explanation" has been sought, find their origins in French law of the late 1970s.[15] Some argue that a "right to explanation" is at best unnecessary, at worst harmful, and threatens to stifle innovation. Specific criticisms include: favoring human decisions over machine decisions, being redundant with existing laws, and focusing on process over outcome.[16] Lilian Edwards and Michael Veale, authors of the study "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For", argue that a right to explanation is not the solution to harms caused to stakeholders by algorithmic decisions. They also state that the right of explanation in the GDPR is narrowly defined and is not compatible with how modern machine learning technologies are being developed. With these limitations, defining transparency within the context of algorithmic accountability remains a problem. For example, providing the source code of algorithms may not be sufficient and may create other problems in terms of privacy disclosures and the gaming of technical systems. To mitigate this issue, Edwards and Veale argue that an auditing system could be more effective, allowing auditors to look at the inputs and outputs of a decision process from an external shell; in other words, "explaining black boxes without opening them."[8] Similarly, Oxford scholars Bryce Goodman and Seth Flaxman assert that the GDPR creates a "right to explanation", but do not elaborate much beyond that point, stating the limitations in the current GDPR. In regard to this debate, scholars Andrew D. Selbst and Julia Powles state that rather than fixating on whether or not one uses the phrase "right to explanation", more attention must be paid to the GDPR's express requirements and how they relate to its background goals, and more thought must be given to determining what the legislative text actually means.[17] More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of a deep neural network depends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field of explainable AI seeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field.[18][19] Others argue that the difficulties with explainability are due to its overly narrow focus on technical solutions rather than connecting the issue to the wider questions raised by a "social right to explanation."[1] Edwards and Veale see the right to explanation as providing some grounds for explanations about specific decisions. They discuss two types of algorithmic explanations, model-centric explanations and subject-centric explanations (SCEs), which are broadly aligned with explanations about systems or about decisions.[8] SCEs are seen as the best way to provide for some remedy, although with some severe constraints if the data is just too complex.
Their proposal is to break down the full model and focus on particular issues through pedagogical explanations to a particular query, "which could be real or could be fictitious or exploratory". These explanations will necessarily involve trade-offs with accuracy in order to reduce complexity. With growing interest in explanation of technical decision-making systems in the field of human-computer interaction design, researchers and designers have put effort into opening the black box, but often in terms of mathematically interpretable models that are removed from cognitive science and the actual needs of people. An alternative approach would be to allow users to explore the system's behavior freely through interactive explanations. One of Edwards and Veale's proposals is to partially de-emphasize transparency as a necessary key step towards accountability and redress. They argue that people trying to tackle data protection issues have a desire for an action, not for an explanation. The actual value of an explanation will not be to relieve or redress the emotional or economic damage suffered, but to make clear why something happened and to help ensure that a mistake does not happen again.[8] On a broader scale, in the study Explainable Machine Learning in Deployment, the authors recommend building an explainable framework that clearly establishes the desiderata by identifying the stakeholders, engaging with them, and understanding the purpose of the explanation. Alongside this, concerns of explainability such as causality, privacy, and performance improvement must be considered in the design of the system.[20]
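Edwards and Veale's pedagogical, subject-centric style of explanation can be made concrete with a small sketch. The following Python fragment is only an illustration under invented assumptions: the scoring function, feature names, probe values, approval threshold, and reason codes are all hypothetical, not drawn from any real lender or from the works cited above. It treats the model as a black box, probes it with exploratory queries that each "repair" one input, and reports the principal factors as ECOA-style reason statements.

def credit_model(applicant):
    # Stand-in for an opaque scoring function (e.g. a neural network);
    # the explanation code below never looks inside it. Numbers invented.
    score = 650
    score -= 120 * applicant["recent_bankruptcy"]
    score -= 2 * applicant["utilization_pct"]
    score += 0.5 * applicant["months_of_history"]
    return score

APPROVAL_THRESHOLD = 620

# Counterfactual probes: for each feature, a "repaired" value to try.
PROBES = {
    "recent_bankruptcy": 0,
    "utilization_pct": 30,
    "months_of_history": 120,
}

# Numeric reason codes with associated explanations, ECOA-style.
REASONS = {
    "recent_bankruptcy": (1, "Bankruptcy reported within the last year"),
    "utilization_pct": (2, "Proportion of balances to credit limits is too high"),
    "months_of_history": (3, "Length of credit history is too short"),
}

def explain_denial(applicant, max_reasons=4):
    base = credit_model(applicant)
    if base >= APPROVAL_THRESHOLD:
        return []  # approved: nothing to explain
    impacts = []
    for feature, repaired in PROBES.items():
        probe = dict(applicant, **{feature: repaired})  # fictitious, exploratory query
        impacts.append((credit_model(probe) - base, feature))
    impacts.sort(reverse=True)  # largest score improvement first
    return [REASONS[f] for gain, f in impacts[:max_reasons] if gain > 0]

applicant = {"recent_bankruptcy": 1, "utilization_pct": 80, "months_of_history": 24}
for code, text in explain_denial(applicant):
    print(f"Reason {code}: {text}")

Each probe is a query "which could be real or could be fictitious or exploratory"; the resulting explanation concerns one decision for one subject, not the model's global behaviour, which is exactly the trade-off with accuracy noted above.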
https://en.wikipedia.org/wiki/Right_to_explanation
Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments[5] and private institutions.[6] When an individual such as an author or other creator engages in censorship of their own works or speech, it is referred to as self-censorship. General censorship occurs in a variety of different media, including speech, books, music, films, and other arts, the press, radio, television, and the Internet, for a variety of claimed reasons including national security, to control obscenity, pornography, and hate speech, to protect children or other vulnerable groups, to promote or restrict political or religious views, and to prevent slander and libel. Specific rules and regulations regarding censorship vary between legal jurisdictions and/or private organizations. Socrates, while defying attempts by the Athenian state to censor his philosophical teachings, was brought up on charges that led to his death. The conviction is recorded by Plato: in 399 BC, Socrates went on trial[8] and was subsequently found guilty both of corrupting the minds of the youth of Athens and of impiety (asebeia,[9] "not believing in the gods of the state"),[10] and was sentenced to death by hemlock.[11][12][13] Socrates' student, Plato, is said to have advocated censorship in his essay on The Republic, which opposed the existence of democracy. In contrast to Plato, the Greek playwright Euripides (480–406 BC) defended the true liberty of freeborn men, including the right to speak freely. In 1766, Sweden became the first country to abolish censorship by law.[14] Censorship has been criticized throughout history for being unfair and hindering progress.[citation needed] In a 1997 essay on Internet censorship, social commentator Michael Landier explains that censorship is counterproductive as it prevents the censored topic from being discussed. Landier expands his argument by claiming that those who impose censorship must consider what they censor to be true, as individuals believing themselves to be correct would welcome the opportunity to disprove those with opposing views.[15] Censorship is often used to impose moral values on society, as in the censorship of material considered obscene. English novelist E. M. Forster was a staunch opponent of censoring material on the grounds that it was obscene or immoral, raising the issue of moral subjectivity and the constant changing of moral values. When the 1928 novel Lady Chatterley's Lover was put on trial in 1960, Forster wrote:[16] Lady Chatterley's Lover is a literary work of importance... I do not think that it could be held obscene, but am in a difficulty here, for the reason that I have never been able to follow the legal definition of obscenity. The law tells me that obscenity may deprave and corrupt, but as far as I know, it offers no definition of depravity or corruption. Proponents have sought to justify censorship using different rationales for the various types of information censored. In wartime, explicit censorship is carried out with the intent of preventing the release of information that might be useful to an enemy. Typically it involves keeping times or locations secret, or delaying the release of information (e.g., an operational objective) until it is of no possible use to enemy forces.
The moral issues here are often seen as somewhat different, as the proponents of this form of censorship argue that the release of tactical information usually presents a greater risk of casualties among one's own forces and could possibly lead to loss of the overall conflict.[citation needed] During World War I, letters written by British soldiers had to go through censorship. This consisted of officers going through letters with a black marker and crossing out anything which might compromise operational secrecy before the letter was sent.[22] The World War II catchphrase "Loose lips sink ships" was used as a common justification for exercising official wartime censorship and encouraging individual restraint when sharing potentially sensitive information.[23] An example of "sanitization" policies comes from the USSR under Joseph Stalin, where publicly used photographs were often altered to remove people whom Stalin had condemned to execution. Though past photographs may have been remembered or kept, this deliberate and systematic alteration of all of history in the public mind is seen as one of the central themes of Stalinism and totalitarianism.[citation needed] Censorship is occasionally carried out to aid authorities or to protect an individual, as with some kidnappings, when attention and media coverage of the victim can sometimes be seen as unhelpful.[24] Religious censorship is a form of censorship where freedom of expression is controlled or limited using religious authority or on the basis of the teachings of the religion.[25] This form of censorship has a long history and is practiced in many societies and by many religions. Examples include the Galileo affair, the Edict of Compiègne, the Index Librorum Prohibitorum (list of prohibited books) and the condemnation of Salman Rushdie's novel The Satanic Verses by Iranian leader Ayatollah Ruhollah Khomeini. Images of the Islamic figure Muhammad are also regularly censored. In some secular countries, this is sometimes done to prevent hurting religious sentiments.[26] The content of school textbooks is often an issue of debate, since their target audiences are young people. The term whitewashing is commonly used to refer to revisionism aimed at glossing over difficult or questionable historical events, or a biased presentation thereof. The reporting of military atrocities in history is extremely controversial, as in the case of the Holocaust (or Holocaust denial), the Bombing of Dresden, the Nanking Massacre as found with Japanese history textbook controversies, the Armenian genocide, the Tiananmen Square protests of 1989, and the Winter Soldier Investigation of the Vietnam War. In the context of secondary school education, the way facts and history are presented greatly influences the interpretation of contemporary thought, opinion and socialization. One argument for censoring the type of information disseminated is based on the inappropriate quality of such material for the younger public. The use of the "inappropriate" distinction is in itself controversial, as judgments of what is inappropriate have changed heavily over time. A Ballantine Books version of the book Fahrenheit 451, which is the version used by most school classes,[27] contained approximately 75 separate edits, omissions, and changes from the original Bradbury manuscript. In February 2006, a National Geographic cover was censored by the Nashravaran Journalistic Institute.
The offending cover was about the subject of love, and a picture of an embracing couple was hidden beneath a white sticker.[28] Economically induced censorship is a type of censorship enacted by economic markets to favor certain types of information and disregard others. It is also caused by market forces which privatize and commodify certain information that is not accessible to the general public, primarily because of the cost associated with commodified information such as academic journals, industry reports and pay-to-use repositories.[29] The concept was illustrated as a censorship pyramid[30] conceptualized primarily by Julian Assange, along with Andy Müller-Maguhn, Jacob Appelbaum and Jérémie Zimmermann, in the book Cypherpunks. Self-censorship is the act of censoring or classifying one's own discourse. This is done out of fear of, or deference to, the sensibilities or preferences (actual or perceived) of others, and without overt pressure from any specific party or institution of authority. Self-censorship is often practiced by film producers, film directors, publishers, news anchors, journalists, musicians, and other kinds of authors, including individuals who use social media.[32] According to a Pew Research Center and Columbia Journalism Review survey, "About one-quarter of the local and national journalists say they have purposely avoided newsworthy stories, while nearly as many acknowledge they have softened the tone of stories to benefit the interests of their news organizations. Fully four-in-ten (41%) admit they have engaged in either or both of these practices."[33] Threats to media freedom have shown a significant increase in Europe in recent years, according to a study published in April 2017 by the Council of Europe. This results in a fear of physical or psychological violence, and the ultimate result is self-censorship by journalists.[34] Copy approval is the right to read and amend an article, usually an interview, before publication. Many publications refuse to give copy approval, but it is increasingly becoming common practice when dealing with publicity-anxious celebrities.[35] Picture approval is the right given to an individual to choose which photos will be published and which will not. Robert Redford is well known for insisting upon picture approval. Writer approval is when writers are chosen based on whether they will write flattering articles or not. Hollywood publicist Pat Kingsley is known for banning certain writers who wrote undesirably about one of her clients from interviewing any of her other clients.[36] Flooding the public, often through online social networks, with false or misleading information is sometimes called "reverse censorship". American legal scholar Tim Wu has explained that this type of information control, sometimes by state actors, can "distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots."[37] Soft censorship or indirect censorship is the practice of influencing news coverage by applying financial pressure on media companies that are deemed critical of a government or its policies and rewarding media outlets and individual journalists who are seen as friendly to the government.[38] Book censorship can be enacted at the national or sub-national level, and can carry legal penalties for its infraction. Books may also be challenged at a local, community level.
As a result, books can be removed from schools or libraries, although these bans do not typically extend outside of that area. Aside from the usual justifications of pornography and obscenity, some films are censored due to changing racial attitudes or political correctness in order to avoid ethnic stereotyping and/or ethnic offense, despite their historical or artistic value. One example is the still-withdrawn "Censored Eleven" series of animated cartoons, which may have been considered innocent then, but are deemed "incorrect" now.[39] Film censorship is carried out by various countries, whether by censoring the producer or by restricting a state's citizens. For example, in China the film industry censors LGBT-related films; filmmakers must resort to finding funds from international investors such as the Ford Foundation, or produce through an independent film company.[40] Music censorship has been implemented by states, religions, educational systems, families, retailers and lobbying groups, and in most cases it violates international conventions of human rights.[41] Censorship of maps is often employed for military purposes. For example, the technique was used in former East Germany, especially for the areas near the border to West Germany, in order to make attempts at defection more difficult. Censorship of maps is also applied by Google Maps, where certain areas are grayed out or blacked out, or purposely left outdated with old imagery.[42] Art is loved and feared because of its evocative power, and destroying or oppressing art can potentially amplify its meaning even more.[43] British photographer and visual artist Graham Ovenden's photos and paintings were ordered to be destroyed by a London magistrates' court in 2015 for being "indecent",[44] and their copies had been removed from the online Tate gallery.[45] A 1980 Israeli law forbade artwork composed of the four colours of the Palestinian flag,[46] and Palestinians were arrested for displaying such artwork or even for carrying sliced melons with the same pattern.[47][48][49] Moath al-Alwi is a Guantanamo Bay prisoner who creates model ships as an expression of art. Alwi does so with the few tools he has at his disposal, such as dental floss and shampoo bottles, and he is also allowed to use a small pair of scissors with rounded edges.[50] A few of Alwi's pieces are on display at John Jay College of Criminal Justice in New York, alongside other artworks created by other inmates. The artwork being displayed might be the only way for some of the inmates to communicate with the outside world. Recently, though, things have changed: the military has introduced a new policy that does not allow the artwork at Guantanamo Bay Military Prison to leave the prison. The artwork created by Alwi and other prisoners is now government property and can be destroyed or disposed of in whatever way the government chooses, making it no longer the artist's property.[51] Around 300 artists in Cuba are fighting for their artistic freedom due to new censorship rules Cuba's government has in place for artists. In December 2018, following the introduction of new rules that would ban music performances and artwork not authorized by the state, performance artist Tania Bruguera was detained upon arriving in Havana and released after four days.[52] An example of extreme state censorship was the Nazi regime's requirement that art be used as propaganda.
Art was only allowed to be used as a political instrument to control people, and failure to act in accordance with the censors was punishable by law, and could even be fatal. The Degenerate Art Exhibition was a historical instance of this, the goal of which was to advertise Nazi values and slander others.[53] Internet censorship is control or suppression of the publishing or accessing of information on the Internet. It may be carried out by governments or by private organizations either at the behest of the government or on their own initiative. Individuals and organizations may engage in self-censorship on their own or due to intimidation and fear. The issues associated with Internet censorship are similar to those for offline censorship of more traditional media. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering.[59] Furthermore, the Domain Name System (DNS), a critical component of the Internet, is dominated by a few centralized entities. The most widely used DNS root is administered by the Internet Corporation for Assigned Names and Numbers (ICANN).[60][61] As administrator, it has the right to shut down and seize domain names when it deems this necessary, and in most cases the direction comes from governments. This has been the case with the Wikileaks shutdowns[62] and name seizure events such as the ones executed by the National Intellectual Property Rights Coordination Center (IPR Center) managed by Homeland Security Investigations (HSI).[63] This makes Internet censorship easy for authorities, as they have control over what should or should not be on the Internet. Some activists and researchers have started opting for alternative DNS roots, though the Internet Architecture Board (IAB)[64] does not support these DNS root providers. Unless the censor has total control over all Internet-connected computers, such as in North Korea or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) protect free speech using technologies that guarantee material cannot be removed and prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system.[59] Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies. A BBC World Service poll of 27,973 adults in 26 countries, including 14,306 Internet users,[68] was conducted between 30 November 2009 and 7 February 2010.
The head of the polling organization felt, overall, that the poll showed strong public attachment to an open Internet: nearly four in five (78%) Internet users felt that the Internet had brought them greater freedom; most Internet users (53%) felt that "the internet should never be regulated by any level of government anywhere"; and almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right (50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion).[70] The rising use of social media in many nations has led to the emergence of citizens organizing protests through social media, sometimes called "Twitter Revolutions". The most notable of these social media-led protests were the Arab Spring uprisings, starting in 2010. In response to the use of social media in these protests, the Tunisian government began a hack of Tunisian citizens' Facebook accounts, and reports arose of accounts being deleted.[71] Automated systems can be used to censor social media posts, and therefore limit what citizens can say online. This most notably occurs in China, where social media posts are automatically censored depending on content. In 2013, Harvard political science professor Gary King led a study to determine what caused social media posts to be censored and found that posts mentioning the government were not more or less likely to be deleted whether they were supportive or critical of the government. Posts mentioning collective action were more likely to be deleted than those that did not mention collective action.[72] Currently, social media censorship appears primarily as a way to restrict Internet users' ability to organize protests. For the Chinese government, seeing citizens unhappy with local governance is beneficial, as state and national leaders can replace unpopular officials. King and his researchers were able to predict when certain officials would be removed based on the number of unfavorable social media posts.[73] Research has shown that criticism is tolerated on social media sites and is not censored unless it carries a higher chance of collective action. Whether the criticism is supportive or unsupportive of the state's leaders does not matter; the main priority of censoring certain social media posts is to make sure that no large actions arise from something that was said on the internet. Posts that challenge the Party's political leading role in the Chinese government are more likely to be censored due to the challenge they pose to the Chinese Communist Party.[74] In December 2022, Elon Musk, owner and CEO of Twitter, released internal documents from the social media microblogging site to journalists Matt Taibbi, Michael Shellenberger and Bari Weiss. The analysis of these files, collectively called the Twitter Files, explored the content moderation and visibility filtering carried out in collaboration with the Federal Bureau of Investigation on the Hunter Biden laptop controversy. On the platform TikTok, certain hashtags have been categorized by the platform's code, and this categorization determines how viewers can or cannot interact with the content or hashtag specifically. Some shadowbanned tags include #acab, #GayArab, and #gej, due to their referencing of certain social movements and LGBTQ identity.
As TikTok guidelines become more localized around the world, some experts believe that this could result in more censorship than before.[75] Since the early 1980s, advocates of video games have emphasized their use as an expressive medium, arguing for their protection under the laws governing freedom of speech and also for their value as an educational tool. Detractors argue that video games are harmful and therefore should be subject to legislative oversight and restrictions. Many video games have certain elements removed or edited due to regional rating standards.[76][77] For example, in the Japanese and PAL versions of No More Heroes, blood splatter and gore are removed from the gameplay. Decapitation scenes are implied, but not shown. Scenes of missing body parts after having been cut off are replaced with the same scene, but with the body parts fully intact.[78] Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance.[79] Even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can have a "chilling effect" and lead to self-censorship.[80] The former Soviet Union maintained a particularly extensive program of state-imposed censorship. The main organ for official censorship in the Soviet Union was the Chief Agency for Protection of Military and State Secrets, generally known by its Russian acronym, Glavlit. The Glavlit handled censorship matters arising from domestic writings of just about any kind, even beer and vodka labels. Glavlit censorship personnel were present in every large Soviet publishing house or newspaper; the agency employed some 70,000 censors to review information before it was disseminated by publishing houses, editorial offices, and broadcasting studios. No mass medium escaped Glavlit's control. All press agencies and radio and television stations had Glavlit representatives on their editorial staffs.[81] Sometimes, public knowledge of the existence of a specific document is subtly suppressed, a situation resembling censorship. The authorities taking such action will justify it by declaring the work to be "subversive" or "inconvenient". An example is Michel Foucault's 1978 text Sexual Morality and the Law (later republished as The Danger of Child Sexuality), originally published as La loi de la pudeur [literally, "the law of decency"]. This work defends the decriminalization of statutory rape and the abolition of age of consent laws.[citation needed] When a publisher comes under pressure to suppress a book, but has already entered into a contract with the author, they will sometimes effectively censor the book by deliberately ordering a small print run and making minimal, if any, attempts to publicize it. This practice became known in the early 2000s as privishing (private publishing).[82] Censorship for individual countries is measured by the Freedom House (FH) Freedom of the Press report,[84] the Reporters Without Borders (RWB) Press Freedom Index,[85] and the V-Dem government censorship effort index. Censorship aspects are measured by Freedom on the Net[54] and OpenNet Initiative (ONI) classifications.[83] Censorship by country collects information on censorship, internet censorship, press freedom, freedom of speech, and human rights by country and presents it in a sortable table, together with links to articles with more information.
In addition to countries, the table includes information on former countries, disputed countries, political sub-units within countries, and regional organizations. In French-speaking Belgium, politicians considered far-right are banned from live media appearances such as interviews or debates.[86][87] Very little is formally censored in Canada, aside from "obscenity" (as defined in the landmark criminal case of R v Butler), which is generally limited to pornography and child pornography depicting and/or advocating non-consensual sex, sexual violence, degradation, or dehumanization, in particular that which causes harm (as in R v Labaye). Most films are simply subject to classification by the British Columbia Film Classification Office under the non-profit Crown corporation Consumer Protection BC, whose classifications are officially used by the provinces of British Columbia, Saskatchewan, Ontario, and Manitoba.[88] Cuban media used to be operated under the supervision of the Communist Party's Department of Revolutionary Orientation, which "develops and coordinates propaganda strategies".[89] Connection to the Internet is restricted and censored.[90] The People's Republic of China employs sophisticated censorship mechanisms, referred to as the Golden Shield Project, to monitor the internet. Popular search engines such as Baidu also remove politically sensitive search results.[91][92][93] Strict censorship existed in the Eastern Bloc.[94] Throughout the bloc, the various ministries of culture held a tight rein on their writers.[95] Cultural products there reflected the propaganda needs of the state.[95] Party-approved censors exercised strict control in the early years.[96] In the Stalinist period, even the weather forecasts were changed if they suggested that the sun might not shine on May Day.[96] Under Nicolae Ceauşescu in Romania, weather reports were doctored so that the temperatures were not seen to rise above or fall below the levels which dictated that work must stop.[96] Possession and use of copying machines was tightly controlled in order to hinder the production and distribution of samizdat, illegal self-published books and magazines. Possession of even a single samizdat manuscript, such as a book by Andrei Sinyavsky, was a serious crime which might involve a visit from the KGB. Another outlet for works which did not find favor with the authorities was publishing abroad. Amid declining car sales in 2020, France banned a television ad by a Dutch bike company, saying the ad "unfairly discredited the automobile industry".[97] The Constitution of India guarantees freedom of expression, but places certain restrictions on content, with a view towards maintaining communal and religious harmony, given the history of communal tension in the nation.[98] According to the Information Technology Rules 2011, objectionable content includes anything that "threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign states or public order".[99] Notably, many pornographic websites are blocked in India.
Iraq under Baathist Saddam Hussein had much the same techniques of press censorship as did Romania under Nicolae Ceauşescu, but with greater potential violence.[100] During the GHQ occupation of Japan after World War II, any criticism of the Allies' pre-war policies, the SCAP, the Far East Military Tribunal, or the inquiries against the United States, and any direct or indirect reference to the role played by the Allied High Command in drafting Japan's new constitution or to the censorship of publications, movies, newspapers and magazines, was subject to massive censorship, purges, and media blackouts.[101] In the four years (September 1945–November 1949) during which the CCD was active, 200 million pieces of mail and 136 million telegrams were opened, and telephones were tapped 800,000 times. Since no criticism of the occupying forces was allowed for acts such as the dropping of the atomic bomb or rape and robbery by US soldiers, a strict check was carried out. Those who got caught were put on a blacklist called the watchlist, and the persons and the organizations to which they belonged were investigated in detail, which made it easier to dismiss or arrest such "disturbing molecules".[102] Under subsections 48(3) and (4) of the Penang Islamic Religious Administration Enactment 2004, non-Muslims in Malaysia are penalized for using the following words, or writing or publishing them, in any form, version or translation in any language, or for using them in any publicity material in any medium: "Allah", "Firman Allah", "Ulama", "Hadith", "Ibadah", "Kaabah", "Qadhi'", "Illahi", "Wahyu", "Mubaligh", "Syariah", "Qiblat", "Haji", "Mufti", "Rasul", "Iman", "Dakwah", "Wali", "Fatwa", "Imam", "Nabi", "Sheikh", "Khutbah", "Tabligh", "Akhirat", "Azan", "Al Quran", "As Sunnah", "Auliya'", "Karamah", "False Moon God", "Syahadah", "Baitullah", "Musolla", "Zakat Fitrah", "Hajjah", "Taqwa" and "Soleh".[103][104][105] On 4 March 2022, Russian President Vladimir Putin signed into law a bill introducing prison sentences of up to 15 years for those who publish "knowingly false information" about the Russian military and its operations, leading some media outlets in Russia to stop reporting on Ukraine or to shut down entirely.[106][107] Although the 1993 Russian Constitution has an article expressly prohibiting censorship,[108] the Russian censorship apparatus Roskomnadzor ordered the country's media to use only information from Russian state sources or face fines and blocks.[109] As of December 2022, more than 4,000 people had been prosecuted under "fake news" laws in connection with the Russian invasion of Ukraine.[110] Novaya Gazeta's editor-in-chief Dmitry Muratov was awarded the 2021 Nobel Peace Prize for his "efforts to safeguard freedom of expression".
In March 2022, Novaya Gazeta suspended its print activities after receiving a second warning from Roskomnadzor.[111] According to Christian Mihr, executive director of Reporters Without Borders, "censorship in Serbia is neither direct nor transparent, but is easy to prove."[112] According to Mihr, there are numerous examples of censorship and self-censorship in Serbia.[112] Serbian prime minister Aleksandar Vučić has proved "very sensitive to criticism, even on critical questions," as was the case with Natalija Miletic, a correspondent for Deutsche Welle Radio, who questioned him in Berlin about the media situation in Serbia and about allegations that some ministers in the Serbian government had plagiarized their diplomas, and who later received threats and offensive articles in the Serbian press.[112] Multiple news outlets have accused Vučić of anti-democratic strongman tendencies.[113][114][115][116][117] In July 2014, journalists' associations were concerned about the freedom of the media in Serbia, in which Vučić came under criticism.[118][119] In September 2015, five members of the United States Congress (Eddie Bernice Johnson, Carlos Curbelo, Scott Perry, Adam Kinzinger, and Zoe Lofgren) informed Vice President of the United States Joseph Biden that Aleksandar's brother, Andrej Vučić, was leading a group responsible for deteriorating media freedom in Serbia.[120] In the Republic of Singapore, Section 33 of the Films Act originally banned the making, distribution and exhibition of "party political films", on pain of a fine not exceeding $100,000 or imprisonment for a term not exceeding two years.[121] The Act further defines a "party political film" as any film or video made by or on behalf of any political party in Singapore, or directed towards any political end in Singapore. In 2001, the short documentary A Vision of Persistence, about opposition politician J. B. Jeyaretnam, was banned for being a "party political film". The makers of the documentary, all lecturers at the Ngee Ann Polytechnic, later submitted written apologies and withdrew the documentary from being screened at the 2001 Singapore International Film Festival in April, having been told they could be charged in court.[122] Another short documentary called Singapore Rebel by Martyn See, which documented Singapore Democratic Party leader Dr Chee Soon Juan's acts of civil disobedience, was banned from the 2005 Singapore International Film Festival on the same grounds, and See was investigated for possible violations of the Films Act.[123] This law, however, is often disregarded when such political films are made in support of the ruling People's Action Party (PAP). Channel NewsAsia's five-part documentary series on Singapore's PAP ministers in 2005, for example, was not considered a party political film.[124] Exceptions are also made when political films are made concerning political parties of other nations. Films such as Michael Moore's 2004 documentary Fahrenheit 9/11 are thus allowed to screen regardless of the law.[125] Since March 2009, the Films Act has been amended to allow party political films as long as they are deemed factual and objective by a consultative committee. Some months later, this committee lifted the ban on Singapore Rebel.[126] Independent journalism did not exist in the Soviet Union until Mikhail Gorbachev became its leader. Gorbachev adopted glasnost (openness), a political reform aimed at reducing censorship; before glasnost, all reporting was directed by the Communist Party or related organizations. Pravda, the predominant newspaper in the Soviet Union, had a monopoly.
Foreign newspapers were available only if they were published by communist parties sympathetic to the Soviet Union. Online access to all language versions of Wikipedia was blocked in Turkey on 29 April 2017 by Erdoğan's government.[127] Article 299 of the Turkish Penal Code makes it illegal to "insult the President of Turkey". A person convicted under this article can be sentenced to a prison term of between one and four years, and if the violation was made in public the sentence can be increased by one sixth.[128] Prosecutions often target critics of the government, independent journalists, and political cartoonists.[129] Between 2014 and 2019, 128,872 investigations were launched for this offense and prosecutors opened 27,717 criminal cases.[130] From December 1956 until 1974, the Irish republican political party Sinn Féin was banned from participating in elections by the Northern Ireland Government.[131] From 1988 until 1994, the British government prevented the UK media from broadcasting the voices (but not the words) of Sinn Féin and ten Irish republican and Ulster loyalist groups.[132] In the United States, most forms of censorship are self-imposed rather than enforced by the government. The government does not routinely censor material, although state and local governments often restrict what is provided in libraries and public schools.[133] In addition, the distribution, receipt, and transmission (but not mere private possession) of obscene material may be prohibited by law. Furthermore, under FCC v. Pacifica Foundation, the FCC has the power to prohibit the transmission of indecent material over broadcast. Additionally, critics of campaign finance reform in the United States say this reform imposes widespread restrictions on political speech.[134][135] In 1973, a military coup took power in Uruguay, and the State practiced censorship. For example, writer Eduardo Galeano was imprisoned and later forced to flee. His book Open Veins of Latin America was banned by the right-wing military government, not only in Uruguay but also in Chile and Argentina.[136]
https://en.wikipedia.org/wiki/Censorship
SIGIR is the Association for Computing Machinery's Special Interest Group on Information Retrieval. The scope of the group's specialty is the theory and application of computers to the acquisition, organization, storage, retrieval and distribution of information; emphasis is placed on working with non-numeric information, ranging from natural language to highly structured databases. The annual international SIGIR conference, which began in 1978, is considered the most important in the field of information retrieval. SIGIR also sponsors the annual Joint Conference on Digital Libraries (JCDL) in association with SIGWEB, the Conference on Information and Knowledge Management (CIKM), and the International Conference on Web Search and Data Mining (WSDM) in association with SIGKDD, SIGMOD, and SIGWEB. The group gives out several awards for contributions to the field of information retrieval. The most important is the Gerard Salton Award (named after the computer scientist Gerard Salton), which is awarded every three years to an individual who has made "significant, sustained and continuing contributions to research in information retrieval". Additionally, SIGIR presents a Best Paper Award[1] to recognize the highest-quality paper at each conference. The "Test of Time" Award[2] is a more recent award given to a paper that has had "long-lasting influence, including impact on a subarea of information retrieval research, across subareas of information retrieval research, and outside of the information retrieval research community". This award is selected from the set of full papers presented at the main SIGIR conference 10–12 years earlier. The ACM SIGIR Academy[3][4] is a group of researchers honored by SIGIR. Each year, 3–5 new members are elected (in addition to other "very senior members of the IR community" who are "automatically" inducted) for having made significant, cumulative contributions to the development of the field of information retrieval and for influencing the research of others. These are the principal leaders of the field, whose efforts have shaped the discipline and/or industry through significant research, innovation, and/or service.
https://en.wikipedia.org/wiki/Special_Interest_Group_on_Information_Retrieval
In computer science, function-level programming refers to one of the two contrasting programming paradigms identified by John Backus in his work on programs as mathematical objects, the other being value-level programming. In his 1977 Turing Award lecture, Backus set forth what he considered to be the need to switch to a different philosophy in programming language design:[1] Programming languages appear to be in trouble. Each successive language incorporates, with a little cleaning up, all the features of its predecessors plus a few more. [...] Each new language claims new and fashionable features... but the plain fact is that few languages make programming sufficiently cheaper or more reliable to justify the cost of producing and learning to use them. He designed FP to be the first programming language to specifically support the function-level programming style. A function-level program is variable-free (cf. point-free programming), since program variables, which are essential in value-level definitions, are not needed in function-level programs. In the function-level style of programming, a program is built directly from programs that are given at the outset, by combining them with program-forming operations or functionals. Thus, in contrast with the value-level approach, which applies the given programs to values to form a succession of values culminating in the desired result value, the function-level approach applies program-forming operations to the given programs to form a succession of programs culminating in the desired result program. As a result, the function-level approach to programming invites study of the space of programs under program-forming operations, looking to derive useful algebraic properties of these program-forming operations. The function-level approach offers the possibility of making the set of programs a mathematical space by emphasizing the algebraic properties of the program-forming operations over the space of programs. Another potential advantage of the function-level view is the ability to use only strict functions and thereby have bottom-up semantics, which are the simplest kind of all. Yet another is the existence of function-level definitions that are not the lifted (that is, lifted from a lower value-level to a higher function-level) image of any existing value-level one: these (often terse) function-level definitions represent a more powerful style of programming not available at the value-level. When Backus studied and publicized his function-level style of programming, his message was mostly misunderstood[2] as supporting the traditional functional programming style languages instead of his own FP and its successor FL. Backus calls functional programming applicative programming;[clarification needed] his function-level programming is a particular, constrained type. A key distinction from functional languages is that Backus' language has a fixed hierarchy of types: objects (atoms and sequences of objects), functions (which map objects to objects), and functional forms (which combine functions into new functions). The only way to generate new functions is to use one of the functional forms, which are fixed: you cannot build your own functional form (at least not within FP; you can within FFP, Formal FP). This restriction means that functions in FP form a module (generated by the built-in functions) over the algebra of functional forms, and are thus algebraically tractable. For instance, the general question of equality of two functions is equivalent to the halting problem, and is undecidable, but equality of two functions in FP is just equality in the algebra, and thus (Backus imagines) easier.
Even today, many users of lambda-style languages misinterpret Backus' function-level approach as a restrictive variant of the lambda style, which is a de facto value-level style. In fact, Backus would not have disagreed with the 'restrictive' accusation: he argued that it was precisely due to such restrictions that a well-formed mathematical space could arise, in a manner analogous to the way structured programming limits programming to a restricted version of all the control-flow possibilities available in plain, unrestricted unstructured programs. The value-free style of FP is closely related to the equational logic of a cartesian-closed category. The canonical function-level programming language is FP. Others include FL and J.
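Backus's canonical FP definition of the inner product, Def IP ≡ (Insert +) ∘ (ApplyToAll ×) ∘ Transpose, can be approximated in Python to show the flavour of the style. This is only a sketch, not an implementation of FP: the names compose, apply_to_all and insert are ad hoc Python stand-ins for Backus's functional forms.

from functools import reduce

# Ad hoc stand-ins for FP's program-forming operations (functional forms).

def compose(*fs):
    # (f o g o h)(x): the rightmost function is applied first.
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

def apply_to_all(f):
    # Backus's ApplyToAll: lift a function over each element of a sequence.
    return lambda xs: [f(x) for x in xs]

def insert(f):
    # Backus's Insert: fold a binary function over a sequence.
    return lambda xs: reduce(lambda a, b: f([a, b]), xs)

# Primitive programs, each mapping one object (here, a sequence) to an object.
add = lambda pair: pair[0] + pair[1]
mul = lambda pair: pair[0] * pair[1]
transpose = lambda rows: [list(col) for col in zip(*rows)]

# Def InnerProduct = (Insert add) o (ApplyToAll mul) o Transpose
inner_product = compose(insert(add), apply_to_all(mul), transpose)

print(inner_product([[1, 2, 3], [6, 5, 4]]))  # 1*6 + 2*5 + 3*4 = 28

The definition of inner_product mentions no argument variables at all: the program is assembled purely by program-forming operations applied to other programs, which is the defining trait of the function-level style.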
https://en.wikipedia.org/wiki/Function-level_programming
The LRE Map (Language Resources and Evaluation) is a large, freely accessible database of resources dedicated to natural language processing. The original feature of the LRE Map is that its records are collected during the submission process of several major natural language processing conferences. The records are then cleaned and gathered into a global database called the "LRE Map".[1] The LRE Map is intended to be an instrument for collecting information about language resources and to become, at the same time, a community for users: a place to share and discover resources, discuss opinions, provide feedback, and discover new trends. It is an instrument for discovering, searching and documenting language resources, here intended in a broad sense, as both data and tools. The large amount of information contained in the Map can be analyzed in many different ways. For instance, the LRE Map can provide information about the most frequent type of resource, the most represented language, the applications for which resources are used or are being developed, the proportion of new resources versus already existing ones, or the way in which resources are distributed to the community. Several institutions worldwide maintain catalogues of language resources (ELRA, LDC, the NICT Universal Catalogue, the ACL Data and Code Repository, OLAC, LT World, etc.).[2] However, it has been estimated that only 10% of existing resources are known, either through distribution catalogues or via direct publicity by providers (web sites and the like). The rest remains hidden, the only occasions where it briefly emerges being when a resource is presented in the context of a research paper or report at some conference. Even in this case, nevertheless, a resource may remain in the background simply because the focus of the research is not on the resource per se. The LRE Map originated under the name "LREC Map" during the preparation of the LREC 2010 conference.[3] More specifically, the idea was discussed within the FlaReNet project, and in collaboration with ELRA and the Institute of Computational Linguistics of CNR in Pisa, the Map was put in place at LREC 2010.[4] The LREC organizers asked the authors to provide some basic information about all the resources (in a broad sense, i.e. including tools, standards and evaluation packages), either used or created, described in their papers. All these descriptors were then gathered in a global matrix called the LREC Map. The same methodology and requirements for authors have since been applied and extended to other conferences, namely COLING-2010,[5] EMNLP-2010,[6] RANLP-2011,[7] LREC 2012,[8] LREC 2014[9] and LREC 2016.[10] After this generalization to other conferences, the LREC Map was renamed the LRE Map. The size of the database increases over time; the data collected amount to 4776 entries, with each resource described according to a fixed set of attributes. The LRE Map is an important tool for charting the NLP field: compared to other studies based on subjective scorings, the LRE Map is built from reported facts, and beyond information gathering it has great potential for many other uses. The data were cleaned and sorted by Joseph Mariani (CNRS-LIMSI IMMI) and Gil Francopoulo (CNRS-LIMSI IMMI + Tagmatica) in order to compute the various matrices of the final FlaReNet[11] reports. One of them, the matrix for written data at LREC 2010, shows that English is the most studied language, followed by French and German, and then by Italian and Spanish.
The LRE Map has since been extended to the Language Resources and Evaluation journal[12] and other conferences.
https://en.wikipedia.org/wiki/LRE_Map
In mathematics, specifically measure theory, a complex measure generalizes the concept of measure by letting it have complex values.[1] In other words, one allows for sets whose size (length, area, volume) is a complex number. Formally, a complex measure $\mu$ on a measurable space $(X,\Sigma)$ is a complex-valued function that is sigma-additive: for any sequence $(A_n)_{n\in\mathbb{N}}$ of disjoint sets belonging to $\Sigma$, one has

$$\mu\left(\bigcup_{n=1}^{\infty}A_n\right)=\sum_{n=1}^{\infty}\mu(A_n).$$

As $\bigcup_{n=1}^{\infty}A_n=\bigcup_{n=1}^{\infty}A_{\sigma(n)}$ for any permutation (bijection) $\sigma:\mathbb{N}\to\mathbb{N}$, it follows that $\sum_{n=1}^{\infty}\mu(A_n)$ converges unconditionally (hence, since $\mathbb{C}$ is finite dimensional, absolutely). One can define the integral of a complex-valued measurable function with respect to a complex measure in the same way as the Lebesgue integral of a real-valued measurable function with respect to a non-negative measure, by approximating a measurable function with simple functions.[2] Just as in the case of ordinary integration, this more general integral might fail to exist, or its value might be infinite (the complex infinity). Another approach is to not develop a theory of integration from scratch, but rather use the already available concept of integral of a real-valued function with respect to a non-negative measure.[3] To that end, it is a quick check that the real and imaginary parts $\mu_1$ and $\mu_2$ of a complex measure $\mu$ are finite-valued signed measures. One can apply the Hahn–Jordan decomposition to these measures to split them as

$$\mu_1=\mu_1^+-\mu_1^-\qquad\text{and}\qquad\mu_2=\mu_2^+-\mu_2^-,$$

where $\mu_1^+,\mu_1^-,\mu_2^+,\mu_2^-$ are finite-valued non-negative measures (which are unique in some sense). Then, for a measurable function $f$ which is real-valued for the moment, one can define

$$\int_X f\,d\mu=\left(\int_X f\,d\mu_1^+-\int_X f\,d\mu_1^-\right)+i\left(\int_X f\,d\mu_2^+-\int_X f\,d\mu_2^-\right)$$

as long as the expression on the right-hand side is defined, that is, all four integrals exist and when adding them up one does not encounter the indeterminate $\infty-\infty$.[3] Given now a complex-valued measurable function $f$, one can integrate its real and imaginary components separately as illustrated above and define, as expected,

$$\int_X f\,d\mu=\int_X\operatorname{Re}(f)\,d\mu+i\int_X\operatorname{Im}(f)\,d\mu.$$

For a complex measure $\mu$, one defines its variation, or absolute value, $|\mu|$ by the formula

$$|\mu|(A)=\sup\sum_{n=1}^{\infty}|\mu(A_n)|,$$

where $A$ is in $\Sigma$ and the supremum runs over all sequences of disjoint sets $(A_n)_n$ whose union is $A$. Taking only finite partitions of the set $A$ into measurable subsets, one obtains an equivalent definition. It turns out that $|\mu|$ is a non-negative finite measure. In the same way as a complex number can be represented in a polar form, one has a polar decomposition for a complex measure: there exists a measurable function $\theta$ with real values such that

$$d\mu=e^{i\theta}\,d|\mu|,$$

meaning

$$\int_X f\,d\mu=\int_X fe^{i\theta}\,d|\mu|$$

for any absolutely integrable measurable function $f$, i.e., $f$ satisfying

$$\int_X|f|\,d|\mu|<\infty.$$

One can use the Radon–Nikodym theorem to prove that the variation is a measure and that the polar decomposition exists. The sum of two complex measures is a complex measure, as is the product of a complex measure by a complex number. That is to say, the set of all complex measures on a measurable space $(X,\Sigma)$ forms a vector space over the complex numbers. Moreover, the total variation $\|\cdot\|$ defined as

$$\|\mu\|=|\mu|(X)$$

is a norm, with respect to which the space of complex measures is a Banach space.
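A concrete example, standard in the textbook literature (and stated here as an illustration rather than as part of the article above), may help fix these definitions. Let $\lambda$ be a positive measure on $(X,\Sigma)$ and let $f\in L^1(\lambda)$ be complex-valued. Then

$$\mu(A)=\int_A f\,d\lambda,\qquad A\in\Sigma,$$

defines a complex measure. Its variation is given by the density $|f|$,

$$|\mu|(A)=\int_A|f|\,d\lambda,$$

and writing $f=e^{i\theta}|f|$ for a measurable $\theta$ (chosen arbitrarily on the set where $f=0$) yields the polar decomposition $d\mu=e^{i\theta}\,d|\mu|$. In this case the total variation norm is simply $\|\mu\|=\int_X|f|\,d\lambda$.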
https://en.wikipedia.org/wiki/Complex_measure#Variation_of_a_complex_measure_and_polar_decomposition
Hacking back is a technique to counter cybercrime by hacking the computing devices of the attacker. The effectiveness[1][2][3] and ethics of hacking back are disputed.[4] Its legality is also heavily disputed; in any case, both participating parties can still be prosecuted for their crimes. A bill proposed in the United States in 2017 would have made this possible; its consideration ended in 2019, and it reappeared in 2022.[clarification needed]
https://en.wikipedia.org/wiki/Hacking_back
In computing, online analytical processing (OLAP) (/ˈoʊlæp/) is an approach to quickly answer multi-dimensional analytical (MDA) queries.[1] The term OLAP was created as a slight modification of the traditional database term online transaction processing (OLTP).[2] OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining.[3] Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM),[4] budgeting and forecasting, financial reporting and similar areas, with new applications emerging, such as agriculture.[5] OLAP tools enable users to analyse multidimensional data interactively from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing.[6]: 402–403 Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. By contrast, drill-down is a technique that allows users to navigate through the details. For instance, users can view the sales by the individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slice) a specific set of data of the OLAP cube and view (dice) the slices from different viewpoints. These viewpoints are sometimes called dimensions (such as looking at the same sales by salesperson, or by date, or by customer, or by product, or by region, etc.). Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time.[7] They borrow aspects of navigational databases, hierarchical databases and relational databases. OLAP is typically contrasted to OLTP (online transaction processing), which is generally characterized by much less complex queries, in a larger volume, to process transactions rather than for the purpose of business intelligence or reporting. Whereas OLAP systems are mostly optimized for reads, OLTP has to process all kinds of queries (read, insert, update and delete). At the core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures that are categorized by dimensions. The measures are placed at the intersections of the hypercube, which is spanned by the dimensions as a vector space. The usual interface to manipulate an OLAP cube is a matrix interface, like pivot tables in a spreadsheet program, which performs projection operations along the dimensions, such as aggregation or averaging. The cube metadata is typically created from a star schema, snowflake schema or fact constellation of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables. Each measure can be thought of as having a set of labels, or meta-data, associated with it. A dimension is what describes these labels; it provides information about the measure. A simple example would be a cube that contains a store's sales as a measure, and Date/Time as a dimension. Each Sale has a Date/Time label that describes more about that sale. These basic operations are sketched on a toy data set below.
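The following Python sketch uses the pandas library on an invented fact table; the column names and figures are hypothetical, and a real OLAP server would precompute and index such results rather than scan rows on each query.

import pandas as pd

# A toy fact table: each row is a sale with its dimension labels.
sales = pd.DataFrame({
    "division": ["East", "East", "East", "West", "West", "West"],
    "office":   ["Boston", "Boston", "NYC", "LA", "LA", "Seattle"],
    "product":  ["A", "B", "A", "A", "B", "B"],
    "amount":   [100, 150, 200, 120, 80, 60],
})

# Roll-up (consolidation): aggregate offices up to the division level.
rollup = sales.groupby("division")["amount"].sum()

# Drill-down: navigate into the detail below each division.
drilldown = sales.groupby(["division", "office", "product"])["amount"].sum()

# Slice: fix one dimension member (division == "West") ...
west = sales[sales["division"] == "West"]
# ... and dice: view the slice from another pair of viewpoints.
dice = west.pivot_table(values="amount", index="office",
                        columns="product", aggfunc="sum")

print(rollup, drilldown, dice, sep="\n\n")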
Multidimensional structure is defined as "a variation of the relational model that uses multidimensional structures to organize data and express the relationships between data".[6]: 177 The structure is broken into cubes, and the cubes are able to store and access data within the confines of each cube. "Each cell within a multidimensional structure contains aggregated data related to elements along each of its dimensions".[6]: 178 Even when data is manipulated it remains easy to access and continues to constitute a compact database format. The data still remains interrelated. Multidimensional structure is quite popular for analytical databases that use online analytical processing (OLAP) applications.[6] Analytical databases adopt this structure because of its ability to deliver answers to complex business queries swiftly. Data can be viewed from different angles, which gives a broader perspective of a problem than is possible with other models.[8]

It has been claimed that for complex queries OLAP cubes can produce an answer in around 0.1% of the time required for the same query on OLTP relational data.[9][10] The most important mechanism in OLAP which allows it to achieve such performance is the use of aggregations. Aggregations are built from the fact table by changing the granularity on specific dimensions and aggregating up data along these dimensions, using an aggregate function (or aggregation function). The number of possible aggregations is determined by every possible combination of dimension granularities. The combination of all possible aggregations and the base data contains the answers to every query which can be answered from the data.[11]

Because usually there are many aggregations that can be calculated, often only a predetermined number are fully calculated; the remainder are solved on demand. The problem of deciding which aggregations (views) to calculate is known as the view selection problem. View selection can be constrained by the total size of the selected set of aggregations, the time to update them from changes in the base data, or both. The objective of view selection is typically to minimize the average time to answer OLAP queries, although some studies also minimize the update time. View selection is NP-complete. Many approaches to the problem have been explored, including greedy algorithms, randomized search, genetic algorithms and the A* search algorithm.

Some aggregation functions can be computed for the entire OLAP cube by precomputing values for each cell, and then computing the aggregation for a roll-up of cells by aggregating these aggregates, applying a divide and conquer algorithm to the multidimensional problem to compute them efficiently.[12] For example, the overall sum of a roll-up is just the sum of the sub-sums in each cell. Functions that can be decomposed in this way are called decomposable aggregation functions, and include COUNT, MAX, MIN, and SUM, which can be computed for each cell and then directly aggregated; these are known as self-decomposable aggregation functions.[13] In other cases, the aggregate function can be computed by computing auxiliary numbers for cells, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end).
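The auxiliary-numbers idea can be illustrated in a few lines. This is a minimal sketch in plain Python, with invented cell data, showing how AVERAGE rolls up exactly from per-cell (sum, count) pairs without revisiting the base rows.

```python
# Each cell of the cube keeps an auxiliary (sum, count) pair for its rows.
cells = {
    ("East", "pen"):  (10.0, 1),
    ("East", "book"): (25.0, 1),
    ("West", "pen"):  (20.0, 2),
}

def rollup_average(partials):
    """Combine per-cell (sum, count) pairs, dividing only at the end."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

# Roll-up over all cells: the average is exact, yet the base rows
# were never revisited.
print(rollup_average(cells.values()))  # (10 + 25 + 20) / (1 + 1 + 2) = 13.75
```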
In other cases, the aggregate function cannot be computed without analyzing the entire set at once, though in some cases approximations can be computed; examples include DISTINCT COUNT, MEDIAN, and MODE; for example, the median of a set is not the median of medians of subsets. These latter functions are difficult to implement efficiently in OLAP, as they require computing the aggregate function on the base data, either computing them online (slow) or precomputing them for possible roll-ups (large space).

OLAP systems have been traditionally categorized using the following taxonomy.[14]

MOLAP (multi-dimensional online analytical processing) is the classic form of OLAP and is sometimes referred to as just OLAP. MOLAP stores data in an optimized multi-dimensional array storage, rather than in a relational database. Some MOLAP tools require the pre-computation and storage of derived data, such as consolidations – the operation known as processing. Such MOLAP tools generally utilize a pre-calculated data set referred to as a data cube. The data cube contains all the possible answers to a given range of questions. As a result, such tools have a very fast response to queries. On the other hand, updating can take a long time depending on the degree of pre-computation. Pre-computation can also lead to what is known as data explosion. Other MOLAP tools, particularly those that implement the functional database model, do not pre-compute derived data but make all calculations on demand other than those that were previously requested and stored in a cache. Examples of commercial products that use MOLAP are Cognos Powerplay, Oracle Database OLAP Option, MicroStrategy, Microsoft Analysis Services, Essbase, TM1, Jedox, and icCube.

ROLAP works directly with relational databases and does not require pre-computation. The base data and the dimension tables are stored as relational tables, and new tables are created to hold the aggregated information. It depends on a specialized schema design. This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a "WHERE" clause in the SQL statement (a sketch appears below). ROLAP tools do not use pre-calculated data cubes but instead pose the query to the standard relational database and its tables in order to bring back the data required to answer the question. ROLAP tools feature the ability to ask any question because the methodology is not limited to the contents of a cube. ROLAP also has the ability to drill down to the lowest level of detail in the database. While ROLAP uses a relational database source, generally the database must be carefully designed for ROLAP use. A database which was designed for OLTP will not function well as a ROLAP database. Therefore, ROLAP still involves creating an additional copy of the data. However, since it is a database, a variety of technologies can be used to populate the database. In the OLAP industry, ROLAP is usually perceived as being able to scale for large data volumes but suffering from slower query performance as opposed to MOLAP. The OLAP Survey, the largest independent survey across all major OLAP products, conducted over six years (2001 to 2006), consistently found that companies using ROLAP reported slower performance than those using MOLAP, even when data volumes were taken into consideration.
However, as with any survey, there are a number of subtle issues that must be taken into account when interpreting the results. Some companies select ROLAP because they intend to re-use existing relational database tables; these tables will frequently not be optimally designed for OLAP use. The superior flexibility of ROLAP tools allows this less-than-optimal design to work, but performance suffers. MOLAP tools, in contrast, would force the data to be re-loaded into an optimal OLAP design.

The undesirable trade-off between additional ETL cost and slow query performance has ensured that most commercial OLAP tools now use a "Hybrid OLAP" (HOLAP) approach, which allows the model designer to decide which portion of the data will be stored in MOLAP and which portion in ROLAP. There is no clear agreement across the industry as to what constitutes "Hybrid OLAP", except that a database will divide data between relational and specialized storage.[15] For example, for some vendors, a HOLAP database will use relational tables to hold the larger quantities of detailed data and use specialized storage for at least some aspects of the smaller quantities of more-aggregate or less-detailed data. HOLAP addresses the shortcomings of MOLAP and ROLAP by combining the capabilities of both approaches. HOLAP tools can utilize both pre-calculated cubes and relational data sources. In one mode, HOLAP stores aggregations in MOLAP for fast query performance, and detailed data in ROLAP to optimize the time of cube processing. In another mode, HOLAP stores some slice of data, usually the more recent one (i.e. sliced by the Time dimension), in MOLAP for fast query performance, and older data in ROLAP. Moreover, some dices can be stored in MOLAP and others in ROLAP, leveraging the fact that in a large cuboid there will be dense and sparse subregions.[16] The first product to provide HOLAP storage was Holos, but the technology also became available in other commercial products such as Microsoft Analysis Services, Oracle Database OLAP Option, MicroStrategy and SAP AG BI Accelerator. The hybrid OLAP approach combines ROLAP and MOLAP technology, benefiting from the greater scalability of ROLAP and the faster computation of MOLAP. For example, a HOLAP server may store large volumes of detailed data in a relational database, while aggregations are kept in a separate MOLAP store. The Microsoft SQL Server 7.0 OLAP Services supports a hybrid OLAP server. Each type has certain benefits, although there is disagreement about the specifics of the benefits between providers. Several other acronyms are also sometimes used, although they are less widespread than the ones above.

Unlike relational databases, which had SQL as the standard query language, and widespread APIs such as ODBC, JDBC and OLEDB, there was no such unification in the OLAP world for a long time. The first real standard API was the OLE DB for OLAP specification from Microsoft, which appeared in 1997 and introduced the MDX query language. Several OLAP vendors – both server and client – adopted it. In 2001, Microsoft and Hyperion announced the XML for Analysis specification, which was endorsed by most of the OLAP vendors. Since this also used MDX as a query language, MDX became the de facto standard.[26] Since September 2011, LINQ can be used to query SSAS OLAP cubes from Microsoft .NET.[27]

The first product that performed OLAP queries was Express, which was released in 1970 (and acquired by Oracle in 1995 from Information Resources).[28] However, the term did not appear until 1993, when it was coined by Edgar F.
Codd, who has been described as "the father of the relational database". Codd's paper[1] resulted from a short consulting assignment which Codd undertook for the former Arbor Software (later Hyperion Solutions, and in 2007 acquired by Oracle), as a sort of marketing coup. The company had released its own OLAP product, Essbase, a year earlier. As a result, Codd's "twelve laws of online analytical processing" were explicit in their reference to Essbase. There was some ensuing controversy, and when Computerworld learned that Codd was paid by Arbor, it retracted the article. The OLAP market experienced strong growth in the late 1990s, with dozens of commercial products going to market. In 1998, Microsoft released its first OLAP server, Microsoft Analysis Services, which drove wide adoption of OLAP technology and moved it into the mainstream.

OLAP clients include many spreadsheet programs such as Excel, web applications, SQL clients, dashboard tools, and others. Many clients support interactive data exploration where users select dimensions and measures of interest. Some dimensions are used as filters (for slicing and dicing the data) while others are selected as the axes of a pivot table or pivot chart. Users can also vary the aggregation level (drilling down or rolling up) of the displayed view. Clients can also offer a variety of graphical widgets such as sliders, geographic maps, heat maps and more, which can be grouped and coordinated as dashboards. An extensive list of clients appears in the visualization column of the comparison of OLAP servers table. A ranking of the top OLAP vendors in 2006, with revenue figures in millions of US dollars, has been published.[29]
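As noted earlier, in a ROLAP system each slicing or dicing action is equivalent to adding a WHERE clause to the SQL posed against the relational tables. A minimal sketch with Python's built-in sqlite3 module (schema and names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO fact_sales VALUES (?, ?, ?)",
    [("East", "pen", 10.0), ("East", "book", 25.0), ("West", "pen", 20.0)],
)

# Slicing to the East region is just a WHERE clause on the aggregate query;
# no pre-calculated cube is consulted.
rows = conn.execute(
    "SELECT product, SUM(amount) FROM fact_sales "
    "WHERE region = 'East' GROUP BY product"
).fetchall()
print(rows)  # e.g. [('book', 25.0), ('pen', 10.0)]
```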
https://en.wikipedia.org/wiki/ROLAP
The Ptolemy Project is an ongoing project aimed at modeling, simulating, and designing concurrent, real-time, embedded systems. The focus of the Ptolemy Project is on assembling concurrent components. The principal product of the project is the Ptolemy II model-based design and simulation tool. The Ptolemy Project is conducted in the Industrial Cyber-Physical Systems Center (iCyPhy) in the Department of Electrical Engineering and Computer Sciences of the University of California at Berkeley, and is directed by Prof. Edward A. Lee. The key underlying principle in the project is the use of well-defined models of computation that govern the interaction between components. A major problem area being addressed is the use of heterogeneous mixtures of models of computation.[1] The project is named after Claudius Ptolemaeus, the 2nd-century Greek astronomer, mathematician, and geographer. The Kepler Project, a community-driven collaboration among researchers at three other University of California campuses, has created the Kepler scientific workflow system, which is based on Ptolemy II.
https://en.wikipedia.org/wiki/Ptolemy_Project
Flat memory model or linear memory model refers to a memory addressing paradigm in which "memory appears to the program as a single contiguous address space."[1] The CPU can directly (and linearly) address all of the available memory locations without having to resort to any sort of bank switching, memory segmentation or paging schemes. Memory management and address translation can still be implemented on top of a flat memory model in order to facilitate the operating system's functionality, resource protection, multitasking, or to increase the memory capacity beyond the limits imposed by the processor's physical address space, but the key feature of a flat memory model is that the entire memory space is linear, sequential and contiguous.

In a simple controller, or in a single-tasking embedded application, where memory management is neither needed nor desirable, the flat memory model is the most appropriate, because it provides the simplest interface from the programmer's point of view, with direct access to all memory locations and minimum design complexity. In a general-purpose computer system, which requires multitasking, resource allocation, and protection, the flat memory system must be augmented by some memory management scheme, which is typically implemented through a combination of dedicated hardware (inside or outside the CPU) and software built into the operating system. The flat memory model (at the physical addressing level) still provides the greatest flexibility for implementing this type of memory management. Most modern memory models fall into one of three categories: the flat memory model, the paged memory model, and the segmented memory model.

Within the x86 architectures, when operating in the real mode (or emulation), the physical address is computed as:[2]

physical address = (segment × 16) + offset

(i.e., the 16-bit segment register is shifted left by 4 bits and added to a 16-bit offset, resulting in a 20-bit address).
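The real-mode computation above amounts to one shift and one add; a small illustrative sketch in Python:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """Physical address = (segment << 4) + offset, masked to a 20-bit result."""
    return ((segment << 4) + offset) & 0xFFFFF

# 0xF000:0xFFF0 is the classic location of the x86 reset vector.
print(hex(real_mode_address(0xF000, 0xFFF0)))  # 0xffff0
```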
https://en.wikipedia.org/wiki/Flat_memory_model
A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures.[1] The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and the analysis of them is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.[4][5]

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.

In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups.
Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.[9]

Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11] In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski,[12] Alfred Radcliffe-Brown,[13][14] and Claude Lévi-Strauss.[15] A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes,[16] J. Clyde Mitchell and Elizabeth Bott Spillius,[17][18] often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6] Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis.[19] In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure.[20][21] Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.[22][23][24]

By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis.[25] Mark Granovetter[26] and Barry Wellman[27] are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30] Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks.

In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33] These patterns become more apparent as network size increases.
However, a global network analysis[34] of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis.[35][36] The nuances of a local system may be lost in a large network analysis; hence, the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.

At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality.[35] In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges.[37] Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.[38]

In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.[39] Organizations: Formal organizations are social groups that distribute tasks for a collective goal.[40] Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments.
In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures.[40] Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.[41]

Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.[42]

Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory, a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups.[43] Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; however, in general, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law.[44] The Barabási model of network evolution is an example of a scale-free network.

Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in the social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure.
In the case of agency-directed networks, these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.[45]

Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.[46] Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[47]

In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections.[48] Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters.[49] When two separate clusters possess non-redundant information, there is said to be a structural hole between them.[49] Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes.[49] Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters.[49] For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.[50]

Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions.
Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist.[51][52] Other work examines how network grouping of artists can affect an individual artist's auction performance.[53] An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career.

In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks.

The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats.[54] In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.[55]

Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages,[56][57] Indian slums,[58] or in the lab.[59] Still other experiments have documented the experimental induction of social contagion of voting behavior,[60] emotions,[61] risk perception,[62] and commercial products.[63] In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.[64][65]

The field of sociology focuses almost entirely on networks of outcomes of social interactions.
More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.[66]

Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems.[67] Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology.[68][69] In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo,[70] De Nooy,[71] Senekal,[72] and Lotker,[73] to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA.

Research in this area studies formal or informal organization relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations.[74] Many organizational social network studies focus on teams.[75] Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment,[76] organizational identification,[37] and interpersonal citizenship behaviour.[77]

Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations.[78] This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners.
The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations.[78] The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.[78] Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.[79][80][81] In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity.[79][82]

This particular cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales.

In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities.[48] Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress."[83] Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking.[84] In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms.[85] By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts.

There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations.[86] However, this study only analyzed Chinese firms, which tend to have strong communal sharing values.
Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted.[48] Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career.

Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world.[87] Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.[88]

Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to serve as a measure of the level of exposure of different groups to each other within a current social network of individuals in a certain area.[89]
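The use of a network to measure homophily or segregation, as described above, can be reduced to counting the share of ties that join same-attribute nodes. A minimal sketch in plain Python, with an invented toy network:

```python
# Toy network: node -> group label, plus an edge list.
group = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]

def homophily_index(edges, group):
    """Fraction of edges joining nodes with the same attribute value."""
    same = sum(1 for u, v in edges if group[u] == group[v])
    return same / len(edges)

# 3 of 4 ties are within-group; values near 1 indicate strong segregation.
print(homophily_index(edges, group))  # 0.75
```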
https://en.wikipedia.org/wiki/Social_network
typeof, alternately also typeOf and TypeOf, is an operator provided by several programming languages to determine the data type of a variable. This is useful when constructing programs that must accept multiple types of data without explicitly specifying the type. In languages that support polymorphism and type casting, the typeof operator may have one of two distinct meanings when applied to an object. In some languages, such as Visual Basic,[1] the typeof operator returns the dynamic type of the object. That is, it returns the true, original type of the object, irrespective of any type casting. In these languages, the typeof operator is the method for obtaining run-time type information. In other languages, such as C#[2] or D[3] and, to some degree, in C (as part of nonstandard extensions and proposed standard revisions),[4][5] the typeof operator returns the static type of the operand. That is, it evaluates to the declared type at that instant in the program, irrespective of its original form. These languages usually have other constructs for obtaining run-time type information, such as typeid.

As of C23, typeof is a part of the C standard. The operator typeof_unqual was also added, which is the same as typeof except that it removes cvr-qualification and atomic qualification.[6][7] In a non-standard (GNU) extension of the C programming language, typeof may be used to define a general macro for determining the maximum value of two parameters (a sketch of this idiom follows below).

Java does not have a keyword equivalent to typeof. All objects can use Object's getClass() method to return their class, which is always an instance of the Class class. All types can be explicitly named by appending ".class", even if they are not considered classes, for example int.class and String[].class. There is also the instanceof operator for type introspection, which takes an instance and a class name, and returns true for all subclasses of the given class. JavaScript and TypeScript[8] also provide a typeof operator. In VB.NET, the C# variant of "typeof" should be translated into VB.NET's GetType method. The TypeOf keyword in VB.NET is used to compare an object reference variable to a data type; TypeOf...Is expressions test the type compatibility of an object reference variable with various data types.
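The GNU C max macro referred to above is conventionally written with typeof and a statement expression so that each argument is evaluated exactly once. The following is a sketch of that well-known idiom (GNU extensions required, so it compiles with GCC or Clang but is not portable ISO C):

```c
#include <stdio.h>

/* GNU C: typeof plus a statement expression gives a type-generic max
   that evaluates each argument exactly once. */
#define max(a, b)           \
    ({ typeof(a) _a = (a);  \
       typeof(b) _b = (b);  \
       _a > _b ? _a : _b; })

int main(void) {
    printf("%d\n", max(3, 7));       /* 7   */
    printf("%.1f\n", max(2.5, 1.0)); /* 2.5 */
    return 0;
}
```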
https://en.wikipedia.org/wiki/Typeof
Markov chain geostatistics uses Markov chain spatial models, simulation algorithms and associated spatial correlation measures (e.g., the transiogram) based on the Markov chain random field theory, which extends a single Markov chain into a multi-dimensional random field for geostatistical modeling. A Markov chain random field is still a single spatial Markov chain. The spatial Markov chain moves or jumps in a space and decides its state at any unobserved location through interactions with its nearest known neighbors in different directions. The data interaction process can be well explained as a local sequential Bayesian updating process within a neighborhood. Because single-step transition probability matrices are difficult to estimate from sparse sample data and are impractical in representing the complex spatial heterogeneity of states, the transiogram, which is defined as a transition probability function over the distance lag, is proposed as the accompanying spatial measure of Markov chain random fields.
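A transiogram can be estimated empirically from a categorical sequence by tabulating transition frequencies at each distance lag. The following is a minimal one-dimensional sketch in Python; the function name and the toy data are invented for illustration, and real applications estimate transiograms from spatial (multi-dimensional) samples.

```python
from collections import Counter

def transiogram(sequence, j, k, max_lag):
    """Empirical p_jk(h): Pr(class k at x+h | class j at x), for h = 1..max_lag."""
    probs = {}
    for h in range(1, max_lag + 1):
        pairs = Counter(
            (sequence[i], sequence[i + h]) for i in range(len(sequence) - h)
        )
        from_j = sum(c for (a, _), c in pairs.items() if a == j)
        probs[h] = pairs[(j, k)] / from_j if from_j else float("nan")
    return probs

soils = list("AABBBAABBBBAAABB")  # toy categorical profile along a transect
print(transiogram(soils, "A", "B", 4))
```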
https://en.wikipedia.org/wiki/Markov_chain_geostatistics
In linguistic typology, active–stative alignment (also split intransitive alignment or semantic alignment) is a type of morphosyntactic alignment in which the sole argument ("subject") of an intransitive clause (often symbolized as S) is sometimes marked in the same way as an agent of a transitive verb (that is, like a subject such as "I" or "she" in English) but other times in the same way as a direct object (such as "me" or "her" in English). Languages with active–stative alignment are often called active languages. The case or agreement of the intransitive argument (S) depends on semantic or lexical criteria particular to each language. The criteria tend to be based on the degree of volition, or control over the verbal action exercised by the participant. For example, if one tripped and fell, an active–stative language might require one to say the equivalent of "fell me." To say "I fell" would mean that the person had done it on purpose, such as taking a fall in boxing. Another possibility is empathy; for example, if someone's dog were run over by a car, one might say the equivalent of "died her." To say "she died" would imply that the person was not affected emotionally.

If the core arguments of a transitive clause are termed A (agent of a transitive verb) and P (patient of a transitive verb), active–stative languages can be described as languages that align the intransitive S either as S = P/O ("fell me") or as S = A ("I fell"), depending on the criteria described above. Active–stative languages contrast with accusative languages such as English, which generally align S as S = A, and with ergative languages, which generally align S as S = P/O. From this we can deduce that there are two types of S in active languages. On the other hand, in ergative languages some types of O/P can behave as O/P = A, and in this respect there are also two types of O in ergative languages. Active languages can be said to be a phenomenon at the intersection of these complex issues.

For most such languages, the case of the intransitive argument is lexically fixed for each verb, regardless of the actual degree of volition of the subject, but often corresponding to the most typical situation. For example, the argument of swim may always be treated like the transitive subject (agent-like), and the argument of sleep like the transitive direct object (patient-like). In Dakota, arguments of active verbs such as to run are marked like transitive agents, as in accusative languages, and arguments of inactive verbs such as to stand are marked like transitive objects, as in ergative languages. In such a language, if the subject of a verb like run or swallow is defined as agentive, it will always be marked so, even if the action of swallowing is involuntary. This subtype is sometimes known as split-S. In other languages, the marking of the intransitive argument is decided by the speaker, based on semantic considerations. For any given intransitive verb, the speaker may choose whether to mark the argument as agentive or patientive. In some of these languages, agentive marking encodes a degree of volition or control over the action, with the patientive used as the default case; in others, patientive marking encodes a lack of volition or control, suffering from or being otherwise affected by the action, or sympathy on the part of the speaker, with the agentive used as the default case. These two subtypes (patientive-default and agentive-default) are sometimes known as fluid-S.
If the language has morphological case, the arguments of a transitive verb are marked by using the agentive case for the subject and the patientive case for the object. The argument of an intransitive verb may be marked as either.[1] Languages lacking case inflections may indicate case by different word orders, verb agreement, using adpositions, etc. For example, the patientive argument might precede the verb, and the agentive argument might follow the verb. Cross-linguistically, the agentive argument tends to be marked, and the patientive argument tends to be unmarked. That is, if one case is indicated by zero inflection, it is often the patientive. Additionally, active languages differ from ergative languages in how split case marking intersects with Silverstein's (1976) nominal hierarchy: specifically, ergative languages with split case marking are more likely to use ergative rather than accusative marking for NPs lower down the hierarchy (to the right), whereas active languages are more likely to use active marking for NPs higher up the hierarchy (to the left), like first and second person pronouns.[2] Dixon states that "In active languages, if active marking applies to an NP type a, it applies to every NP type to the left of a on the nominal hierarchy."

Active languages are a relatively new field of study. Active morphosyntactic alignment was long not recognized as such, and was treated mostly as an interesting deviation from the standard alternatives (nominative–accusative and ergative–absolutive). Also, active languages are few and often show complications and special cases ("pure" active alignment is an ideal).[3] Thus, the terminology used is rather flexible. The morphosyntactic alignment of active languages is also termed active–stative alignment or semantic alignment. The terms agentive case and patientive case used above are sometimes replaced by the terms active and inactive.

According to Castro Alves (2010), a split-S alignment can be safely reconstructed for Proto-Northern Jê finite clauses. Clauses headed by a non-finite verb, on the contrary, would have been aligned ergatively in this reconstructed language. The reconstructed Pre-Proto-Indo-European language,[7] not to be confused with the Proto-Indo-European language, its direct descendant, shows many features known to correlate with active alignment, like the animate vs. inanimate distinction, related to the distinction between active and inactive or stative verb arguments. Even in its descendant languages, there are traces of a morphological split between volitional and nonvolitional verbs, such as a pattern in verbs of perception and cognition where the argument takes an oblique case (called quirky subject), a relic of which can be seen in Middle English methinks or in the distinction between see vs. look or hear vs. listen. Other possible relics of such a structure in descendant languages of Indo-European include the conceptualization of possession and extensive use of particles.
https://en.wikipedia.org/wiki/Split_intransitivity
An academic mobility network is an informal association of universities and government programs that encourages the international exchange of higher education students (academic mobility).[1][2] Students choosing to study abroad (international students) aim to improve their own social and economic status by choosing to study in a nation with a better system of education than their own. This creates movement of students, usually from South to North and from East to West.[3] It is predicted that citizens of Asian nations, particularly India and China, will represent an increasing portion of the global international student population.[4] The total number of students enrolled in tertiary education abroad (international students) increased from 1.3 million in 1990, to 2 million in 2000, to more than 3 million in 2010, and to 4.3 million in 2011.[5][6] The financial crisis of 2007–2008 did not decrease these figures.[5]

The formation of academic mobility networks can be explained by changes in systems of education. The governments of some countries allocated funds to improve tertiary education for international students. For some countries, the presence of international students represents an indicator of the quality of their education system. International students contribute to the economy of their chosen country of study. In 2011, OECD countries were hosting seventy percent of international students. Within the OECD, almost half of international students were enrolled in one of the top five destinations for tertiary studies: the United States (17 percent), the United Kingdom (13 percent), Australia (6 percent), Germany (6 percent) and France (6 percent). International students prefer to study in English-speaking countries. Popular fields of study are the social sciences, business and law; thirty percent of international students studied in these fields in 2011.[5]

Academic mobility networks aim to assist students by providing cultural and social diversity, encouraging adaptability and independent thinking, allowing them to improve their knowledge of a foreign language and expand their professional network. By bringing in international students, the network can provide educational institutions with a source of revenue and contribute to the nation's economy. For example, in Canada, international student expenditure on tuition, accommodation and living expenses contributed more than 8 billion Canadian dollars (CAD) to the economy in 2010.[5] International students also have a long-term economic effect: their stay after graduation increases the domestic skilled labor market. In the 2008–2009 year, the rate of staying in OECD countries was 25 percent; in Australia, Canada, the Czech Republic, and France, the rate was greater than 30 percent.[5] In 2005, 27 percent of international students from a European Union member state were employed in the UK six months after graduation. In Norway, 18 percent of students from outside the European Economic Area (EEA) who were studying between 1991 and 2005 stayed in the country; the corresponding number for EEA students was eight percent.[7]

In the United States, educational exchange programs are generally managed by the Bureau of Educational and Cultural Affairs. Education in the United States consists of thousands of colleges and universities; diversity in schools and subjects provides choice to international students.[8] After the terrorist attacks of September 2001, international student enrolment in the United States declined for the first time in 30 years.
It became more difficult to obtain visas, other countries competed for international student enrolments, and anti-American sentiment increased.[7][9] The Bologna Process is a European initiative to promote international student mobility. Quality is a core element of the European Higher Education Area, with an emphasis on multilingual skills. The Erasmus Programme has supported European student exchanges since 1987. In that year, around 3,000 students received grants to study for a period of 6 to 12 months at a host university in another of the twelve European member states. In 2012, the budget for the Erasmus Programme was 129.1 billion euros.[10]
https://en.wikipedia.org/wiki/Academic_mobility_network
In mathematics, and in particular, algebra, a generalized inverse (or g-inverse) of an element $x$ is an element $y$ that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix $A$. A matrix $A^{\mathrm{g}} \in \mathbb{R}^{n\times m}$ is a generalized inverse of a matrix $A \in \mathbb{R}^{m\times n}$ if $AA^{\mathrm{g}}A = A$.[1][2][3] A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.[1] Consider the linear system $Ax = y$, where $A$ is an $m\times n$ matrix and $y \in \mathcal{C}(A)$, the column space of $A$. If $m = n$ and $A$ is nonsingular, then $x = A^{-1}y$ will be the solution of the system. Note that, if $A$ is nonsingular, then $AA^{-1}A = A$. Now suppose $A$ is rectangular ($m \neq n$), or square and singular. Then we need a right candidate $G$ of order $n\times m$ such that for all $y \in \mathcal{C}(A)$, $AGy = y$. That is, $x = Gy$ is a solution of the linear system $Ax = y$. Equivalently, we need a matrix $G$ of order $n\times m$ such that $AGA = A$. Hence we can define the generalized inverse as follows: given an $m\times n$ matrix $A$, an $n\times m$ matrix $G$ is said to be a generalized inverse of $A$ if $AGA = A$.[1][2][3] The matrix $A^{-1}$ has been termed a regular inverse of $A$ by some authors.[5] Important types of generalized inverse include one-sided (left and right) inverses, the Drazin inverse, and the Moore–Penrose inverse. Some generalized inverses are defined and classified based on the Penrose conditions: (1) $AA^{\mathrm{g}}A = A$; (2) $A^{\mathrm{g}}AA^{\mathrm{g}} = A^{\mathrm{g}}$; (3) $(AA^{\mathrm{g}})^{*} = AA^{\mathrm{g}}$; (4) $(A^{\mathrm{g}}A)^{*} = A^{\mathrm{g}}A$, where $^{*}$ denotes conjugate transpose. If $A^{\mathrm{g}}$ satisfies the first condition, then it is a generalized inverse of $A$. If it satisfies the first two conditions, then it is a reflexive generalized inverse of $A$. If it satisfies all four conditions, then it is the pseudoinverse of $A$, which is denoted by $A^{+}$ and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose.[2][7][8][9][10][11] It is convenient to define an $I$-inverse of $A$ as an inverse that satisfies the subset $I \subset \{1,2,3,4\}$ of the Penrose conditions listed above. Relations, such as $A^{(1,4)}AA^{(1,3)} = A^{+}$, can be established between these different classes of $I$-inverses.[1] When $A$ is non-singular, any generalized inverse $A^{\mathrm{g}} = A^{-1}$, and is therefore unique. For a singular $A$, some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined. As an example, let $A$ be a square matrix with $\det(A) = 0$, so that $A$ is singular and has no regular inverse, and let $G$ be a candidate inverse of $A$.
However, $A$ and $G$ may satisfy Penrose conditions (1) and (2), but not (3) or (4); in that case, $G$ is a reflexive generalized inverse of $A$. Next, let $A$ be a rectangular matrix. Since $A$ is not square, $A$ has no regular inverse. However, a matrix $A_{\mathrm{R}}^{-1}$ satisfying $AA_{\mathrm{R}}^{-1} = I$ is a right inverse of $A$; such a matrix $A$ may have no left inverse. The element $b$ is a generalized inverse of an element $a$ if and only if $a \cdot b \cdot a = a$, in any semigroup (or ring, since the multiplication function in any ring is a semigroup). The generalized inverses of the element 3 in the ring $\mathbb{Z}/12\mathbb{Z}$ are 3, 7, and 11, since in $\mathbb{Z}/12\mathbb{Z}$ we have $3 \cdot 3 \cdot 3 = 3$, $3 \cdot 7 \cdot 3 = 3$, and $3 \cdot 11 \cdot 3 = 3$. The generalized inverses of the element 4 in the ring $\mathbb{Z}/12\mathbb{Z}$ are 1, 4, 7, and 10, since $4 \cdot 1 \cdot 4 = 4$, $4 \cdot 4 \cdot 4 = 4$, $4 \cdot 7 \cdot 4 = 4$, and $4 \cdot 10 \cdot 4 = 4$. If an element $a$ in a semigroup (or ring) has an inverse, the inverse must be the only generalized inverse of this element, as with the elements 1, 5, 7, and 11 in the ring $\mathbb{Z}/12\mathbb{Z}$. In the ring $\mathbb{Z}/12\mathbb{Z}$, any element is a generalized inverse of 0; however, 2 has no generalized inverse, since there is no $b$ in $\mathbb{Z}/12\mathbb{Z}$ such that $2 \cdot b \cdot 2 = 2$. A number of characterizations of generalized inverses are easy to verify. Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the $n \times m$ linear system $Ax = b$, with vector $x$ of unknowns and vector $b$ of constants, all solutions are given by $x = A^{\mathrm{g}}b + [I - A^{\mathrm{g}}A]w$, parametric on the arbitrary vector $w$, where $A^{\mathrm{g}}$ is any generalized inverse of $A$. Solutions exist if and only if $A^{\mathrm{g}}b$ is a solution, that is, if and only if $AA^{\mathrm{g}}b = b$. If $A$ has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.[12] The generalized inverses of matrices can be characterized as follows. Let $A \in \mathbb{R}^{m\times n}$, and let $A = U \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^{\operatorname{T}}$ be its singular-value decomposition. Then for any generalized inverse $A^{g}$, there exist[1] matrices $X$, $Y$, and $Z$ such that $A^{g} = V \begin{bmatrix} \Sigma_1^{-1} & X \\ Y & Z \end{bmatrix} U^{\operatorname{T}}$. Conversely, any choice of $X$, $Y$, and $Z$ for a matrix of this form is a generalized inverse of $A$.[1] The $\{1,2\}$-inverses are exactly those for which $Z = Y\Sigma_1 X$, the $\{1,3\}$-inverses are exactly those for which $X = 0$, and the $\{1,4\}$-inverses are exactly those for which $Y = 0$. In particular, the pseudoinverse is given by $X = Y = Z = 0$: $A^{+} = V \begin{bmatrix} \Sigma_1^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^{\operatorname{T}}$. In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse.
For example, the Moore–Penrose inverse, $A^{+}$, satisfies the following definition of consistency with respect to transformations involving unitary matrices $U$ and $V$: $(UAV)^{+} = V^{*}A^{+}U^{*}$. The Drazin inverse, $A^{\mathrm{D}}$, satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix $S$: $(SAS^{-1})^{\mathrm{D}} = SA^{\mathrm{D}}S^{-1}$. The unit-consistent (UC) inverse,[13] $A^{\mathrm{U}}$, satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices $D$ and $E$: $(DAE)^{\mathrm{U}} = E^{-1}A^{\mathrm{U}}D^{-1}$. The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
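The SVD characterization above is easy to exercise numerically. The following is a minimal numpy sketch (the matrix entries and the nonzero blocks X, Y, Z are arbitrary illustrative choices, not taken from the article's own examples): it builds a generalized inverse from the SVD, checks Penrose condition (1), and confirms that the X = Y = Z = 0 choice reproduces the Moore–Penrose pseudoinverse.

```python
# Sketch: build A^g = V [[Sigma_1^-1, X], [Y, Z]] U^T from the SVD of A and
# verify A A^g A = A for arbitrary blocks X, Y, Z (illustrative values below).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                 # rank 1, 3x2: no regular inverse
U, s, Vt = np.linalg.svd(A)                # A = U diag(s) V^T
r = int(np.sum(s > 1e-12))                 # numerical rank

B = np.zeros((A.shape[1], A.shape[0]))     # n x m block matrix
B[:r, :r] = np.diag(1.0 / s[:r])           # Sigma_1^{-1} block
B[:r, r:] = 0.5                            # X: arbitrary
B[r:, :r] = -2.0                           # Y: arbitrary
B[r:, r:] = 7.0                            # Z: arbitrary

G = Vt.T @ B @ U.T                         # a generalized inverse of A
assert np.allclose(A @ G @ A, A)           # Penrose condition (1) holds

B[:r, r:] = B[r:, :r] = B[r:, r:] = 0.0    # setting X = Y = Z = 0 ...
assert np.allclose(Vt.T @ B @ U.T, np.linalg.pinv(A))   # ... gives A^+
```

Any other values for the X, Y, Z blocks would pass the first assertion as well, which is exactly the converse direction of the characterization.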
https://en.wikipedia.org/wiki/Generalized_inverse
Taher Elgamal[a] (Arabic: طاهر الجمل) (born 18 August 1955) is an Egyptian-American cryptographer and tech executive.[1] Since January 2023, he has been a partner at the venture capital firm Evolution Equity Partners.[2] Prior to that, he was the founder and CEO of Securify and the director of engineering at RSA Security. From 1995 to 1998, he was the chief scientist at Netscape Communications. From 2013 to 2023, he served as the Chief Technology Officer (CTO) of Security at Salesforce.[3][4] Elgamal's 1985 paper entitled "A Public Key Cryptosystem and A Signature Scheme Based on Discrete Logarithms" proposed the design of the ElGamal discrete log cryptosystem and of the ElGamal signature scheme.[5] The latter scheme became the basis for the Digital Signature Algorithm (DSA), adopted by the National Institute of Standards and Technology (NIST) as the Digital Signature Standard (DSS). His development of the Secure Sockets Layer (SSL) cryptographic protocol at Netscape in the 1990s was also the basis for the Transport Layer Security (TLS) and HTTPS Internet protocols.[6][7][8] According to an article on Medium,[9] Elgamal's first love was mathematics. Although he came to the United States to pursue a PhD in Electrical Engineering at Stanford University, he said that "cryptography was the most beautiful use of math he'd ever seen". Elgamal earned a BSc from Cairo University in 1977, and MS and PhD degrees in Electrical Engineering from Stanford University in 1981 and 1984, respectively. Martin Hellman was his dissertation advisor.[10] Elgamal joined the technical staff at HP Labs in 1984. He served as chief scientist at Netscape Communications from 1995 to 1998,[11] where he was a driving force behind Secure Sockets Layer.[12] Network World described him as the "father of SSL."[6] SSL was the basis for the Transport Layer Security (TLS)[7] and HTTPS[8][13] Internet protocols. He also was the director of engineering at RSA Security Inc.[14] before founding Securify in 1998 and becoming its chief executive officer. According to an interview with Elgamal,[15] when Securify was acquired by Kroll-O'Gara,[16] he became the president of its information security group. After helping Securify spin out from Kroll-O'Gara,[17] Taher served as the company's chief technology officer (CTO) from 2001 to 2004.[18] In late 2008, Securify was acquired by Secure Computing[19] and is now part of McAfee.[20] In October 2006, he joined Tumbleweed Communications as CTO.[21] Tumbleweed was acquired in 2008 by Axway Inc. Until 2023, Elgamal was CTO for security at Salesforce.com.[4][9][22] He now works as a partner at Evolution Equity Partners.[2] Elgamal is a co-founder of NokNok Labs[23] and InfoSec Global.[citation needed] He serves as a director of Vindicia, Inc.,[24] which provides online payment services, Zix Corporation, which provides email encryption services, and Bay Dynamics.[25] He has served as an adviser to Cyphort, Bitglass, Onset Ventures, Glenbrook Partners, PGP Corporation, Arcot Systems, Finjan, Actiance, Symplified, and Zetta. He served as Chief Security Officer of Axway, Inc., and is vice chairman of SecureMisr. Elgamal has also held executive roles at a number of other technology and security companies, and as a scholar he has published four articles.
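The encryption half of the scheme from the 1985 paper fits in a few lines. Below is a toy, hedged sketch of ElGamal encryption over a deliberately tiny prime; all parameters are illustrative assumptions, and real deployments need large primes and a vetted cryptography library.

```python
# Toy ElGamal encryption over a tiny prime field -- insecure demo parameters.
import random

p = 467                          # small prime modulus (assumption: demo only)
g = 2                            # generator of a subgroup of Z_p^*
x = random.randrange(2, p - 1)   # private key
y = pow(g, x, p)                 # public key y = g^x mod p

def encrypt(m: int) -> tuple[int, int]:
    k = random.randrange(2, p - 1)               # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(y, k, p)) % p  # (c1, c2) = (g^k, m * y^k)

def decrypt(c1: int, c2: int) -> int:
    s = pow(c1, x, p)                            # shared secret s = g^{kx}
    return (c2 * pow(s, -1, p)) % p              # m = c2 * s^{-1} mod p

m = 123
assert decrypt(*encrypt(m)) == m                 # round-trips for any m < p
```

The signature scheme in the same paper uses the same discrete-log trapdoor but a different equation, which is why DSA (its descendant) shares this parameter structure.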
https://en.wikipedia.org/wiki/Taher_Elgamal
Advanced Vector Extensions(AVX, also known asGesher New Instructionsand thenSandy Bridge New Instructions) areSIMDextensions to thex86instruction set architectureformicroprocessorsfromIntelandAdvanced Micro Devices(AMD). They were proposed by Intel in March 2008 and first supported by Intel with theSandy Bridge[1]microarchitecture shipping in Q1 2011 and later by AMD with theBulldozer[2]microarchitecture shipping in Q4 2011. AVX provides new features, new instructions, and a new coding scheme. AVX2(also known asHaswell New Instructions) expands most integer commands to 256 bits and introduces new instructions. They were first supported by Intel with theHaswellmicroarchitecture, which shipped in 2013. AVX-512expands AVX to 512-bit support using a newEVEX prefixencoding proposed by Intel in July 2013 and first supported by Intel with theKnights Landingco-processor, which shipped in 2016.[3][4]In conventional processors, AVX-512 was introduced withSkylakeserver and HEDT processors in 2017. AVX uses sixteen YMM registers to perform a single instruction on multiple pieces of data (seeSIMD). Each YMM register can hold and do simultaneous operations (math) on: The width of the SIMD registers is increased from 128 bits to 256 bits, and renamed from XMM0–XMM7 to YMM0–YMM7 (inx86-64mode, from XMM0–XMM15 to YMM0–YMM15). The legacySSEinstructions can still be utilized via theVEX prefixto operate on the lower 128 bits of the YMM registers. AVX introduces a three-operand SIMD instruction format calledVEX coding scheme, where the destination register is distinct from the two source operands. For example, anSSEinstruction using the conventional two-operand forma←a+bcan now use a non-destructive three-operand formc←a+b, preserving both source operands. Originally, AVX's three-operand format was limited to the instructions with SIMD operands (YMM), and did not include instructions with general purpose registers (e.g. EAX). It was later used for coding new instructions on general purpose registers in later extensions, such asBMI. VEX coding is also used for instructions operating on the k0-k7 mask registers that were introduced withAVX-512. Thealignmentrequirement of SIMD memory operands is relaxed.[5]Unlike their non-VEX coded counterparts, most VEX coded vector instructions no longer require their memory operands to be aligned to the vector size. Notably, theVMOVDQAinstruction still requires its memory operand to be aligned. The newVEX coding schemeintroduces a new set of code prefixes that extends theopcodespace, allows instructions to have more than two operands, and allows SIMD vector registers to be longer than 128 bits. The VEX prefix can also be used on the legacy SSE instructions giving them a three-operand form, and making them interact more efficiently with AVX instructions without the need forVZEROUPPERandVZEROALL. The AVX instructions support both 128-bit and 256-bit SIMD. The 128-bit versions can be useful to improve old code without needing to widen the vectorization, and avoid the penalty of going from SSE to AVX, they are also faster on some early AMD implementations of AVX. This mode is sometimes known as AVX-128.[6] These AVX instructions are in addition to the ones that are 256-bit extensions of the legacy 128-bit SSE instructions; most are usable on both 128-bit and 256-bit operands. Issues regarding compatibility between future Intel and AMD processors are discussed underXOP instruction set. 
AVX adds new register-state through the 256-bit wide YMM register file, so explicitoperating systemsupport is required to properly save and restore AVX's expanded registers betweencontext switches. The following operating system versions support AVX: Advanced Vector Extensions 2 (AVX2), also known asHaswell New Instructions,[24]is an expansion of the AVX instruction set introduced in Intel'sHaswell microarchitecture. AVX2 makes the following additions: Sometimes three-operandfused multiply-accumulate(FMA3) extension is considered part of AVX2, as it was introduced by Intel in the same processor microarchitecture. This is a separate extension using its ownCPUIDflag and is described onits own pageand not below. AVX-512are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for x86 instruction set architecture proposed byIntelin July 2013.[3] AVX-512 instructions are encoded with the newEVEX prefix. It allows 4 operands, 8 new 64-bitopmask registers, scalar memory mode with automatic broadcast, explicit rounding control, and compressed displacement memoryaddressing mode. The width of the register file is increased to 512 bits and total register count increased to 32 (registers ZMM0-ZMM31) in x86-64 mode. AVX-512 consists of multiple instruction subsets, not all of which are meant to be supported by all processors implementing them. The instruction set consists of the following: Only the core extension AVX-512F (AVX-512 Foundation) is required by all implementations, though all current implementations also support CD (conflict detection). All central processors with AVX-512 also support VL, DQ and BW. The ER, PF, 4VNNIW and 4FMAPS instruction set extensions are currently only implemented in Intel computing coprocessors. The updated SSE/AVX instructions in AVX-512F use the same mnemonics as AVX versions; they can operate on 512-bit ZMM registers, and will also support 128/256 bit XMM/YMM registers (with AVX-512VL) and byte, word, doubleword and quadword integer operands (with AVX-512BW/DQ and VBMI).[26]: 23 [28] ^Note 1: Intel does not officially support AVX-512 family of instructions on theAlder Lakemicroprocessors. In early 2022, Intel began disabling in silicon (fusing off) AVX-512 in Alder Lake microprocessors to prevent customers from enabling AVX-512.[29]In older Alder Lake family CPUs with some legacy combinations of BIOS and microcode revisions, it was possible to execute AVX-512 family instructions when disabling all the efficiency cores which do not contain the silicon for AVX-512.[30][31][32] AVX-VNNI is aVEX-coded variant of theAVX512-VNNIinstruction set extension. Similarly, AVX-IFMA is aVEX-coded variant ofAVX512-IFMA. These extensions provide the same sets of operations as their AVX-512 counterparts, but are limited to 256-bit vectors and do not support any additional features ofEVEXencoding, such as broadcasting, opmask registers or accessing more than 16 vector registers. These extensions allow support of VNNI and IFMA operations even when fullAVX-512support is not implemented in the processor. AVX10, announced in July 2023,[38]is a new, "converged" AVX instruction set. It addresses several issues of AVX-512, in particular that it is split into too many parts[39](20 feature flags). 
The initial technical paper also made 512-bit vectors optional to support, but as of revision 3.0 vector length enumeration is removed and 512-bit vectors are mandatory.[40] AVX10 presents a simplified CPUID interface to test for instruction support, consisting of the AVX10 version number (indicating the set of instructions supported, with later versions always being a superset of an earlier one).[41]For example, AVX10.2 indicates that a CPU is capable of the second version of AVX10.[42]Initial revisions of the AVX10 technical specifications also included maximum supported vector length as part of the ISA extension name, e.g. AVX10.2/256 would mean a second version of AVX10 with vector length up to 256 bits, but later revisions made that unnecessary. The first version of AVX10, notated AVX10.1, doesnotintroduce any instructions or encoding features beyond what is already in AVX-512 (specifically, in IntelSapphire Rapids: AVX-512F, CD, VL, DQ, BW, IFMA, VBMI, VBMI2, BITALG, VNNI, GFNI, VPOPCNTDQ, VPCLMULQDQ, VAES, BF16, FP16). For CPUs supporting AVX10 and 512-bit vectors, all legacy AVX-512 feature flags will remain set to facilitate applications supporting AVX-512 to continue using AVX-512 instructions.[42] AVX10.1 was first released in IntelGranite Rapids[42](Q3 2024) and AVX10.2 will be available inDiamond Rapids.[43] APX is a new extension. It is not focused on vector computation, but provides RISC-like extensions to the x86-64 architecture by doubling the number of general-purpose registers to 32 and introducing three-operand instruction formats. AVX is only tangentially affected as APX introduces extended operands.[44][45] Since AVX instructions are wider, they consume more power and generate more heat. Executing heavy AVX instructions at high CPU clock frequencies may affect CPU stability due to excessivevoltage droopduring load transients. Some Intel processors have provisions to reduce theTurbo Boostfrequency limit when such instructions are being executed. This reduction happens even if the CPU hasn't reached its thermal and power consumption limits. OnSkylakeand its derivatives, the throttling is divided into three levels:[66][67] The frequency transition can be soft or hard. Hard transition means the frequency is reduced as soon as such an instruction is spotted; soft transition means that the frequency is reduced only after reaching a threshold number of matching instructions. The limit is per-thread.[66] InIce Lake, only two levels persist:[68] Rocket Lakeprocessors do not trigger frequency reduction upon executing any kind of vector instructions regardless of the vector size.[68]However, downclocking can still happen due to other reasons, such as reaching thermal and power limits. Downclocking means that using AVX in a mixed workload with an Intel processor can incur a frequency penalty. Avoiding the use of wide and heavy instructions help minimize the impact in these cases. AVX-512VL allows for using 256-bit or 128-bit operands in AVX-512 instructions, making it a sensible default for mixed loads.[69] On supported and unlocked variants of processors that down-clock, the clock ratio reduction offsets (typically called AVX and AVX-512 offsets) are adjustable and may be turned off entirely (set to 0x) via Intel's Overclocking / Tuning utility or in BIOS if supported there.[70]
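Whether a given machine exposes these extensions can be checked from user space without writing intrinsics. The sketch below is Linux-only (it assumes /proc/cpuinfo is available; the flag strings listed are the kernel's usual spellings for AVX, AVX2 and a few AVX-512 subsets):

```python
# Sketch: report which AVX-family feature flags the Linux kernel exposes
# for the local CPU (assumption: /proc/cpuinfo exists, as on Linux).
FLAGS = ("avx", "avx2", "avx512f", "avx512vl", "avx512bw", "avx512dq")

cpu_flags: set[str] = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):              # first logical CPU suffices
            cpu_flags = set(line.split(":", 1)[1].split())
            break

for flag in FLAGS:
    print(f"{flag:>9}: {'present' if flag in cpu_flags else 'absent'}")
```

On an AVX10 CPU the legacy AVX-512 flags remain set, as described above, so a check like this continues to work for applications that only know about the older feature names.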
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions
In grammar and theoretical linguistics, government or rection refers to the relationship between a word and its dependents. One can distinguish at least three concepts of government: the traditional notion of case government, the highly specialized definition of government in some generative models of syntax, and a much broader notion in dependency grammars. In traditional Latin and Greek (and other) grammars, government is the control by verbs and prepositions of the selection of grammatical features of other words. Most commonly, a verb or preposition is said to "govern" a specific grammatical case if its complement must take that case in a grammatically correct structure (see: case government).[1] For example, in Latin, most transitive verbs require their direct object to appear in the accusative case, while the dative case is reserved for indirect objects. Thus, the phrase I see you would be rendered as Te video in Latin, using the accusative form te for the second person pronoun, and I give a present to you would be rendered as Tibi donum do, using both an accusative (donum) for the direct object and a dative (tibi, the dative of the second person pronoun) for the indirect object; the phrase I help you, however, would be rendered as Tibi faveo, using only the dative form tibi. The verb favere (to help), like many others, is an exception to this default government pattern: its one and only object must be in the dative. Although no direct object in the accusative is controlled by the specific verb, this object is traditionally considered to be an indirect one, mainly because passivization is unavailable except perhaps in an impersonal manner and for certain verbs of this type. A semantic alternation may also be achieved when different case constructions are available with a verb: Id credo (id is an accusative) means I believe this, I have this opinion, and Ei credo (ei is a dative) means I trust this, I confide in this. Prepositions (and postpositions and circumpositions, i.e. adpositions) are like verbs in their ability to govern the case of their complement, and like many verbs, many adpositions can govern more than one case, with distinct interpretations. For example, in Italy would be in Italia, Italia being an ablative case form, but towards Italy would be in Italiam, Italiam being an accusative case form. The abstract syntactic relation of government in government and binding theory, a phrase structure grammar, is an extension of the traditional notion of case government.[2] Verbs govern their objects, and more generally, heads govern their dependents. A governs B if and only if certain structural conditions hold.[3] This definition is explained in more detail in the government section of the article on government and binding theory. One sometimes encounters definitions of government that are much broader than the one just produced. Government is understood as the property that regulates which words can or must appear with the referenced word.[4] This broader understanding of government is part of many dependency grammars. The notion is that many individual words in a given sentence can appear only by virtue of the fact that some other word appears in that sentence. According to this definition, government occurs between any two words connected by a dependency, the dominant word opening slots for subordinate words. The dominant word is the governor, and the subordinates are its governees. A dependency tree illustrates governors and governees: the word has governs Fred and ordered; in other words, has is governor over its governees Fred and ordered.
Similarly, ordered governs dish and for; that is, ordered is governor over its governees dish and for, and so on. This understanding of government is widespread among dependency grammars.[5] The distinction between the terms governor and head is a source of confusion, given the definitions of government produced above. Indeed, governor and head are overlapping concepts. The governor and the head of a given word will often be one and the same other word. The understanding of these concepts becomes difficult, however, when discontinuities are involved. The following example of a w-fronting discontinuity from German illustrates the difficulty:

Wem denkst du haben sie geholfen?
who-DAT think you have they helped
'Who do you think they helped?'

Two of the criteria mentioned above for identifying governors (and governees) are applicable to the interrogative pronoun wem 'whom'. This pronoun receives dative case from the verb geholfen 'helped' (= case government), and it can appear by virtue of the fact that geholfen appears (= licensing). Given these observations, one can make a strong argument that geholfen is the governor of wem, even though the two words are separated from each other by the rest of the sentence. In such constellations, one sometimes distinguishes between head and governor.[6] So while the governor of wem is geholfen, the head of wem is taken to be the finite verb denkst 'think'. In other words, when a discontinuity occurs, one assumes that the governor and the head (of the relevant word) are distinct; otherwise they are the same word. Exactly how the terms head and governor are used can depend on the particular theory of syntax that is employed.
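Since dependency-grammar government is essentially a word-to-governor mapping, it is easy to represent directly. A minimal sketch follows, using only the has/Fred/ordered/dish/for relations quoted above (the full sentence wording is not given in the text, so the word list here is just the cited fragment):

```python
# Sketch: the government relations described above, stored as a map from
# each word to its governor; governees(w) inverts the map.
governors = {
    "Fred":    "has",       # 'has' governs 'Fred' ...
    "ordered": "has",       # ... and 'ordered'
    "dish":    "ordered",   # 'ordered' governs 'dish' ...
    "for":     "ordered",   # ... and 'for'
}

def governees(word: str) -> list[str]:
    """Return the words licensed by (dependent on) the given governor."""
    return [dep for dep, gov in governors.items() if gov == word]

print(governees("has"))       # ['Fred', 'ordered']
print(governees("ordered"))   # ['dish', 'for']
```

A head/governor split of the kind discussed for the German example would require two such maps, one per relation, which is how some dependency parsers store discontinuous structures.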
https://en.wikipedia.org/wiki/Government_(linguistics)
A ping of death is a type of attack on a computer system that involves sending a malformed or otherwise malicious ping to a computer.[1] In this attack, a host sends hundreds of ping requests with a packet size that is large or illegal to another host to try to take it offline or to keep it preoccupied responding with ICMP Echo replies.[2] A correctly formed ping packet is typically 56 bytes in size, or 64 bytes when the Internet Control Message Protocol (ICMP) header is considered, and 84 bytes including the Internet Protocol (IP) version 4 header. However, any IPv4 packet (including pings) may be as large as 65,535 bytes. Some computer systems were never designed to properly handle a ping packet larger than the maximum packet size because it violates the Internet Protocol.[3][4] Like other large but well-formed packets, a ping of death is fragmented into groups of 8 octets before transmission. However, when the target computer reassembles the malformed packet, a buffer overflow can occur, causing a system crash and potentially allowing the injection of malicious code. The excessive byte size prevents the machine from processing it effectively, disrupting operating system processes and leading to reboots or crashes.[5] In early implementations of TCP/IP, this bug was easy to exploit and could affect a wide variety of systems including Unix, Linux, Mac, Windows, and peripheral devices. As systems began filtering out pings of death through firewalls and other detection methods, a different kind of ping attack known as ping flooding later appeared, which floods the victim with so many ping requests that normal traffic fails to reach the system (a basic denial-of-service attack). The ping of death attack has been largely neutralized by advancements in technology. Devices produced after 1998 include defenses against such attacks,[specify] rendering them resilient to this specific threat. However, in a notable development, a variant targeting IPv6 packets on Windows systems was identified, leading Microsoft to release a patch in mid-2013.[6] The maximum length of an IPv4 packet including the IP header is 65,535 (2^16 − 1) bytes,[3] a limitation imposed by the 16-bit-wide IP header field that describes the total packet length. The underlying data link layer almost always poses limits to the maximum frame size (see MTU). In Ethernet, this is typically 1500 bytes. In such a case, a large IP packet is split across multiple IP packets (also known as IP fragments), so that each IP fragment will match the imposed limit. The receiver of the IP fragments will reassemble them into the complete IP packet and continue processing it as usual. When fragmentation is performed, each IP fragment needs to carry information about which part of the original IP packet it contains. This information is kept in the Fragment Offset field in the IP header. The field is 13 bits long, and contains the offset of the data in the current IP fragment relative to the original IP packet. The offset is given in units of 8 bytes. This allows a maximum offset of 65,528 ((2^13 − 1) × 8). Adding 20 bytes of IP header then gives a maximum of 65,548 bytes, which exceeds the maximum packet length. This means that an IP fragment with the maximum offset should carry no more than 7 bytes of data, or else it would exceed the limit of the maximum packet length. A malicious user can send an IP fragment with the maximum offset and with much more data than 8 bytes (as large as the physical layer allows it to be).
When the receiver assembles all IP fragments, it will end up with an IP packet which is larger than 65,535 bytes. This may overflow the memory buffers which the receiver allocated for the packet, and can cause various problems. As is evident from the description above, the problem has nothing to do with ICMP, which is used only as a payload big enough to exploit the problem. It is a problem in the reassembly process of IP fragments, which may contain any type of protocol (TCP, UDP, IGMP, etc.). The correction of the problem is to add checks in the reassembly process. The check for each incoming IP fragment makes sure that the sum of the "Fragment Offset" and "Total length" fields in the IP header of each IP fragment is less than or equal to 65,535. If the sum is greater, then the packet is invalid, and the IP fragment is ignored. This check is performed by some firewalls, to protect hosts that do not have the bug fixed. Another fix for the problem is using a memory buffer larger than 65,535 bytes for the re-assembly of the packet. (This is essentially a breaking of the specification, since it adds support for packets larger than those allowed.) In 2013, an IPv6 version of the ping of death vulnerability was discovered in Microsoft Windows. The Windows TCP/IP stack did not handle memory allocation correctly when processing incoming malformed ICMPv6 packets, which could cause a remote denial of service. This vulnerability was fixed in MS13-065 in August 2013.[7][8] The CVE-ID for this vulnerability is CVE-2013-3183.[9] In 2020, another bug (CVE-2020-16898) in ICMPv6 was found around Router Advertisement packets, which could even lead to remote code execution.[10]
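The reassembly check described above reduces to one inequality per fragment. A small sketch (the function and field names are descriptive stand-ins for the raw IPv4 header fields, not any particular firewall's API):

```python
# Sketch: the firewall-style sanity check described above -- an IPv4
# fragment is rejected if its data would extend past the 65,535-byte limit.
MAX_IPV4_PACKET = 65_535

def fragment_is_valid(fragment_offset: int, total_length: int) -> bool:
    """fragment_offset: raw 13-bit header value, in units of 8 bytes.
    total_length: the fragment's Total Length header field, in bytes."""
    return fragment_offset * 8 + total_length <= MAX_IPV4_PACKET

# A legitimate final fragment: offset 8189 (65,512 bytes in), 23 more bytes.
print(fragment_is_valid(8189, 23))     # True
# Ping-of-death style fragment: maximum offset plus a full-sized payload.
print(fragment_is_valid(8191, 1500))   # False -> reassembly would overflow
```

The second call is exactly the attack scenario: 8191 × 8 = 65,528 bytes of offset plus a 1500-byte fragment yields a reassembled packet far beyond the 16-bit length limit.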
https://en.wikipedia.org/wiki/Ping_of_death
A Google matrix is a particular stochastic matrix that is used by Google's PageRank algorithm. The matrix represents a graph with edges representing links between pages. The PageRank of each page can then be generated iteratively from the Google matrix using the power method. However, in order for the power method to converge, the matrix must be stochastic, irreducible and aperiodic. In order to generate the Google matrix $G$, we must first generate an adjacency matrix $A$ which represents the relations between pages or nodes. Assuming there are $N$ pages, we can fill out $A$ by setting $A_{ij} = 1$ if page $j$ links to page $i$ and $A_{ij} = 0$ otherwise; the stochastic matrix $S$ is then obtained by normalizing each nonzero column of $A$ to sum to one and replacing each zero column (a dangling node) with a uniform column with elements $1/N$ (Eq. 1). The final Google matrix $G$ can then be expressed via $S$ as $G = \alpha S + (1-\alpha)\frac{1}{N}E$, where $E$ is the matrix with all elements equal to one. By the construction, the sum of all non-negative elements inside each matrix column is equal to unity. The numerical coefficient $\alpha$ is known as a damping factor. Usually $S$ is a sparse matrix, and for modern directed networks it has only about ten nonzero elements per line or column; thus only about $10N$ multiplications are needed to multiply a vector by the matrix $G$.[2][3] An example of the matrix $S$ construction via Eq. (1) within a simple network is given in the article CheiRank. For the actual matrix, Google uses a damping factor $\alpha$ around 0.85.[2][3][4] The term $(1-\alpha)$ gives a surfer the probability of jumping randomly to any page. The matrix $G$ belongs to the class of Perron–Frobenius operators of Markov chains.[2] Examples of Google matrix structure are shown in Fig. 1 for the Wikipedia articles hyperlink network in 2009 at small scale and in Fig. 2 for the University of Cambridge network in 2006 at large scale. For $0 < \alpha < 1$ there is only one maximal eigenvalue $\lambda = 1$, with the corresponding right eigenvector having non-negative elements $P_i$, which can be viewed as a stationary probability distribution.[2] These probabilities, ordered by decreasing value, give the PageRank vector $P_i$ with PageRank index $K_i$ used by Google search to rank webpages. Usually one has for the World Wide Web that $P \propto 1/K^{\beta}$ with $\beta \approx 0.9$. The number of nodes with a given PageRank value scales as $N_P \propto 1/P^{\nu}$ with the exponent $\nu = 1 + 1/\beta \approx 2.1$.[6][7] The left eigenvector at $\lambda = 1$ has constant matrix elements. For $0 < \alpha$ all eigenvalues move as $\lambda_i \rightarrow \alpha \lambda_i$, except the maximal eigenvalue $\lambda = 1$, which remains unchanged.[2] The PageRank vector varies with $\alpha$, but other eigenvectors with $\lambda_i < 1$ remain unchanged due to their orthogonality to the constant left vector at $\lambda = 1$. The gap between $\lambda = 1$ and the other eigenvalues, being $1 - \alpha \approx 0.15$, gives rapid convergence of a random initial vector to the PageRank after approximately 50 multiplications by the matrix $G$. At $\alpha = 1$ the matrix $G$ generally has many degenerate eigenvalues $\lambda = 1$ (see e.g.[6][8]). Examples of the eigenvalue spectrum of the Google matrix of various directed networks are shown in Fig. 3 from[5] and Fig. 4 from.[8] The Google matrix can also be constructed for the Ulam networks generated by the Ulam method[8] for dynamical maps.
The spectral properties of such matrices have been studied extensively.[5][9] In a number of cases the spectrum is described by the fractal Weyl law. The Google matrix can also be constructed for other directed networks, e.g. for the procedure call network of the Linux kernel software. In this case the spectrum of $\lambda$ is described by the fractal Weyl law with the fractal dimension $d \approx 1.3$ (see Fig. 5 from[9]). Numerical analysis shows that the eigenstates of the matrix $G$ are localized (see Fig. 6 from[9]). The Arnoldi iteration method allows one to compute many eigenvalues and eigenvectors for matrices of rather large size.[5][9] Other examples of $G$ matrices include the Google matrix of the brain and of business process management; see also.[1] Applications of Google matrix analysis to DNA sequences have also been described. Such a Google matrix approach also allows analysis of the entanglement of cultures via the ranking of multilingual Wikipedia articles about persons. The Google matrix with damping factor was described by Sergey Brin and Larry Page in 1998; see also the articles on PageRank history.
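The construction and the power-method convergence discussed above can be reproduced in a few lines of numpy. This is a toy sketch (the four-node adjacency matrix is an invented example; node 3 is left dangling to exercise the uniform-column rule):

```python
# Sketch: build S from a toy adjacency matrix (uniform columns for dangling
# nodes), form G = alpha*S + (1 - alpha)/N, and iterate the power method.
import numpy as np

A = np.array([[0, 0, 1, 0],    # A[i, j] = 1 if page j links to page i
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
N = A.shape[0]

cols = A.sum(axis=0)           # column 3 sums to 0: a dangling node
S = np.where(cols > 0, A / np.where(cols > 0, cols, 1.0), 1.0 / N)

alpha = 0.85                   # Google's damping factor
G = alpha * S + (1.0 - alpha) / N

p = np.full(N, 1.0 / N)        # uniform starting vector
for _ in range(50):            # spectral gap 1 - alpha => ~50 steps suffice
    p = G @ p
print(p)                       # stationary PageRank probabilities
```

Sorting p in decreasing order gives the PageRank index K described above; because G is column-stochastic, p remains a probability vector at every iteration.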
https://en.wikipedia.org/wiki/Google_matrix
The Golden Shield Project (Chinese: 金盾工程; pinyin: jīndùn gōngchéng), also named the National Public Security Work Informational Project,[a] is the Chinese nationwide network-security fundamental constructional project of the e-government of the People's Republic of China. This project includes a security management information system, a criminal information system, an exit and entry administration information system, a supervisor information system, and a traffic management information system, among others.[1][non-primary source needed] The Golden Shield Project is one of the 12 important "golden" projects. The other "golden" projects are Golden Customs (also known as Golden Gate) (for customs), Golden Tax (for taxation), Golden Macro, Golden Finance (for financial management), Golden Auditing, Golden Security, Golden Agriculture (for agricultural information), Golden Quality (for quality supervision), Golden Water (for water conservancy information), Golden Credit, and Golden Discipline.[2][b][3][non-primary source needed] The Golden Shield Project also manages the Bureau of Public Information and Network Security Supervision,[c] a bureau that is widely believed, though not officially claimed, to operate a subproject called the Great Firewall of China (GFW),[d][4] a censorship and surveillance project that blocks data from foreign countries that may be unlawful in the PRC. It is operated by the Ministry of Public Security (MPS) of the government of China. This subproject was initiated in 1998 and began operations in November 2003.[5] It has also seemingly been used to attack international web sites using man-on-the-side DDoS, for example against GitHub on 28 March 2015.[6] The political and ideological background of the Golden Shield Project is considered to be one of Deng Xiaoping's favorite sayings from the early 1980s: "If you open the window for fresh air, you have to expect some flies to blow in."[e] The saying is related to a period of economic reform in China that became known as the "socialist market economy". Superseding the political ideologies of the Cultural Revolution, the reform led China towards a market economy and opened up the market for foreign investors. Nonetheless, despite the economic freedom, the values and political ideas of the Chinese Communist Party have had to be protected by "swatting flies" of other unwanted ideologies.[7] The Internet arrived in China in 1994,[8] as the inevitable consequence of, and supporting tool for, the "socialist market economy". As availability of the Internet has gradually increased, it has become a common communication platform and tool for trading information. The Ministry of Public Security took initial steps to control Internet use in 1997, when it issued comprehensive regulations governing its use. The key sections, Articles 4–6, are the following: Individuals are prohibited from using the Internet to: harm national security; disclose state secrets; or injure the interests of the state or society. Users are prohibited from using the Internet to create, replicate, retrieve, or transmit information that incites resistance to the PRC Constitution, laws, or administrative regulations; promotes the overthrow of the government or socialist system; undermines national unification; distorts the truth, spreads rumors, or destroys social order; or provides sexually suggestive material or encourages gambling, violence, or murder.
Users are prohibited from engaging in activities that harm the security of computer information networks and from using networks or changing network resources without prior approval.[9] In 1998, the Chinese Communist Party feared that the China Democracy Party (CDP) would breed a powerful new network that the party elites might not be able to control.[10] The CDP was immediately banned, followed by arrests and imprisonment.[11] That same year, the Golden Shield project was started. The first part of the project lasted eight years and was completed in 2006. The second part began in 2006 and ended in 2008. On 6 December 2002, 300 people in charge of the Golden Shield project from 31 provinces and cities throughout China participated in a four-day inaugural "Comprehensive Exhibition on Chinese Information System".[12] At the exhibition, many Western high-tech products, including Internet security, video monitoring and human face recognition systems, were purchased. It is estimated that around 30,000–50,000 police are employed in this gigantic project.[13] A subsystem of the Golden Shield has been nicknamed "the Great Firewall" (防火长城) (a term that first appeared in a Wired magazine article in 1997)[14] in reference to its role as a network firewall and to the ancient Great Wall of China. This part of the project includes the ability to block content by preventing IP addresses from being routed through, and consists of standard firewalls and proxy servers at the six Internet gateways.[15] The system also selectively engages in DNS cache poisoning when particular sites are requested. The government does not appear to be systematically examining Internet content, as this appears to be technically impractical.[16] Because of its disconnection from the larger world of IP routing protocols, the network contained within the Great Firewall has been described as "the Chinese autonomous routing domain".[17] During the 2008 Summer Olympics, Chinese officials told Internet providers to prepare to unblock access from certain Internet cafés, access jacks in hotel rooms and conference centers where foreigners were expected to work or stay.[18] The Golden Shield Project is distinct from the Great Firewall (GFW), which has a different mission; the two differ both politically and technically. The Golden Shield Project is an integrated, multi-layered system, involving technical, administrative, public security, national security, publicity and many other departments. The project was planned to be finished within five years, separated into two phases. The first phase of the project focused on the construction of the first-level, second-level, and third-level information communication network, application database, shared platform, etc. The period was three years. According to the Xinhua News Agency, since September 2003 the Public Security department of China has recorded 96 percent of the population information of mainland China into the database. In other words, the information of 1.25 billion out of 1.3 billion people has been recorded in the information database of the Public Security department of China.[20] Within three years, the phase I project finished the first-level, second-level, and third-level backbone network and access network. This network covers public security organs at all levels. The grass-roots teams of public security organs have access to the backbone network with a coverage rate of 90 percent; that is to say, for every 100 police officers there are 40 computers connected to the network of the phase I project.
The Ministry of Public Security of the People's Republic of China said that the phase I project had significantly enhanced the combat effectiveness of public security.[citation needed] Participants in the phase I project included Tsinghua University from China, and some high-tech companies from the United States, the United Kingdom, Israel, etc. Cisco Systems of the United States provided massive numbers of hardware devices for the project, and was therefore criticized by some members of the United States Congress.[21] According to an internal Cisco document, Cisco viewed China's Great Firewall and its Internet censorship as an opportunity to expand its business with China.[22] According to China Central Television, phase I cost 6.4 billion yuan. On 6 December 2002, the "2002 China Large Institutions Informationization Exhibition" was held; 300 leaders from the Ministry of Public Security of the People's Republic of China and from the public security bureaus of 31 provinces or municipalities attended. Many Western high-tech products were on display, including network security, video surveillance and face recognition systems.[23] It was estimated that about 30,000 police officers were employed to maintain the system. A multi-level system was used to track netizens violating the provisions: netizens who want to use the internet in a cybercafé are required to show their Resident Identity Cards, and if a violating event happens, the owner of the cybercafé can send the person's information to the police through the internet. It is called a public security automation system, but it is actually an integrated, multi-layered internet blocking and monitoring system, involving technical, administrative, public security, national security, publicity and other departments. Its features are known as: readable, listenable, and thinkable.[citation needed] The phase II project started in 2006. The main task was to enhance terminal construction and the public security business application system, seeking to informatize public security work. The period was two years.[24] Building on the phase I project, the phase II project expanded the information application types of public security business and further informatized public security information. The key points of this project included application system construction, system integration, the expansion of the information centre, and information construction in central and western provinces. The system was planned to strengthen the integration, sharing and analysis of information, which would greatly enhance information support for public security work.[24] Mainland Chinese Internet censorship programs have censored a range of Web sites. Blocked web sites are indexed to a lesser degree, if at all, by some Chinese search engines. This sometimes has considerable impact on search results.[26] According to The New York Times, Google has set up computer systems inside China that try to access Web sites outside the country. If a site is inaccessible, then it is added to Google China's blacklist.[27] However, once unblocked, the Web sites will be reindexed. Referring to Google's first-hand experience of the Great Firewall, there is some hope in the international community that it will reveal some of its secrets. Simon Davies, founder of the London-based pressure group Privacy International, is now challenging Google to reveal the technology it once used at China's behest.
"That way, we can understand the nature of the beast and, perhaps, develop circumvention measures so there can be an opening up of communications." "That would be a dossier of extraordinary importance to human rights," Davies says. Google has yet to respond to his call.[28] Because the Great Firewall blocks destination IP addresses and domain names and inspects the data being sent or received, a basic censorship circumvention strategy is to use proxy nodes and encrypt the data. Most circumvention tools combine these two mechanisms.[29] Reporters Without Borderssuspects that countries such asAustralia,[32][33][34]Cuba,Vietnam,ZimbabweandBelarushave obtained surveillance technology from China although the censorships in these countries are not much in comparison to China.[35] Since at least 2015, the RussianRoskomnadzoragency collaborates with Chinese Great Firewall security officials in implementing its data retention and filtering infrastructure.[36][37][38]Since the2022 Russian invasion of Ukraine, in order to combat disinformation and enforce thewar censorship law, Russia authorities began improving and widening the capabilities of this system.[39]
https://en.wikipedia.org/wiki/Golden_Shield_Project
Intelligence amplification (IA) (also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence) is the use of information technology in augmenting human intelligence. The idea was first proposed in the 1950s and 1960s by cybernetics and early computer pioneers. IA is sometimes contrasted with AI (artificial intelligence), that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot. AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as IA needs technology merely as extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information-processing capabilities of the human mind (see extended mind and distributed cognition). The term intelligence amplification (IA) has enjoyed wide currency since William Ross Ashby wrote of "amplifying intelligence" in his Introduction to Cybernetics (1956). Related ideas were explicitly proposed as an alternative to Artificial Intelligence by Hao Wang from the early days of automatic theorem provers. ... "problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. ... It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution. It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. ... Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters—so that, when given difficult problems it persistently gave correct answers—we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence". If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately. "Man-Computer Symbiosis" is a key speculative paper published in 1960 by psychologist/computer scientist J.C.R. Licklider, which envisions that mutually interdependent, "living together", tightly coupled human brains and computing machines would prove to complement each other's strengths to a high degree: Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed.
The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today. In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet.) Licklider's research was similar in spirit to that of his DARPA contemporary and protégé Douglas Engelbart. Both men's work helped expand the utility of computers beyond mere computational machines by conceiving and demonstrating them as a primary interface for humans to process and manipulate information.[1] Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that fact in turn will control our ability to develop new, improved technologies. He thus set himself to the revolutionary task of developing computer-based technologies for manipulating information directly, and also to improve individual and group processes for knowledge-work. Engelbart's philosophy and research agenda is most clearly and directly expressed in the 1962 research report Augmenting Human Intellect: A Conceptual Framework.[2] The concept of network-augmented intelligence is attributed to Engelbart based on this pioneering work. Increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable. And by complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids. In the same research report he addresses the term "intelligence amplification" as coined by Ashby, and reflects on how his proposed research relates to it.[3] Engelbart subsequently implemented these concepts in his Augmented Human Intellect Research Center at SRI International, developing essentially an intelligence-amplifying system of tools (NLS) and co-evolving organizational methods, in full operational use by the mid-1960s within the lab. As intended,[4] his R&D team experienced increasing degrees of intelligence amplification, as both rigorous users and rapid-prototype developers of the system. For a sampling of research results, see their 1968 Mother of All Demos.
Howard Rheingold worked at Xerox PARC in the 1980s and was introduced to both Bob Taylor and Douglas Engelbart; Rheingold wrote about "mind amplifiers" in his 1985 book, Tools for Thought.[5] Andrews Samraj, in "Skin-Close Computing and Wearable Technology" (2021), describes human augmentation by two varieties of cyborgs, namely hard cyborgs and soft cyborgs: a humanoid walking machine is an example of a soft cyborg, and a pacemaker is an example of augmenting a human as a hard cyborg. Arnav Kapur, working at MIT, wrote about human–AI coalescence: how AI can be integrated into the human condition as part of the "human self", as a tertiary layer to the human brain to augment human cognition.[6] He demonstrates this using a peripheral nerve-computer interface, AlterEgo, which enables a human user to silently and internally converse with a personal AI.[7][8] In 2014 the technology of Artificial Swarm Intelligence was developed to amplify the intelligence of networked human groups using AI algorithms modeled on biological swarms. The technology enables small teams to make predictions, estimations and medical diagnoses at accuracy levels that significantly exceed natural human intelligence.[9][10][11][12] Shan Carter and Michael Nielsen introduced the concept of artificial intelligence augmentation (AIA): the use of AI systems to help develop new methods for intelligence augmentation. They contrast cognitive outsourcing (AI as an oracle, able to solve some large class of problems with better-than-human performance) with cognitive transformation (changing the operations and representations we use to think).[13] A calculator is an example of the former; a spreadsheet of the latter. Ron Fulbright describes human cognitive augmentation in human/cog ensembles involving humans working in collaborative partnership with cognitive systems (called cogs). By working together, human/cog ensembles achieve results superior to those obtained by the humans working alone or the cognitive systems working alone. The human component of the ensemble is therefore cognitively augmented. The degree of augmentation depends on the proportion of the total amount of cognition done by the human and that done by the cog. Six levels of cognitive augmentation have been identified.[14][15] Augmented intelligence has been a repeating theme in science fiction. A positive view of brain implants used to communicate with a computer as a form of augmented intelligence is seen in Algis Budrys' 1976 novel Michaelmas. Fear that the technology will be misused by the government and military is an early theme: in the 1981 BBC serial The Nightmare Man, the pilot of a high-tech mini submarine is linked to his craft via a brain implant but becomes a savage killer after ripping out the implant. Perhaps the most well-known writer exploring themes of intelligence augmentation is William Gibson, in work such as his 1981 story "Johnny Mnemonic", in which the title character has computer-augmented memory, and his 1984 novel Neuromancer, in which computer hackers interface through brain-computer interfaces to computer systems. Vernor Vinge looked at intelligence augmentation as a possible route to the technological singularity, a theme which also appears in his fiction. Flowers for Algernon is an early example of augmented intelligence in science fiction literature.[16] First published as a short story in 1959, the plot concerns an intellectually disabled man who undergoes an experiment to increase his intelligence to genius levels.
His rise and fall is detailed in his journal entries, which become more sophisticated as his intelligence increases.
https://en.wikipedia.org/wiki/Intelligence_amplification
Filing under seal is a procedure allowing sensitive or confidential information to be filed with a court without becoming a matter of public record.[1] The court generally must give permission for the material to remain under seal.[2]

Filing confidential documents "under seal", separated from the public records, allows litigants to navigate the judicial system without compromising their confidentiality, at least until there is an affirmative decision by consent of the information's owner or by order of the court to publicize it.[2]

When the document is filed under seal, it should have a clear indication for the court clerk to file it separately – most often by stamping the words "Filed Under Seal" on the bottom of each page. The person making the filing should also provide instructions to the court clerk that the document needs to be filed "under seal". Courts often have specific requirements for these filings in their Local Rules.[3]

Normally records should not be filed under seal without court permission.[3] However, Federal Rule of Civil Procedure 5.2 allows a person making a redacted filing to also file an unredacted copy under seal.[4]
https://en.wikipedia.org/wiki/Under_seal
The factored language model (FLM) is an extension of a conventional language model introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of $k$ factors: $w_i = \{f_i^1, \ldots, f_i^k\}$. An FLM provides the probabilistic model $P(f \mid f_1, \ldots, f_N)$, where the prediction of a factor $f$ is based on $N$ parents $\{f_1, \ldots, f_N\}$. For example, if $w$ represents a word token and $t$ represents a part-of-speech tag for English, the expression $P(w_i \mid w_{i-2}, w_{i-1}, t_{i-1})$ gives a model for predicting the current word token based on a traditional N-gram model as well as the part-of-speech tag of the previous word.

A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as the relationship between word tokens and part of speech in English, or morphological information (stems, roots, etc.) in Arabic.

Like N-gram models, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM.
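As a concrete, if toy, illustration of the conditioning above, the following Python sketch estimates $P(w_i \mid w_{i-2}, w_{i-1}, t_{i-1})$ by counting, and falls back to a shorter parent set when the full context is unseen. The corpus and the single-step back-off are illustrative assumptions, not Bilmes and Kirchhoff's actual generalized back-off.

from collections import defaultdict

# Hypothetical toy corpus of (token, part-of-speech tag) pairs.
corpus = [("the", "DT"), ("dog", "NN"), ("barks", "VB"),
          ("the", "DT"), ("cat", "NN"), ("sleeps", "VB")]

full = defaultdict(lambda: defaultdict(int))    # parents (w-2, w-1, t-1)
short = defaultdict(lambda: defaultdict(int))   # backed-off parents (w-1, t-1)

for i in range(2, len(corpus)):
    w2, (w1, t1), w = corpus[i - 2][0], corpus[i - 1], corpus[i][0]
    full[(w2, w1, t1)][w] += 1
    short[(w1, t1)][w] += 1

def prob(w, w2, w1, t1):
    # P(w | w-2, w-1, t-1); if the full parent context is unseen,
    # back off to the shorter parent set (w-1, t-1).
    for table, key in ((full, (w2, w1, t1)), (short, (w1, t1))):
        total = sum(table[key].values())
        if total:
            return table[key][w] / total
    return 0.0  # unseen even after backing off

print(prob("barks", "the", "dog", "NN"))  # 1.0 on this toy corpus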
https://en.wikipedia.org/wiki/Factored_language_model
XSD (XML Schema Definition), a recommendation of the World Wide Web Consortium (W3C), specifies how to formally describe the elements in an Extensible Markup Language (XML) document. It can be used by programmers to verify each piece of item content in a document, to assure it adheres to the description of the element it is placed in.[1]

Like all XML schema languages, XSD can be used to express a set of rules to which an XML document must conform to be considered "valid" according to that schema. However, unlike most other schema languages, XSD was also designed with the intent that determination of a document's validity would produce a collection of information adhering to specific data types. Such a post-validation infoset can be useful in the development of XML document processing software.

XML Schema, published as a W3C recommendation in May 2001,[2] is one of several XML schema languages. It was the first separate schema language for XML to achieve Recommendation status by the W3C. Because of confusion between XML Schema as a specific W3C specification, and the use of the same term to describe schema languages in general, some parts of the user community referred to this language as WXS, an initialism for W3C XML Schema, while others referred to it as XSD, an initialism for XML Schema Definition.[3][4] In Version 1.1 the W3C has chosen to adopt XSD as the preferred name, and that is the name used in this article.

In its appendix of references, the XSD specification acknowledges the influence of DTDs and other early XML schema efforts such as DDML, SOX, XML-Data, and XDR. It has adopted features from each of these proposals but is also a compromise among them. Of those languages, XDR and SOX continued to be used and supported for a while after XML Schema was published. A number of Microsoft products supported XDR until the release of MSXML 6.0 (which dropped XDR in favor of XML Schema) in December 2006.[5] Commerce One, Inc. supported its SOX schema language until declaring bankruptcy in late 2004.

The most obvious features offered in XSD that are not available in XML's native Document Type Definitions (DTDs) are namespace awareness and datatypes, that is, the ability to define element and attribute content as containing values such as integers and dates rather than arbitrary text.

The XSD 1.0 specification was originally published in 2001, with a second edition following in 2004 to correct large numbers of errors. XSD 1.1 became a W3C Recommendation in April 2012.

Technically, a schema is an abstract collection of metadata, consisting of a set of schema components: chiefly element and attribute declarations and complex and simple type definitions. These components are usually created by processing a collection of schema documents, which contain the source language definitions of these components. In popular usage, however, a schema document is often referred to as a schema.

Schema documents are organized by namespace: all the named schema components belong to a target namespace, and the target namespace is a property of the schema document as a whole. A schema document may include other schema documents for the same namespace, and may import schema documents for a different namespace.
When an instance document is validated against a schema (a process known as assessment), the schema to be used for validation can either be supplied as a parameter to the validation engine, or it can be referenced directly from the instance document using two special attributes, xsi:schemaLocation and xsi:noNamespaceSchemaLocation. (The latter mechanism requires the client invoking validation to trust the document sufficiently to know that it is being validated against the correct schema. "xsi" is the conventional prefix for the namespace "http://www.w3.org/2001/XMLSchema-instance".) XML Schema documents usually have the filename extension ".xsd". A unique Internet Media Type is not yet registered for XSDs, so "application/xml" or "text/xml" should be used, as per RFC 3023.

The main components of a schema are element declarations and attribute declarations, together with simple and complex type definitions. Other more specialized components include annotations, assertions, notations, and the schema component, which contains information about the schema as a whole.

Simple types (also called data types) constrain the textual values that may appear in an element or attribute. This is one of the more significant ways in which XML Schema differs from DTDs. For example, an attribute might be constrained to hold only a valid date or a decimal number.

XSD provides a set of 19 primitive data types (anyURI, base64Binary, boolean, date, dateTime, decimal, double, duration, float, hexBinary, gDay, gMonth, gMonthDay, gYear, gYearMonth, NOTATION, QName, string, and time). It allows new data types to be constructed from these primitives by three mechanisms: restriction (reducing the set of permitted values), list (allowing a sequence of values), and union (allowing a choice of values from several types). Twenty-five derived types are defined within the specification itself, and further derived types can be defined by users in their own schemas.

The mechanisms available for restricting data types include the ability to specify minimum and maximum values, regular expressions, constraints on the length of strings, and constraints on the number of digits in decimal values. XSD 1.1 again adds assertions, the ability to specify an arbitrary constraint by means of an XPath 2.0 expression.

Complex types describe the permitted content of an element, including its element and text children and its attributes. A complex type definition consists of a set of attribute uses and a content model. Varieties of content model include element-only content, mixed content (text interspersed with elements), simple content, and empty content. A complex type can be derived from another complex type by restriction (disallowing some elements, attributes, or values that the base type permits) or by extension (allowing additional attributes and elements to appear). In XSD 1.1, a complex type may be constrained by assertions, XPath 2.0 expressions evaluated against the content that must evaluate to true.

After XML Schema-based validation, it is possible to express an XML document's structure and content in terms of the data model that was implicit during validation. The XML Schema data model includes the vocabulary (element and attribute names), the content model (relationships and structure), and the data types. This collection of information is called the Post-Schema-Validation Infoset (PSVI). The PSVI gives a valid XML document its "type" and facilitates treating the document as an object, using object-oriented programming (OOP) paradigms.

The primary reason for defining an XML schema is to formally describe an XML document; however the resulting schema has a number of other uses that go beyond simple validation. The schema can be used to generate code, referred to as XML Data Binding. This code allows contents of XML documents to be treated as objects within the programming environment.
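To illustrate datatype-aware validation concretely, here is a minimal sketch using the third-party Python library lxml (an assumption of this example; any conforming XSD processor would do). The schema and instance documents are invented for illustration.

from lxml import etree

# A tiny invented schema: <person> contains a string name and an integer age.
schema = etree.XMLSchema(etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="age" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

good = etree.fromstring(b"<person><name>Ada</name><age>36</age></person>")
bad = etree.fromstring(b"<person><name>Ada</name><age>old</age></person>")

print(schema.validate(good))  # True
print(schema.validate(bad))   # False: "old" is not a valid xs:integer

A DTD could check only that <age> contains text; the datatype constraint that rejects the second document is exactly the kind of post-validation information XSD was designed to provide.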
The schema can be used to generate human-readable documentation of an XML file structure; this is especially useful where the authors have made use of the annotation elements. No formal standard exists for documentation generation, but a number of tools are available, such as the Xs3p stylesheet, that will produce high-quality readable HTML and printed material.

Although XML Schema is successful in that it has been widely adopted and largely achieves what it set out to do, it has been the subject of a great deal of severe criticism, perhaps more so than any other W3C Recommendation. Good summaries of the criticisms are provided by James Clark,[6] Anders Møller and Michael Schwartzbach,[7] Rick Jelliffe[8] and David Webber.[9] The criticisms fall into three broad groups: general problems, practical limitations of expressibility, and technical problems.

XSD 1.1 became a W3C Recommendation in April 2012, which means it is an approved W3C specification. Significant new features in XSD 1.1 include assertions expressed with XPath 2.0, conditional type assignment, and open content models. Until the Proposed Recommendation draft, XSD 1.1 also proposed the addition of a new numeric data type, precisionDecimal. This proved controversial, and was therefore dropped from the specification at a late stage of development.
https://en.wikipedia.org/wiki/W3C_XML_Schema
Clustering can refer to several concepts in computing, in economics, and in graph theory.
https://en.wikipedia.org/wiki/Clustering_(disambiguation)
In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, "Reducibility Among Combinatorial Problems",[1] Richard Karp used Stephen Cook's 1971 theorem that the boolean satisfiability problem is NP-complete[2] (also called the Cook–Levin theorem) to show that there is a polynomial-time many-one reduction from the boolean satisfiability problem to each of 21 combinatorial and graph-theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem.

Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack.

As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can be solved within any fixed percentage of the optimal result. However, David Zuckerman showed in 1996 that every one of these 21 problems has a constrained optimization version that is impossible to approximate within any constant factor unless P = NP, by showing that Karp's approach to reduction generalizes to a specific type of approximability reduction.[3] These constrained versions may be different from the standard optimization versions of the problems, which may have approximation algorithms (as in the case of maximum cut).
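The Exact cover to Knapsack reduction mentioned above can be made concrete. The sketch below is an illustrative reconstruction of the standard textbook encoding, not Karp's original presentation: each set becomes a number in base m+1 (m = number of sets), one digit per universe element, so digit-wise sums can never carry, and an exact cover exists if and only if some subset of the numbers hits the all-ones target (the decision form of Knapsack, i.e. subset sum). The brute-force solver is exponential by design.

def exact_cover_to_knapsack(universe, sets):
    # One digit per universe element, base m+1 so digit sums cannot carry.
    base = len(sets) + 1
    pos = {u: i for i, u in enumerate(universe)}
    numbers = [sum(base ** pos[u] for u in s) for s in sets]
    target = sum(base ** i for i in range(len(universe)))
    return numbers, target

def subset_sum(numbers, target):
    # Brute-force decision procedure (exponential; illustration only).
    reachable = {0}
    for n in numbers:
        reachable |= {r + n for r in reachable}
    return target in reachable

universe = [1, 2, 3, 4]
sets = [{1, 2}, {3, 4}, {1, 3}, {2, 4}, {4}]
numbers, target = exact_cover_to_knapsack(universe, sets)
print(subset_sum(numbers, target))  # True: {1, 2} and {3, 4} cover exactly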
https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems
In linguistics, the brevity law (also called Zipf's law of abbreviation) is a linguistic law that qualitatively states that the more frequently a word is used, the shorter that word tends to be, and vice versa; the less frequently a word is used, the longer it tends to be.[1] This is a statistical regularity that can be found in natural languages and other natural systems and that claims to be a general rule.

The brevity law was originally formulated by the linguist George Kingsley Zipf in 1945 as a negative correlation between the frequency of a word and its size. He analyzed a written corpus in American English and showed that the average lengths in terms of the average number of phonemes fell as the frequency of occurrence increased. Similarly, in a Latin corpus, he found a negative correlation between the number of syllables in a word and the frequency of its appearance. This observation says that the most frequent words in a language are the shortest, e.g. the most common words in English are: the, be (in different forms), to, of, and, a; all containing 1 to 3 phonemes. He claimed that this law of abbreviation is a universal structural property of language, hypothesizing that it arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently.[2][3]

Since then, the law has been empirically verified for almost a thousand languages of 80 different linguistic families for the relationship between the number of letters in a written word and its frequency in text.[4] The brevity law appears universal and has also been observed acoustically when word size is measured in terms of word duration.[5] Evidence from 2016 suggests it holds in the acoustic communication of other primates.[6]

The origin of this statistical pattern seems to be related to optimization principles and derived from a mediation between two major constraints: the pressure to reduce the cost of production and the pressure to maximize transmission success. This idea is closely related to the principle of least effort, which postulates that efficiency selects a path of least resistance or "effort". This principle of reducing the cost of production might also be related to principles of optimal data compression in information theory.[7]
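The law is straightforward to test on any word-frequency list. A minimal sketch, assuming Python 3.10+ for statistics.correlation; the toy text is invented, and real corpora are needed for a meaningful estimate.

from collections import Counter
from statistics import correlation  # Python 3.10+

text = ("the cat sat on the mat and the dog lay on the rug because "
        "the afternoon sunlight was extraordinarily comfortable and "
        "the cat was happy").split()

freq = Counter(text)
frequencies = [freq[w] for w in freq]
lengths = [len(w) for w in freq]

# The brevity law predicts a negative value here: frequent words are
# short, rare words long. Toy texts are noisy; large corpora stabilize it.
print(correlation(frequencies, lengths))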
https://en.wikipedia.org/wiki/Brevity_law
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.

The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Laplacian matrix, of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum.[1] Aspects of graph spectra have been used in analysing the synchronizability of networks.

The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph).[2] Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group.[1]

This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values[1] (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters.[1][3]

Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is $t(t-1)(t-2)(t^7 - 12t^6 + 67t^5 - 230t^4 + 529t^3 - 814t^2 + 775t - 352)$.[1] In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove the four color theorem. However, there are still many open problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic.
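The spectral claims above are easy to check numerically. A minimal sketch, assuming the third-party networkx and numpy packages:

import networkx as nx
import numpy as np

# The Petersen graph's adjacency spectrum should come out as
# (-2, -2, -2, -2, 1, 1, 1, 1, 1, 3): three distinct values, the
# minimum possible for a connected graph of diameter 2.
A = nx.to_numpy_array(nx.petersen_graph())
print(np.round(np.linalg.eigvalsh(A), 6))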
https://en.wikipedia.org/wiki/Algebraic_graph_theory
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas.[1] Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming.[2] Remarkably, in the area of the foundations of quantum theory, the lemma also underlies the complete set of Bell inequalities in the form of necessary and sufficient conditions for the existence of a local hidden-variable theory, given data from any specific set of measurements.[3]

Generalizations of Farkas' lemma concern the solvability theorem for convex inequalities,[4] i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": a theorem stating that exactly one of two systems has a solution.[5]

There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).[6]

Farkas' lemma: Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following two assertions is true:

1. There exists an $x \in \mathbb{R}^n$ such that $Ax = b$ and $x \geq 0$.
2. There exists a $y \in \mathbb{R}^m$ such that $A^\top y \geq 0$ and $b^\top y < 0$.

Here, the notation $x \geq 0$ means that all components of the vector $x$ are nonnegative.

Let m, n = 2, $A = \begin{bmatrix} 6 & 4 \\ 3 & 0 \end{bmatrix}$, and $b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$. The lemma says that exactly one of the following two statements must be true (depending on $b_1$ and $b_2$):

1. There exist $x_1 \geq 0$, $x_2 \geq 0$ such that $6x_1 + 4x_2 = b_1$ and $3x_1 = b_2$.
2. There exist $y_1$, $y_2$ such that $6y_1 + 3y_2 \geq 0$, $4y_1 \geq 0$, and $b_1 y_1 + b_2 y_2 < 0$.

Here is a proof of the lemma in this special case: Consider the closed convex cone $C(A)$ spanned by the columns of $A$; that is, $C(A) = \{Ax \mid x \geq 0\}$. Observe that $C(A)$ is the set of the vectors $b$ for which the first assertion in the statement of Farkas' lemma holds. On the other hand, the vector $y$ in the second assertion is orthogonal to a hyperplane that separates $b$ and $C(A)$. The lemma follows from the observation that $b$ belongs to $C(A)$ if and only if there is no hyperplane that separates it from $C(A)$.

More precisely, let $a_1, \dots, a_n \in \mathbb{R}^m$ denote the columns of $A$. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true:

1. There exist nonnegative coefficients $x_1, \dots, x_n$ such that $b = x_1 a_1 + \dots + x_n a_n$.
2. There exists a vector $y$ such that $a_i^\top y \geq 0$ for $i = 1, \dots, n$ and $b^\top y < 0$.

The sums $x_1 a_1 + \dots + x_n a_n$ with nonnegative coefficients $x_1, \dots, x_n$ form the cone spanned by the columns of $A$. Therefore, the first statement tells that $b$ belongs to $C(A)$. The second statement tells that there exists a vector $y$ such that the angle of $y$ with the vectors $a_i$ is at most 90°, while the angle of $y$ with the vector $b$ is more than 90°. The hyperplane normal to this vector has the vectors $a_i$ on one side and the vector $b$ on the other side. Hence, this hyperplane separates the cone spanned by $a_1, \dots, a_n$ from the vector $b$.

For example, let n, m = 2, $a_1 = (1, 0)^\top$, and $a_2 = (1, 1)^\top$. The convex cone spanned by $a_1$ and $a_2$ can be seen as a wedge-shaped slice of the first quadrant in the xy-plane. Now, suppose b = (0, 1).
Certainly, b is not in the convex cone $a_1 x_1 + a_2 x_2$. Hence, there must be a separating hyperplane. Let $y = (1, -1)^\top$. We can see that $a_1 \cdot y = 1$, $a_2 \cdot y = 0$, and $b \cdot y = -1$. Hence, the hyperplane with normal $y$ indeed separates the convex cone $a_1 x_1 + a_2 x_2$ from $b$.

A particularly suggestive and easy-to-remember version is the following: if a set of linear inequalities has no solution, then a contradiction can be produced from it by linear combination with nonnegative coefficients. In formulas: if $Ax \leq b$ is unsolvable, then $y^\top A = 0$, $y^\top b = -1$, $y \geq 0$ has a solution.[7] Note that $y^\top A$ is a combination of the left-hand sides, and $y^\top b$ a combination of the right-hand sides of the inequalities. Since the positive combination produces a zero vector on the left and a −1 on the right, the contradiction is apparent.

Thus, Farkas' lemma can be viewed as a theorem of logical completeness: $Ax \leq b$ is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.[8]: 92–94

Farkas' lemma implies that the decision problem "Given a system of linear equations, does it have a non-negative solution?" is in the intersection of NP and co-NP. This is because, according to the lemma, both a "yes" answer and a "no" answer have a proof that can be verified in polynomial time. The problems in the intersection $NP \cap coNP$ are also called well-characterized problems. It is a long-standing open question whether $NP \cap coNP$ is equal to P. In particular, the question of whether a system of linear equations has a non-negative solution was not known to be in P, until it was proved using the ellipsoid method.[9]: 25

The Farkas lemma has several variants with different sign constraints; the first one is the original version.[8]: 92 One variant, mentioned only for completeness, contains only equalities and so is not actually a "Farkas lemma"; its proof is an exercise in linear algebra. There are also Farkas-like lemmas for integer programs.[9]: 12–14 For systems of equations, the lemma is simple; for systems of inequalities, it is much more complicated, being based on two rules of inference.

Generalized Farkas' lemma: Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, let $S$ be a closed convex cone in $\mathbb{R}^n$, and let the dual cone of $S$ be $S^* = \{z \in \mathbb{R}^n \mid z^\top x \geq 0, \forall x \in S\}$. If the convex cone $C(A) = \{Ax \mid x \in S\}$ is closed, then exactly one of the following two statements is true:

1. There exists an $x \in S$ such that $Ax = b$.
2. There exists a $y \in \mathbb{R}^m$ such that $A^\top y \in S^*$ and $b^\top y < 0$.

Generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closed convex cone, or there exists a hyperplane separating the vector from the cone; there are no other possibilities.
The closedness condition is necessary; see Separation theorem I in Hyperplane separation theorem. For the original Farkas' lemma, $S$ is the nonnegative orthant $\mathbb{R}^n_+$, hence the closedness condition holds automatically. Indeed, for a polyhedral convex cone, i.e., when there exists a $B \in \mathbb{R}^{n \times k}$ such that $S = \{Bx \mid x \in \mathbb{R}^k_+\}$, the closedness condition holds automatically. In convex optimization, various kinds of constraint qualification, e.g. Slater's condition, are responsible for closedness of the underlying convex cone $C(A)$.

By setting $S = \mathbb{R}^n$ and $S^* = \{0\}$ in the generalized Farkas' lemma, we obtain the following corollary about the solvability of a finite system of linear equalities:

Corollary: Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following two statements is true:

1. There exists an $x \in \mathbb{R}^n$ such that $Ax = b$.
2. There exists a $y \in \mathbb{R}^m$ such that $A^\top y = 0$ and $b^\top y \neq 0$.

Farkas' lemma can be varied to many further theorems of alternative by simple modifications,[5] such as Gordan's theorem: either $Ax < 0$ has a solution $x$, or $A^\top y = 0$ has a nonzero solution $y$ with $y \geq 0$.

Common applications of Farkas' lemma include proving the strong duality theorem associated with linear programming and the Karush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for, and construct the dual of, a semidefinite program. It is sufficient to prove the existence of the Karush–Kuhn–Tucker conditions using the Fredholm alternative, but for the condition to be necessary, one must apply von Neumann's minimax theorem to show the equations derived by Cauchy are not violated. This is used for Dill's Reluplex method for verifying deep neural networks.
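Returning to the worked 2×2 example above, which alternative holds for a given b can be checked numerically with a linear-programming solver. A minimal sketch, assuming SciPy's linprog: it tests feasibility of {Ax = b, x ≥ 0}, and by the lemma, infeasibility certifies that a separating y for the second alternative exists.

import numpy as np
from scipy.optimize import linprog

def farkas_alternative(A, b):
    # linprog minimizes a zero objective subject to A_eq x = b with the
    # default bounds x >= 0, so success means the first alternative holds;
    # by the lemma, failure certifies a separating y for the second.
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b)
    return 1 if res.success else 2

A = np.array([[6.0, 4.0], [3.0, 0.0]])
print(farkas_alternative(A, np.array([10.0, 3.0])))  # 1: x = (1, 1) works
print(farkas_alternative(A, np.array([0.0, 1.0])))   # 2: no x >= 0 exists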
https://en.wikipedia.org/wiki/Farkas%27s_lemma
Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two", as in "two meanings".)

The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity.

Lexical ambiguity is contrasted with semantic ambiguity.[citation needed] The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.

Ambiguity in human language is argued to reflect principles of efficient communication.[2][3] Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.

The lexical ambiguity of a word or phrase applies to its having more than one meaning in the language to which the word belongs.[4] "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).

The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation.

The use of multi-defined words requires the author or speaker to clarify their context, and sometimes to elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from his or her candidate of choice. Ambiguity is a powerful tool of political science.

More problematic are words whose multiple meanings express closely related concepts.
"Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good personversusthe lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to applyprefixesandsuffixescan also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock"). Semantic ambiguityoccurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either Syntactic ambiguityarises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity.[5]For the notion of, and theoretic results about, syntactic ambiguity in artificial,formal languages(such as computerprogramming languages), seeAmbiguous grammar. Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used asvocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken languagecan contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called amondegreen. Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of aglittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. 
In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true: an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases.

In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology", Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting".

Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity.

In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know".
Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). In narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby.

Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, there are still some inherent ambiguities due to lexical, syntactic, and semantic reasons that persist in mathematical notation.

The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires one to scale the argument or the resulting value; sometimes, the same name of the function is used, causing confusion.

Ambiguous expressions often appear in physical and mathematical texts. It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, $f = f(x)$. Then, if one sees $f = f(y+1)$, there is no way to distinguish whether it means $f = f(x)$ multiplied by $(y+1)$, or the function $f$ evaluated at argument equal to $(y+1)$. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning.

Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (C++ and Fortran) require the character * as a symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, function and variable; in particular, the expression $f = f(x)$ is qualified as an error.

The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorial conventions assumed that multiplication is performed first, for example, $a/bc$ is interpreted as $a/(bc)$; in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity.

In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expression $sin$ does not denote the sine function, but the product of the three variables $s$, $i$, $n$, although in the informal notation of a slide presentation it may stand for $\sin$.

Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation.
For example, in the notation $T_{mnk}$, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables $m$, $n$ and $k$, or an indication of a trivalent tensor.

An expression such as $\sin^2 \alpha/2$ can be understood to mean either $(\sin(\alpha/2))^2$ or $(\sin \alpha)^2/2$. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing $\sin^2(\alpha/2)$ or $\frac{1}{2}\sin^2 \alpha$.

The expression $\sin^{-1} \alpha$ means $\arcsin(\alpha)$ in several texts, though it might be thought to mean $(\sin \alpha)^{-1}$, since $\sin^n \alpha$ commonly means $(\sin \alpha)^n$. Conversely, $\sin^2 \alpha$ might seem to mean $\sin(\sin \alpha)$, as this exponentiation notation usually denotes function iteration: in general, $f^2(x)$ means $f(f(x))$. However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application.

The expression $a/2b$ can be interpreted as meaning $(a/2)b$; however, it is more commonly understood to mean $a/(2b)$.

It is common to define the coherent states in quantum optics with $|\alpha\rangle$ and states with a fixed number of photons with $|n\rangle$. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and an $n$-photon state if the Latin characters dominate. The ambiguity becomes even worse if $|x\rangle$ is used for the states with a certain value of the coordinate, and $|p\rangle$ means the state with a certain value of the momentum, which may be the case in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if some normalized dimensionless variables are used. The expression $|1\rangle$ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context.

Some physical quantities do not yet have established notations; their value (and sometimes even dimension, as in the case of the Einstein coefficients) depends on the system of notations.

Many terms are ambiguous. Each use of an ambiguous term should be preceded by the definition, suitable for the specific case. Just as Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."[7]

A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing. The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term.
Also, confusion may be related to the use of atomic percent as a measure of concentration of a dopant, or resolution of an imaging system as a measure of the size of the smallest detail that still can be resolved at the background of statistical noise. See also Accuracy and precision.

The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.[8]

In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination. For example, $X = Y$ leaves open what the value of $X$ is. Overdetermination, except in degenerate cases like $X = 1, X = 1, X = 1$, is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system, such as $X = 2, X = 3$, which has no solution. Logical ambiguity and self-contradiction are analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.[9]

Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages that have been created for this, focusing chiefly on syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide greater technical precision than big natural languages, although historically, such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn.

In structural biology, ambiguity has been recognized as a problem for studying protein conformations.[10] The analysis of a protein's three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different, yet equally valid, domain assignments.

Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans.[dubious–discuss] The apocryphal Book of Judith is noted for the "ingenious ambiguity"[11] expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God.[12][13]

The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts that he found ambiguous, or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox.[14]

In music, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous.
To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value."

In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. The opposite of such ambiguous images are impossible objects.[15] Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance?

In social psychology, ambiguity is a factor used in determining peoples' responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternately, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.[16]

In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024^2 and 1024^3), contrary to the metric system in which these units unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense. Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard; this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). Note that 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0×10^6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10^9 and 10^12 bytes.
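A small Python sketch quantifying how far the binary and decimal readings of the same prefix letters drift apart; the figures follow directly from the definitions of the two interpretations.

SI = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
BINARY = {"k": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

for p in "kMGT":
    drift = BINARY[p] / SI[p] - 1
    print(f"{p}: decimal {SI[p]:>14,}  binary {BINARY[p]:>14,}  "
          f"divergence {drift:.1%}")
# The gap grows from about 2.4% at kilo to about 10% at tera, which is
# why the unambiguous Ki/Mi/Gi/Ti prefixes were introduced.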
https://en.wikipedia.org/wiki/Ambiguity
This article lists the world's busiest container ports (ports with container terminals that specialize in handling goods transported in intermodal shipping containers), by the total number of twenty-foot equivalent units (TEUs) transported through the port. The table lists volume in thousands of TEU per year. The vast majority of containers moved by large, ocean-going container ships are 20-foot (1 TEU) and 40-foot (2 TEU) ISO-standard shipping containers, with 40-foot units outnumbering 20-foot units to such an extent that the actual number of containers moved is between 55% and 60% of the number of TEUs counted.[1] For example, a port that moves 30 twenty-foot and 70 forty-foot boxes has handled 100 containers but counts 30 + 140 = 170 TEU, so containers are about 59% of the TEU figure.
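The arithmetic behind the 55% to 60% range follows from the mix of box sizes; a small illustrative Python sketch (the forty-foot shares below are assumed values, not reported statistics):

def containers_per_teu(share_40ft):
    # With x twenty-foot and y forty-foot boxes, containers/TEU equals
    # (x + y) / (x + 2y) = 1 / (1 + share_40ft), where share_40ft = y/(x+y).
    return 1 / (1 + share_40ft)

for share in (0.65, 0.70, 0.75, 0.80):
    print(f"{share:.0%} forty-foot boxes -> containers are "
          f"{containers_per_teu(share):.1%} of the TEU count")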
https://en.wikipedia.org/wiki/List_of_world%27s_busiest_container_ports
Obelism is the practice of annotating manuscripts with marks set in the margins. Modern obelisms are used by editors when proofreading a manuscript or typescript. Examples are "stet" (which is Latin for "Let it stand", used in this context to mean "disregard the previous mark") and "dele" (for "Delete").

The obelos symbol (see obelus) gets its name from the spit, or sharp end of a lance, in ancient Greek. An obelos was placed by editors in the margins of manuscripts, especially in Homer, to indicate lines that may not have been written by Homer. The system was developed by Aristarchus and notably used later by Origen in his Hexapla. Origen marked spurious words with an opening obelus and a closing metobelos ("end of obelus").[1]

There were many other such shorthand symbols, to indicate corrections, emendations, deletions, additions, and so on. Most used are the editorial coronis, the paragraphos, the forked paragraphos, the reversed forked paragraphos, the hypodiastole, the downwards ancora, the upwards ancora, and the dotted right-pointing angle, which is also known as the diple periestigmene. Loosely, all these symbols, and the act of annotation by means of them, are obelism.

These nine ancient Greek textual annotation symbols are also included in the supplemental punctuation list of the ISO/IEC 10646 standard for character sets, and Unicode encodes them. Some of these were also used in Ancient Greek punctuation as word dividers.[2] The two-dot punctuation is used as a word separator in Old Turkic script.
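The code points can be listed programmatically. The following Python sketch assumes the nine marks occupy U+2E0E through U+2E16 in Unicode's Supplemental Punctuation block (a belief of this example, easy to check by running it, since unicodedata prints the authoritative character names):

import unicodedata

# Believed range of the nine ancient editorial marks in the
# Supplemental Punctuation block (U+2E00..U+2E7F).
for cp in range(0x2E0E, 0x2E17):
    print(f"U+{cp:04X}  {chr(cp)}  {unicodedata.name(chr(cp))}")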
https://en.wikipedia.org/wiki/Obelism
In mathematics, a topological game is an infinite game of perfect information played between two players on a topological space. Players choose objects with topological properties such as points, open sets, closed sets and open coverings. Time is generally discrete, but the plays may have transfinite lengths, and extensions to continuum time have been put forth. The conditions for a player to win can involve notions like topological closure and convergence.

It turns out that some fundamental topological constructions have a natural counterpart in topological games; examples of these are the Baire property, Baire spaces, completeness and convergence properties, separation properties, covering and base properties, continuous images, Suslin sets, and singular spaces. At the same time, some topological properties that arise naturally in topological games can be generalized beyond a game-theoretic context: by virtue of this duality, topological games have been widely used to describe new properties of topological spaces, and to put known properties under a different light. There are also close links with selection principles.

The term topological game was first introduced by Claude Berge,[1][2][3] who defined the basic ideas and formalism in analogy with topological groups. A different meaning for topological game, the concept of "topological properties defined by games", was introduced in the paper of Rastislav Telgársky,[4] and later "spaces defined by topological games";[5] this approach is based on analogies with matrix games, differential games and statistical games, and defines and studies topological games within topology. After more than 35 years, the term "topological game" became widespread, and appeared in several hundreds of publications. The survey paper of Telgársky[6] emphasizes the origin of topological games from the Banach–Mazur game. There are two other meanings of topological games, but these are used less frequently.

Many frameworks can be defined for infinite positional games of perfect information. The typical setup is a game between two players, I and II, who alternately pick subsets of a topological space X. In the nth round, player I plays a subset $I_n$ of X, and player II responds with a subset $J_n$. There is a round for every natural number n, and after all rounds are played, player I wins if the sequence satisfies some property, and otherwise player II wins. The game is defined by the target property and the allowed moves at each step. For example, in the Banach–Mazur game BM(X), the allowed moves are nonempty open subsets of the previous move, and player I wins if $\bigcap_n I_n \neq \emptyset$.

This typical setup can be modified in various ways. For example, instead of being a subset of X, each move might consist of a pair $(I, p)$ where $I \subseteq X$ and $p \in X$. Alternatively, the sequence of moves might have length some ordinal number other than ω.

The first topological game studied was the Banach–Mazur game, which is a motivating example of the connections between game-theoretic notions and topological properties. Let Y be a topological space, and let X be a subset of Y, called the winning set. Player I begins the game by picking a nonempty open subset $I_0 \subseteq Y$, and player II responds with a nonempty open subset $J_0 \subseteq I_0$. Play continues in this fashion, with players alternately picking a nonempty open subset of the previous play.
After an infinite sequence of moves, one for each natural number, the game is finished, and player I wins if and only if the intersection of the played sets meets the winning set X.

The game demonstrates several connections between game-theoretic notions, such as the existence of winning strategies, and topological properties, such as the Baire property. Several other notable topological games have also been studied. Many more games have been introduced over the years, to study, among others: the Kuratowski coreduction principle; separation and reduction properties of sets in close projective classes; Luzin sieves; invariant descriptive set theory; Suslin sets; the closed graph theorem; webbed spaces; MP-spaces; the axiom of choice; computable functions. Topological games have also been related to ideas in mathematical logic, model theory, infinitely-long formulas, infinite strings of alternating quantifiers, ultrafilters, partially ordered sets, and the chromatic number of infinite graphs.

For a longer list and a more detailed account see the 1987 survey paper of Telgársky.[6]
https://en.wikipedia.org/wiki/Topological_game
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians.[1] After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.

The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

$a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n = a_0 + x \Big( a_1 + x \big( a_2 + x ( a_3 + \cdots + x ( a_{n-1} + x\,a_n ) \cdots ) \big) \Big).$

This allows the evaluation of a polynomial of degree $n$ with only $n$ multiplications and $n$ additions. This is optimal, since there are polynomials of degree $n$ that cannot be evaluated with fewer arithmetic operations.[2]

Alternatively, Horner's method and Horner–Ruffini method also refer to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.

Given the polynomial $p(x) = \sum_{i=0}^{n} a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n,$ where $a_0, \ldots, a_n$ are constant coefficients, the problem is to evaluate the polynomial at a specific value $x_0$ of $x$. For this, a new sequence of constants is defined recursively as follows:

$b_n = a_n,$
$b_{n-1} = a_{n-1} + b_n x_0,$
$\ \ \vdots$
$b_0 = a_0 + b_1 x_0.$

Then $b_0$ is the value of $p(x_0)$. To see why this works, the polynomial can be written in the form

$p(x) = a_0 + x \Big( a_1 + x \big( a_2 + x ( a_3 + \cdots + x ( a_{n-1} + x\,a_n ) \cdots ) \big) \Big).$

Thus, by iteratively substituting the $b_i$ into the expression,

$p(x_0) = a_0 + x_0 \Big( a_1 + x_0 \big( a_2 + \cdots + x_0 ( a_{n-1} + b_n x_0 ) \cdots \big) \Big) = a_0 + x_0 \Big( a_1 + x_0 \big( a_2 + \cdots + x_0 b_{n-1} \big) \Big) = \cdots = a_0 + x_0 b_1 = b_0.$

Now, it can be proven that

$p(x) = b_0 + (x - x_0) \left( b_1 + b_2 x + \cdots + b_{n-1} x^{n-2} + b_n x^{n-1} \right).$

This expression constitutes Horner's practical application, as it offers a very quick way of determining the quotient of $p(x)$ on division by $(x - x_0)$, with $b_0$ (which is equal to $p(x_0)$) being the division's remainder, as is demonstrated by the examples below. If $x_0$ is a root of $p(x)$, then $b_0 = 0$ (meaning the remainder is 0), which means $(x - x_0)$ can be factored out of $p(x)$. To find the consecutive $b$-values, you start by determining $b_n$, which is simply equal to $a_n$. You then work recursively using the formula $b_{n-1} = a_{n-1} + b_n x_0$ until you arrive at $b_0$.

Evaluate $f(x) = 2x^3 - 6x^2 + 2x - 1$ for $x = 3$.
We use synthetic division as follows:

    3 |  2   -6    2   -1
      |       6    0    6
      |------------------
         2    0    2    5

The entries in the first row are the coefficients of the polynomial to be evaluated. The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. Then the remainder of $f(x)$ on division by $x - 3$ is 5. But by the polynomial remainder theorem, we know that the remainder is $f(3)$. Thus, $f(3) = 5$.

In this example, if $a_3 = 2, a_2 = -6, a_1 = 2, a_0 = -1$, we can see that $b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5$, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.

As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of $f(x)$ on division by $x - 3$. The remainder is 5. This makes Horner's method useful for polynomial long division.

Divide $x^3 - 6x^2 + 11x - 6$ by $x - 2$:

    2 |  1   -6   11   -6
      |       2   -8    6
      |------------------
         1   -4    3    0

The quotient is $x^2 - 4x + 3$.

Let $f_1(x) = 4x^4 - 6x^3 + 3x - 5$ and $f_2(x) = 2x - 1$. Divide $f_1(x)$ by $f_2(x)$ using Horner's method:

      |  4   -6    0    3   -5
      |       2   -2   -1    1
      |-----------------------
         2   -2   -1    1   -4

The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is

$\frac{f_1(x)}{f_2(x)} = 2x^3 - 2x^2 - x + 1 - \frac{4}{2x - 1}.$

Evaluation using the monomial form of a degree-$n$ polynomial requires at most $n$ additions and $(n^2 + n)/2$ multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to $n$ additions and $2n - 1$ multiplications by evaluating the powers of $x$ by iteration. If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately $2n$ times the number of bits of $x$: the evaluated polynomial has approximate magnitude $x^n$, and one must also store $x^n$ itself. By contrast, Horner's method requires only $n$ additions and $n$ multiplications, and its storage requirements are only $n$ times the number of bits of $x$. Alternatively, Horner's method can be computed with $n$ fused multiply–adds. Horner's method can also be extended to evaluate the first $k$ derivatives of the polynomial with $kn$ additions and multiplications.[3]

Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal.[4] Victor Pan proved in 1966 that the number of multiplications is minimal.[5] However, when $x$ is a matrix, Horner's method is not optimal.
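The $b_i$ recurrence and the synthetic-division tableaux above translate directly into a few lines of code. The sketch below is an illustration (not part of the original article): it evaluates a polynomial at $x_0$ and, for free, returns the quotient coefficients and the remainder of division by $(x - x_0)$.

```python
def horner(coeffs, x0):
    """Evaluate p at x0 by Horner's rule.

    coeffs are [a_n, ..., a_1, a_0], highest degree first.
    Returns (quotient, remainder), where quotient holds the
    coefficients [b_n, ..., b_1] of p(x) divided by (x - x0)
    and the remainder b_0 equals p(x0).
    """
    b = coeffs[0]
    values = [b]
    for a in coeffs[1:]:
        b = a + b * x0            # b_{k-1} = a_{k-1} + b_k * x0
        values.append(b)
    return values[:-1], values[-1]

# f(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3: quotient 2x^2 + 2, remainder 5
print(horner([2, -6, 2, -1], 3))   # ([2, 0, 2], 5)

# (x^3 - 6x^2 + 11x - 6) / (x - 2) = x^2 - 4x + 3, remainder 0
print(horner([1, -6, 11, -6], 2))  # ([1, -4, 3], 0)
```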
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-$n$ polynomial can be evaluated using only $\lfloor n/2 \rfloor + 2$ multiplications and $n$ additions.[6]

A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction-level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

$p(x) = \sum_{i=0}^{n} a_i x^i = \left( a_0 + a_2 x^2 + a_4 x^4 + \cdots \right) + x \left( a_1 + a_3 x^2 + a_5 x^4 + \cdots \right) = \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i} x^{2i} + x \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i+1} x^{2i} = p_0(x^2) + x\,p_1(x^2).$

More generally, the summation can be broken into $k$ parts:

$p(x) = \sum_{i=0}^{n} a_i x^i = \sum_{j=0}^{k-1} x^j \sum_{i=0}^{\lfloor n/k \rfloor} a_{ki+j} x^{ki} = \sum_{j=0}^{k-1} x^j p_j(x^k),$

where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows $k$-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math.[citation needed] Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
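As an illustration of the even–odd split $p(x) = p_0(x^2) + x\,p_1(x^2)$, the sketch below (not from the article) makes the two inner Horner evaluations independent of each other, so that a compiler or SIMD unit could execute them in parallel:

```python
def _horner(coeffs, x):
    """Plain Horner's rule; coeffs ordered highest degree first."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

def horner_even_odd(coeffs, x):
    """Evaluate p(x) as p0(x^2) + x * p1(x^2).

    coeffs ordered lowest degree first: [a_0, a_1, ..., a_n].
    The two sub-evaluations share no data, so they may run in
    parallel (e.g. in two SIMD lanes).
    """
    x2 = x * x
    even = _horner(coeffs[0::2][::-1], x2)   # a_0 + a_2 x^2 + a_4 x^4 + ...
    odd = _horner(coeffs[1::2][::-1], x2)    # a_1 + a_3 x^2 + a_5 x^4 + ...
    return even + x * odd

# p(x) = 1 + 2x + 3x^2 + 4x^3 at x = 5: 1 + 10 + 75 + 500 = 586
print(horner_even_odd([1, 2, 3, 4], 5))      # 586
```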
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) $a_i = 1$, and $x = 2$. Then, $x$ (or $x$ to some power) is repeatedly factored out. In this binary numeral system (base 2), $x = 2$, so powers of 2 are repeatedly factored out.

For example, to find the product of the two numbers $(0.15625)$ and $m$:

$(0.15625) m = (0.00101_b) m = \left( 2^{-3} + 2^{-5} \right) m = \left( 2^{-3} \right) m + \left( 2^{-5} \right) m = 2^{-3} \left( m + \left( 2^{-2} \right) m \right) = 2^{-3} \left( m + 2^{-2} (m) \right).$

To find the product of two binary numbers d and m:

In general, for a binary number with bit values ($d_3 d_2 d_1 d_0$) the product is

$(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0) m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.$

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

$= d_0 \left( m + 2 \frac{d_1}{d_0} \left( m + 2 \frac{d_2}{d_1} \left( m + 2 \frac{d_3}{d_2} (m) \right) \right) \right).$

The denominators all equal one (or the term is absent), so this reduces to

$= d_0 \left( m + 2 d_1 \left( m + 2 d_2 \left( m + 2 d_3 (m) \right) \right) \right),$

or equivalently (as consistent with the "method" described above)

$= d_3 \left( m + 2^{-1} d_2 \left( m + 2^{-1} d_1 \left( m + d_0 (m) \right) \right) \right).$

In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor $2^{-1}$ is a right arithmetic shift, a $2^0$ results in no operation (since $2^0 = 1$ is the multiplicative identity element), and a $2^1$ results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.

The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.[7]

Horner's method can be used to convert between different positional numeral systems, in which case $x$ is the base of the number system and the $a_i$ coefficients are the digits of the base-$x$ representation of a given number, and can also be used if $x$ is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.[8]
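The shift-and-add loop described above can be sketched compactly. The following is an illustration only (Python stands in for the microcontroller's shift-and-accumulate, and an integer bit pattern is used rather than the article's fractional constant): processing the bits of a multiplier from most significant to least, the accumulator is doubled by a left shift and $m$ is added whenever the current bit is set.

```python
def multiply_shift_add(d, m):
    """Compute d * m using only shifts and adds (Horner's rule in base 2).

    Walks the bits of d from most significant to least:
    acc = (...((d_k * m) << 1) + d_{k-1} * m ...) << 1 + d_0 * m.
    """
    acc = 0
    for bit in bin(d)[2:]:        # bits of d, MSB first
        acc <<= 1                 # multiply the accumulator by 2
        if bit == "1":
            acc += m              # add m when the current bit is set
    return acc

print(multiply_shift_add(0b1101, 9))   # 13 * 9 = 117
```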
Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial $p_n(x)$ of degree $n$ with zeros $z_n < z_{n-1} < \cdots < z_1,$ make some initial guess $x_0$ such that $z_1 < x_0$. Now iterate the following two steps: first, use Newton's method to find the largest zero of $p_n(x)$, evaluating the polynomial and its derivative by Horner's rule; second, divide $p_n(x)$ by the corresponding linear factor using Horner's method, obtaining a reduced polynomial $p_{n-1}(x)$ of degree $n - 1$. These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method, but using the full polynomial rather than the reduced polynomials.[9]

Consider the polynomial $p_6(x) = (x+8)(x+5)(x+3)(x-2)(x-3)(x-7),$ which can be expanded to $p_6(x) = x^6 + 4x^5 - 72x^4 - 214x^3 + 1127x^2 + 1602x - 5040.$

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found, as shown in black in the figure to the right. Next $p(x)$ is divided by $(x - 7)$ to obtain $p_5(x) = x^5 + 11x^4 + 5x^3 - 179x^2 - 126x + 720,$ which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree-5 polynomial is now divided by $(x - 3)$ to obtain $p_4(x) = x^4 + 14x^3 + 47x^2 - 38x - 240,$ which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain $p_3(x) = x^3 + 16x^2 + 79x + 120,$ which is shown in green and found to have a zero at −3. This polynomial is further reduced to $p_2(x) = x^2 + 13x + 40,$ which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing $p_2(x)$ and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.

Horner's method can be modified to compute the divided difference $(p(y) - p(x))/(y - x).$ Given the polynomial (as before) $p(x) = \sum_{i=0}^{n} a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n,$ proceed as follows:[10]

$b_n = a_n, \qquad d_n = b_n,$
$b_{n-1} = a_{n-1} + b_n x, \qquad d_{n-1} = b_{n-1} + d_n y,$
$\quad \vdots \qquad\qquad\quad \vdots$
$b_1 = a_1 + b_2 x, \qquad d_1 = b_1 + d_2 y,$
$b_0 = a_0 + b_1 x.$

At completion, we have

$p(x) = b_0, \qquad \frac{p(y) - p(x)}{y - x} = d_1, \qquad p(y) = b_0 + (y - x)\,d_1.$

This computation of the divided difference is subject to less round-off error than evaluating $p(x)$ and $p(y)$ separately, particularly when $x \approx y$. Substituting $y = x$ in this method gives $d_1 = p'(x)$, the derivative of $p(x)$.
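The root-finding loop above and the $y = x$ derivative trick combine into a compact root finder. The sketch below is illustrative rather than robust numerics (no safeguards against a vanishing derivative, and deflation accumulates rounding error): it evaluates $p$ and $p'$ with the $b$/$d$ recurrences, runs Newton's method from a guess above the largest root, then deflates with Horner division.

```python
def eval_with_derivative(coeffs, x):
    """Return (p(x), p'(x)) via the b/d recurrences with y = x."""
    b = coeffs[0]
    d = 0.0
    for a in coeffs[1:]:
        d = b + d * x             # d_k = b_k + d_{k+1} * x  ->  p'(x)
        b = a + b * x             # b_k = a_k + b_{k+1} * x  ->  p(x)
    return b, d

def deflate(coeffs, root):
    """Divide p by (x - root) with Horner; drop the (tiny) remainder."""
    out = [coeffs[0]]
    for a in coeffs[1:-1]:
        out.append(a + out[-1] * root)
    return out

def real_roots(coeffs, guess, tol=1e-12, iters=100):
    roots = []
    while len(coeffs) > 2:
        x = guess
        for _ in range(iters):                  # Newton's method
            p, dp = eval_with_derivative(coeffs, x)
            if abs(p) < tol:
                break
            x -= p / dp
        roots.append(x)
        coeffs = deflate(coeffs, x)             # reduce the degree by one
        guess = x                               # the next root lies below
    roots.append(-coeffs[1] / coeffs[0])        # final linear factor
    return roots

p6 = [1, 4, -72, -214, 1127, 1602, -5040]
print([round(r, 6) for r in real_roots(p6, 8.0)])
# roots ~ [7.0, 3.0, 2.0, -3.0, -5.0, -8.0]
```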
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation",[12] was read before the Royal Society of London at its meeting on July 1, 1819, with a sequel in 1823.[12] Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller[13] showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method", and that in consequence the priority for this method should go to Holdred (1820).

Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.

Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:

Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th-century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then-Chinese custom of case studies. Yoshio Mikami, in Development of Mathematics in China and Japan (Leipzig 1913), wrote: "... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."[20]

Ulrich Libbrecht concluded: "It is obvious that this procedure is a Chinese invention ... the method was not known in India." He said Fibonacci probably learned of it from Arabs, who perhaps borrowed it from the Chinese.[21] The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
https://en.wikipedia.org/wiki/Horner_scheme
In computational mathematics, a word problem is the problem of deciding whether two given expressions are equivalent with respect to a set of rewriting identities. A prototypical example is the word problem for groups, but there are many other instances as well. Some deep results of computational theory concern the undecidability of this question in many important cases.[1]

In computer algebra one often wishes to encode mathematical expressions using an expression tree. But there are often multiple equivalent expression trees. The question naturally arises of whether there is an algorithm which, given as input two expressions, decides whether they represent the same element. Such an algorithm is called a solution to the word problem. For example, imagine that $x, y, z$ are symbols representing real numbers; then a relevant solution to the word problem would, given the input $(x \cdot y)/z \overset{?}{=} (x/z) \cdot y$, produce the output EQUAL, and similarly produce NOT_EQUAL from $(x \cdot y)/z \overset{?}{=} (x/x) \cdot y$.

The most direct solution to a word problem takes the form of a normal form theorem and an algorithm which maps every element in an equivalence class of expressions to a single encoding known as the normal form; the word problem is then solved by comparing these normal forms via syntactic equality.[1] For example, one might decide that $x \cdot y \cdot z^{-1}$ is the normal form of $(x \cdot y)/z$, $(x/z) \cdot y$, and $(y/z) \cdot x$, and devise a transformation system to rewrite those expressions to that form, in the process proving that all equivalent expressions will be rewritten to the same normal form.[2] But not all solutions to the word problem use a normal form theorem; there are algebraic properties which indirectly imply the existence of an algorithm.[1]

While the word problem asks whether two terms containing constants are equal, a proper extension of the word problem known as the unification problem asks whether two terms $t_1, t_2$ containing variables have instances that are equal, or in other words whether the equation $t_1 = t_2$ has any solutions. As a common example, $2 + 3 \overset{?}{=} 8 + (-3)$ is a word problem in the integer group $\mathbb{Z}$, while $2 + x \overset{?}{=} 8 + (-x)$ is a unification problem in the same group; since the former terms happen to be equal in $\mathbb{Z}$, the latter problem has the substitution $\{x \mapsto 3\}$ as a solution.

One of the most deeply studied cases of the word problem is in the theory of semigroups and groups. A timeline of papers relevant to the Novikov–Boone theorem is as follows:[3][4]

The accessibility problem for string rewriting systems (semi-Thue systems or semigroups) can be stated as follows: given a semi-Thue system $T := (\Sigma, R)$ and two words (strings) $u, v \in \Sigma^*$, can $u$ be transformed into $v$ by applying rules from $R$? Note that the rewriting here is one-way. The word problem is the accessibility problem for symmetric rewrite relations, i.e. Thue systems.[27]
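The normal-form approach described above can be made concrete for the real-number example. The toy sketch below (an illustration, not part of the article's formal treatment) normalizes each expression over {*, /} to a map from symbols to integer exponents, so that $(x \cdot y)/z$ and $(x/z) \cdot y$ receive the same normal form while $(x/x) \cdot y$ does not:

```python
from collections import Counter

def normalize(expr):
    """Reduce an expression tree over {*, /} and symbols to a normal
    form: a mapping symbol -> exponent. Two expressions denote the same
    value (treating division formally, away from zero denominators) iff
    their normal forms are syntactically equal."""
    if isinstance(expr, str):                 # a symbol such as "x"
        return Counter({expr: 1})
    op, left, right = expr
    result = normalize(left)
    sign = 1 if op == "*" else -1
    for sym, exp in normalize(right).items():
        result[sym] += sign * exp
    return Counter({s: e for s, e in result.items() if e != 0})

def word_problem(e1, e2):
    return "EQUAL" if normalize(e1) == normalize(e2) else "NOT_EQUAL"

# (x*y)/z  vs  (x/z)*y  -- both normalize to {x: 1, y: 1, z: -1}
print(word_problem(("/", ("*", "x", "y"), "z"),
                   ("*", ("/", "x", "z"), "y")))   # EQUAL
# (x*y)/z  vs  (x/x)*y
print(word_problem(("/", ("*", "x", "y"), "z"),
                   ("*", ("/", "x", "x"), "y")))   # NOT_EQUAL
```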
The accessibility and word problems are undecidable, i.e. there is no general algorithm for solving them.[28] This even holds if we limit the systems to have finite presentations, i.e. a finite set of symbols and a finite set of relations on those symbols.[27] Even the word problem restricted to ground terms is not decidable for certain finitely presented semigroups.[29][30]

Given a presentation $\langle S \mid \mathcal{R} \rangle$ for a group G, the word problem is the algorithmic problem of deciding, given as input two words in S, whether they represent the same element of G. The word problem is one of three algorithmic problems for groups proposed by Max Dehn in 1911. It was shown by Pyotr Novikov in 1955 that there exists a finitely presented group G such that the word problem for G is undecidable.[31]

One of the earliest proofs that a word problem is undecidable was for combinatory logic: when are two strings of combinators equivalent? Because combinators encode all possible Turing machines, and the equivalence of two Turing machines is undecidable, it follows that the equivalence of two strings of combinators is undecidable. Alonzo Church observed this in 1936.[32] Likewise, one has essentially the same problem in (untyped) lambda calculus: given two distinct lambda expressions, there is no algorithm which can discern whether they are equivalent or not; equivalence is undecidable. For several typed variants of the lambda calculus, equivalence is decidable by comparison of normal forms.

The word problem for an abstract rewriting system (ARS) is quite succinct: given objects x and y, are they equivalent under $\overset{*}{\leftrightarrow}$?[29] The word problem for an ARS is undecidable in general. However, there is a computable solution for the word problem in the specific case where every object reduces to a unique normal form in a finite number of steps (i.e. the system is convergent): two objects are equivalent under $\overset{*}{\leftrightarrow}$ if and only if they reduce to the same normal form.[33] The Knuth–Bendix completion algorithm can be used to transform a set of equations into a convergent term rewriting system.

In universal algebra one studies algebraic structures consisting of a generating set A, a collection of operations on A of finite arity, and a finite set of identities that these operations must satisfy. The word problem for an algebra is then to determine, given two expressions (words) involving the generators and operations, whether they represent the same element of the algebra modulo the identities. The word problems for groups and semigroups can be phrased as word problems for algebras.[1]

The word problem on free Heyting algebras is difficult.[34] The only known results are that the free Heyting algebra on one generator is infinite, and that the free complete Heyting algebra on one generator exists (and has one more element than the free Heyting algebra).

The word problem on free lattices and more generally free bounded lattices has a decidable solution. Bounded lattices are algebraic structures with the two binary operations ∨ and ∧ and the two constants (nullary operations) 0 and 1. The set of all well-formed expressions that can be formulated using these operations on elements from a given set of generators X will be called W(X). This set of words contains many expressions that turn out to denote equal values in every lattice. For example, if a is some element of X, then a ∨ 1 = 1 and a ∧ 1 = a.
The word problem for free bounded lattices is the problem of determining which of these elements of W(X) denote the same element in the free bounded lattice FX, and hence in every bounded lattice.

The word problem may be resolved as follows. A relation ≤~ on W(X) may be defined inductively by setting w ≤~ v if and only if one of the following holds:

This defines a preorder ≤~ on W(X), so an equivalence relation can be defined by w ~ v when w ≤~ v and v ≤~ w. One may then show that the partially ordered quotient set W(X)/~ is the free bounded lattice FX.[35][36] The equivalence classes of W(X)/~ are the sets of all words w and v with w ≤~ v and v ≤~ w. Two well-formed words v and w in W(X) denote the same value in every bounded lattice if and only if w ≤~ v and v ≤~ w; the latter conditions can be effectively decided using the above inductive definition. An example computation shows that the words x∧z and x∧z∧(x∨y) denote the same value in every bounded lattice. The case of lattices that are not bounded is treated similarly, omitting rules 2 and 3 in the above construction of ≤~.

Bläsius and Bürckert[37] demonstrate the Knuth–Bendix algorithm on an axiom set for groups. The algorithm yields a confluent and noetherian term rewrite system that transforms every term into a unique normal form.[38] The rewrite rules are numbered non-contiguously, since some rules became redundant and were deleted during the algorithm run. The equality of two terms follows from the axioms if and only if both terms are transformed into literally the same normal form term. For example, two terms that both rewrite to the normal form $1$ are thereby equal in every group. As another example, the terms $1 \cdot (a \cdot b)$ and $b \cdot (1 \cdot a)$ have the normal forms $a \cdot b$ and $b \cdot a$, respectively. Since the normal forms are literally different, the original terms cannot be equal in every group. In fact, they are usually different in non-abelian groups.
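The inductive test for w ≤~ v can be sketched in code. Note that the following is a reconstruction under an assumption: it implements the standard Whitman-style case analysis for free bounded lattices (the article's numbered rules are not spelled out above), with generators as strings, the bounds as "0" and "1", and composite words as tagged tuples.

```python
def leq(w, v):
    """Decide w <=~ v in the free bounded lattice (Whitman-style rules,
    an assumed reconstruction; see the surrounding text)."""
    if w == "0" or v == "1":
        return True
    if isinstance(w, tuple) and w[0] == "or":     # w1 v w2 <= v  iff both
        return leq(w[1], v) and leq(w[2], v)
    if isinstance(v, tuple) and v[0] == "and":    # w <= v1 ^ v2  iff both
        return leq(w, v[1]) and leq(w, v[2])
    if isinstance(w, tuple) and w[0] == "and":    # w1 ^ w2 <= v  if either
        if leq(w[1], v) or leq(w[2], v):
            return True
    if isinstance(v, tuple) and v[0] == "or":     # w <= v1 v v2  if either
        if leq(w, v[1]) or leq(w, v[2]):
            return True
    return w == v                                 # two generators

def equivalent(w, v):
    return leq(w, v) and leq(v, w)

xz = ("and", "x", "z")
xz_xy = ("and", xz, ("or", "x", "y"))
# x ^ z equals x ^ z ^ (x v y) in every bounded lattice:
print(equivalent(xz, xz_xy))   # True
```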
https://en.wikipedia.org/wiki/Word_problem_(mathematics)
A binary-safe function is one that treats its input as a raw stream of bytes and ignores every textual aspect it may have. The term is mainly used in the PHP programming language to describe expected behaviour when passing binary data into functions whose main responsibility is text and string manipulation, and is used widely in the official PHP documentation.[1]

While all textual data can be represented in binary form, it must be done so through character encoding. In addition, how newlines are represented may vary depending on the platform used: Windows, Linux and macOS all represent newlines differently in binary form. This means that reading a file as binary data, parsing it as text and then writing it back to disk (thus reconverting it back to binary form) may result in a different binary representation than the one originally used.

Most programming languages let the programmer decide whether to parse the contents of a file as text, or read it as binary data. To convey this intent, special flags or different functions exist when reading or writing files to disk. For example, in the PHP, C, and C++ programming languages, developers have to use fopen($filename, "rb") instead of fopen($filename, "r") to read the file as a binary stream instead of interpreting the textual data as such. This may also be referred to as reading in 'binary safe' mode.
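The same distinction exists in Python, which makes for a compact demonstration (an illustrative sketch, not part of the article): text mode translates platform newlines, while binary mode returns the exact bytes on disk.

```python
# Write two lines with Windows-style CRLF line endings, byte for byte.
with open("demo.txt", "wb") as f:
    f.write(b"line one\r\nline two\r\n")

# Text mode ("r") decodes the bytes and, by default, translates newlines:
with open("demo.txt", "r") as f:
    text = f.read()
print(repr(text))    # 'line one\nline two\n'  -- the \r bytes are gone

# Binary mode ("rb") is binary-safe: the raw bytes come back unchanged.
with open("demo.txt", "rb") as f:
    data = f.read()
print(repr(data))    # b'line one\r\nline two\r\n'

# Round-tripping through text mode may therefore alter the file, whereas
# reading and writing in binary mode preserves it exactly.
```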
https://en.wikipedia.org/wiki/Binary-safe
Some branches of economics and game theory deal with indivisible goods, discrete items that can be traded only as a whole. For example, in combinatorial auctions there is a finite set of items, and every agent can buy a subset of the items, but an item cannot be divided among two or more agents.

It is usually assumed that every agent assigns subjective utility to every subset of the items. This can be represented in one of two ways: as a cardinal utility function, which assigns a numeric utility $u(A)$ to every subset $A$ of items, or as an ordinal preference relation $\succeq$ over subsets. A cardinal utility function implies a preference relation: $u(A) > u(B)$ implies $A \succ B$ and $u(A) \geq u(B)$ implies $A \succeq B$. Utility functions can have several properties.[1]

Monotonicity means that an agent always (weakly) prefers to have extra items. Formally, for every two sets $A \subseteq B$: $u(A) \leq u(B)$. Monotonicity is equivalent to the free disposal assumption: if an agent may always discard unwanted items, then extra items can never decrease the utility.

Additivity (also called linearity or modularity) means that "the whole is equal to the sum of its parts." That is, the utility of a set of items is the sum of the utilities of each item separately. This property is relevant only for cardinal utility functions. It says that for every set $A$ of items, $u(A) = \sum_{x \in A} u(\{x\}),$ assuming that $u(\emptyset) = 0$. In other words, $u$ is an additive function. An equivalent definition is: for any sets of items $A$ and $B$, $u(A) + u(B) = u(A \cup B) + u(A \cap B).$ An additive utility function is characteristic of independent goods. For example, an apple and a hat are considered independent: the utility a person receives from having an apple is the same whether or not he has a hat, and vice versa. A typical utility function for this case is given at the right.

Submodularity means that "the whole is not more than the sum of its parts (and may be less)." Formally, for all sets $A$ and $B$: $u(A \cup B) + u(A \cap B) \leq u(A) + u(B).$ In other words, $u$ is a submodular set function. An equivalent property is diminishing marginal utility, which means that for any sets $A$ and $B$ with $A \subseteq B$, and every $x \notin B$:[2] $u(A \cup \{x\}) - u(A) \geq u(B \cup \{x\}) - u(B).$ A submodular utility function is characteristic of substitute goods. For example, an apple and a bread loaf can be considered substitutes: the utility a person receives from eating an apple is smaller if he has already eaten bread (and vice versa), since he is less hungry in that case. A typical utility function for this case is given at the right.

Supermodularity is the opposite of submodularity: it means that "the whole is not less than the sum of its parts (and may be more)". Formally, for all sets $A$ and $B$: $u(A \cup B) + u(A \cap B) \geq u(A) + u(B).$ In other words, $u$ is a supermodular set function. An equivalent property is increasing marginal utility, which means that for all sets $A$ and $B$ with $A \subseteq B$, and every $x \notin B$: $u(B \cup \{x\}) - u(B) \geq u(A \cup \{x\}) - u(A).$ A supermodular utility function is characteristic of complementary goods. For example, an apple and a knife can be considered complementary: the utility a person receives from an apple is larger if he already has a knife (and vice versa), since it is easier to eat an apple after cutting it with a knife. A possible utility function for this case is given at the right.

A utility function is additive if and only if it is both submodular and supermodular.
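These definitions can be checked mechanically for a small item set. The sketch below is illustrative (the items and numeric utilities are made up): it enumerates all pairs of subsets and tests submodularity, supermodularity, and hence additivity.

```python
from itertools import chain, combinations

ITEMS = {"apple", "bread", "knife"}

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

ALL = subsets(ITEMS)

def is_submodular(u):
    return all(u[a | b] + u[a & b] <= u[a] + u[b] + 1e-9
               for a in ALL for b in ALL)

def is_supermodular(u):
    return all(u[a | b] + u[a & b] >= u[a] + u[b] - 1e-9
               for a in ALL for b in ALL)

def is_additive(u):
    return is_submodular(u) and is_supermodular(u)

# A hypothetical utility: bread substitutes for the apple, while the
# knife complements it, so u is neither submodular nor supermodular.
u = {}
for s in ALL:
    val = 4 * ("apple" in s) + 4 * ("bread" in s)
    if "apple" in s and "bread" in s:
        val -= 2   # substitutes: the second staple adds less
    if "apple" in s and "knife" in s:
        val += 2   # complements: the knife makes the apple better
    u[s] = val

print(is_submodular(u), is_supermodular(u), is_additive(u))  # False False False
```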
Subadditivity means that for every pair of disjoint sets $A, B$: $u(A \cup B) \leq u(A) + u(B).$ In other words, $u$ is a subadditive set function.

Assuming $u(\emptyset)$ is non-negative, every submodular function is subadditive. However, there are non-negative subadditive functions that are not submodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table on the right describes a utility function that is subadditive but not submodular.

Superadditivity means that for every pair of disjoint sets $A, B$: $u(A \cup B) \geq u(A) + u(B).$ In other words, $u$ is a superadditive set function.

Assuming $u(\emptyset)$ is non-positive, every supermodular function is superadditive. However, there are non-negative superadditive functions that are not supermodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table on the right describes a utility function that is non-negative and superadditive but not supermodular.

A utility function with $u(\emptyset) = 0$ is said to be additive if and only if it is both superadditive and subadditive. With the typical assumption that $u(\emptyset) = 0$, every submodular function is subadditive and every supermodular function is superadditive. Without any assumption on the utility from the empty set, these relations do not hold.

In particular, if a submodular function is not subadditive, then $u(\emptyset)$ must be negative. For example, suppose there are two items, $X, Y$, with $u(\emptyset) = -1$, $u(\{X\}) = u(\{Y\}) = 1$ and $u(\{X, Y\}) = 3$. This utility function is submodular and supermodular and non-negative except on the empty set, but is not subadditive, since $u(\{X, Y\}) = 3 > 2 = u(\{X\}) + u(\{Y\}).$ Also, if a supermodular function is not superadditive, then $u(\emptyset)$ must be positive. Suppose instead that $u(\emptyset) = u(\{X\}) = u(\{Y\}) = u(\{X, Y\}) = 1$. This utility function is non-negative, supermodular, and submodular, but is not superadditive, since $u(\{X, Y\}) = 1 < 2 = u(\{X\}) + u(\{Y\}).$

Unit demand (UD) means that the agent only wants a single good. If the agent gets two or more goods, he uses the one of them that gives him the highest utility, and discards the rest. Formally, for every nonempty set $A$: $u(A) = \max_{x \in A} u(\{x\}).$ A unit-demand function is an extreme case of a submodular function. It is characteristic of goods that are pure substitutes. For example, if there are an apple and a pear, and an agent wants to eat a single fruit, then his utility function is unit-demand, as exemplified in the table at the right.

Gross substitutes (GS) means that the agent regards the items as substitute goods or independent goods but not complementary goods. There are many formal definitions of this property, all of which are equivalent. See Gross substitutes (indivisible items) for more details. Hence the following relations hold between the classes: See diagram on the right.

A utility function describes the happiness of an individual. Often, we need a function that describes the happiness of an entire society. Such a function is called a social welfare function, and it is usually an aggregate function of two or more utility functions. If the individual utility functions are additive, then the following is true for the aggregate functions:
https://en.wikipedia.org/wiki/Utility_functions_on_indivisible_goods#Aggregates_of_utility_functions
In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsistent,[a] but three major types can be distinguished:[1] a generative model, which models the joint probability distribution $P(X, Y)$; a conditional (discriminative) model, which models the conditional probability $P(Y \mid X = x)$ of the target given an observation; and a distribution-free classifier, which computes a label directly from the observation without modelling any probability distribution.

The distinction between these last two classes is not consistently made;[5] Jebara (2004) refers to these three classes as generative learning, conditional learning, and discriminative learning, but Ng & Jordan (2002) only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes.[6] Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model.

Standard examples of each, all of which are linear classifiers, are: naive Bayes and linear discriminant analysis (generative classifiers); logistic regression (discriminative model); and the perceptron and support-vector machine (distribution-free classifiers).

In application to classification, one wishes to go from an observation x to a label y (or probability distribution on labels). One can compute this directly, without using a probability distribution (distribution-free classifier); one can estimate the probability of a label given an observation, $P(Y \mid X = x)$ (discriminative model), and base classification on that; or one can estimate the joint distribution $P(X, Y)$ (generative model), from that compute the conditional probability $P(Y \mid X = x)$, and then base classification on that. These are increasingly indirect, but increasingly probabilistic, allowing more domain knowledge and probability theory to be applied. In practice different approaches are used, depending on the particular problem, and hybrids can combine strengths of multiple approaches.

An alternative division defines these symmetrically as: a generative model, a model of the conditional probability of the observable X given a target y, $P(X \mid Y = y)$; and a discriminative model, a model of the conditional probability of the target Y given an observation x, $P(Y \mid X = x)$.

Regardless of precise definition, the terminology is constitutional because a generative model can be used to "generate" random instances (outcomes), either of an observation and target $(x, y)$, or of an observation x given a target value y,[3] while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variable Y, given an observation x.[4] The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. (The term "discriminative classifier" becomes a pleonasm when "discrimination" is equivalent to "classification".)

The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers.

In application to classification, the observable X is frequently a continuous variable, the target Y is generally a discrete variable consisting of a finite set of labels, and the conditional probability $P(Y \mid X)$ can also be interpreted as a (non-deterministic) target function $f \colon X \to Y$, considering X as inputs and Y as outputs. Given a finite set of labels, the two definitions of "generative model" are closely related.
A model of the conditional distribution $P(X \mid Y = y)$ is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values $P(Y)$, together with the distribution of observations given a label, $P(X \mid Y)$; symbolically, $P(X, Y) = P(X \mid Y)\,P(Y).$ Thus, while a model of the joint probability distribution is more informative than a model of the distributions of each label (which lack the labels' relative frequencies), it is a relatively small step, hence these are not always distinguished.

Given a model of the joint distribution, $P(X, Y)$, the distribution of the individual variables can be computed as the marginal distributions $P(X) = \sum_y P(X, Y = y)$ and $P(Y) = \int_x P(Y, X = x)\,dx$ (considering X as continuous, hence integrating over it, and Y as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: $P(X \mid Y) = P(X, Y)/P(Y)$ and $P(Y \mid X) = P(X, Y)/P(X)$.

Given a model of one conditional probability, and estimated probability distributions for the variables X and Y, denoted $P(X)$ and $P(Y)$, one can estimate the opposite conditional probability using Bayes' rule. For example, given a generative model for $P(X \mid Y)$, one can estimate $P(Y \mid X) = \frac{P(X \mid Y)\,P(Y)}{P(X)},$ and given a discriminative model for $P(Y \mid X)$, one can estimate $P(X \mid Y) = \frac{P(Y \mid X)\,P(X)}{P(Y)}.$ Note that Bayes' rule (computing one conditional probability in terms of the other) and the definition of conditional probability (computing conditional probability in terms of the joint distribution) are frequently conflated as well.

A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal. So, discriminative algorithms try to learn $p(y \mid x)$ directly from the data and then try to classify data. On the other hand, generative algorithms try to learn $p(x, y)$, which can be transformed into $p(y \mid x)$ later to classify the data. One of the advantages of generative algorithms is that $p(x, y)$ can be used to generate new data similar to existing data. On the other hand, it has been proved that some discriminative algorithms give better performance than some generative algorithms in classification tasks.[7]

Despite the fact that discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables. But in general, they don't necessarily perform better than generative models at classification and regression tasks. The two classes are seen as complementary or as different views of the same procedure.[8]

With the rise of deep learning, a new family of methods, called deep generative models (DGMs),[9][10] is formed through the combination of generative models and deep neural networks.
An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance.[11] Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models.[9] For example, GPT-3, and its precursor GPT-2,[12] are auto-regressive neural language models that contain billions of parameters; BigGAN[13] and VQ-VAE,[14] which are used for image generation, can have hundreds of millions of parameters; and Jukebox is a very large generative model for musical audio that contains billions of parameters.[15]

Types of generative models are:

If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case.

Suppose the input data is $x \in \{1, 2\}$, the set of labels for $x$ is $y \in \{0, 1\}$, and there are the following 4 data points: $(x, y) = \{(1, 0), (1, 1), (2, 0), (2, 1)\}.$ For the above data, estimating the joint probability distribution $p(x, y)$ from the empirical measure gives $p(x, y) = 1/4$ for each of the four observed pairs, while $p(y \mid x) = 1/2$ for each label, whatever the value of $x$.

Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good"; which is not proper English but which will increasingly approximate it as the table is moved from word pairs to word triplets etc.
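The four-point example can be worked in a few lines. This sketch (illustrative only) tabulates the empirical joint distribution and the conditional distribution derived from it:

```python
from collections import Counter

data = [(1, 0), (1, 1), (2, 0), (2, 1)]

# Empirical joint distribution p(x, y): each observed pair gets mass 1/4.
p_xy = {pair: c / len(data) for pair, c in Counter(data).items()}
print(p_xy)        # {(1, 0): 0.25, (1, 1): 0.25, (2, 0): 0.25, (2, 1): 0.25}

# Marginal p(x), then conditional p(y | x) = p(x, y) / p(x).
p_x = Counter()
for (x, y), p in p_xy.items():
    p_x[x] += p
p_y_given_x = {(y, x): p / p_x[x] for (x, y), p in p_xy.items()}
print(p_y_given_x)  # every conditional probability is 0.5

# A generative approach estimates p_xy and divides; a discriminative
# approach would fit p(y | x) directly, never representing p(x, y).
```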
https://en.wikipedia.org/wiki/Generative_model
Discriminative models, also referred to as conditional models, are a class of models frequently used for classification. They are typically used to solve binary classification problems, i.e. assign labels, such as pass/fail, win/lose, alive/dead or healthy/sick, to existing datapoints.

Types of discriminative models include logistic regression (LR), conditional random fields (CRFs), and decision trees, among many others. Generative model approaches, which use a joint probability distribution instead, include naive Bayes classifiers, Gaussian mixture models, variational autoencoders, generative adversarial networks and others.

Unlike generative modelling, which studies the joint probability $P(x, y)$, discriminative modelling studies $P(y \mid x)$, i.e., it maps a given input $x$ to a class label $y$ based on the observed variables (training samples). For example, in object recognition, $x$ is likely to be a vector of raw pixels (or features extracted from the raw pixels of the image). Within a probabilistic framework, this is done by modeling the conditional probability distribution $P(y \mid x)$, which can be used for predicting $y$ from $x$. Note that there is still a distinction between the conditional model and the discriminative model, though more often they are simply categorised as discriminative models. A conditional model models the conditional probability distribution, while the traditional discriminative model aims to optimize mapping the input to the most similar trained samples.[1]

The following approach is based on the assumption that we are given the training data-set $D = \{(x_i, y_i) \mid i \leq N \in \mathbb{Z}\}$, where $y_i$ is the corresponding output for the input $x_i$.[2] We intend to use a function $f(x)$ to simulate the behavior of what we observed from the training data-set by the linear classifier method. Using the joint feature vector $\phi(x, y)$, the decision function is defined as

$f(x) = \arg\max_y w^T \phi(x, y).$

According to Memisevic's interpretation,[2] $w^T \phi(x, y)$, which is also written $c(x, y; w)$, computes a score which measures the compatibility of the input $x$ with the potential output $y$. Then the $\arg\max$ determines the class with the highest score.

Since the 0-1 loss function is a commonly used one in decision theory, the conditional probability distribution $P(y \mid x; w)$, where $w$ is a parameter vector for optimizing the training data, could be reconsidered for the logistic regression model as

$P(y \mid x; w) = \frac{\exp\!\big(w^T \phi(x, y)\big)}{\sum_{y'} \exp\!\big(w^T \phi(x, y')\big)}.$

The equation above represents logistic regression. Notice that a major distinction between models is their way of introducing posterior probability. The posterior probability is inferred from the parametric model. We can then maximize the parameters by

$w^* = \arg\max_w \prod_{i=1}^{N} P(y_i \mid x_i; w).$

It could also be replaced by the log-loss equation below:

$w^* = \arg\max_w \sum_{i=1}^{N} \log P(y_i \mid x_i; w).$

Since the log-loss is differentiable, a gradient-based method can be used to optimize the model. A global optimum is guaranteed because the objective function is convex. The gradient of the log likelihood is represented by

$\sum_{i=1}^{N} \Big( \phi(x_i, y_i) - E_{p(y \mid x_i; w)}\big[\phi(x_i, y)\big] \Big),$

where $E_{p(y \mid x_i; w)}$ denotes the expectation taken with respect to $p(y \mid x_i; w)$. The above method will provide efficient computation for a relatively small number of classes.
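A minimal sketch of this construction follows (illustrative only; the block-structured feature map, the data, and the plain gradient ascent are assumptions, not the article's prescription): a joint feature vector, the softmax conditional, the arg-max decision rule, and the likelihood gradient as observed features minus their model expectation.

```python
import numpy as np

def phi(x, y, num_classes):
    """Joint feature vector: a copy of x placed in the block of class y."""
    out = np.zeros(num_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def scores(w, x, num_classes):
    return np.array([w @ phi(x, y, num_classes) for y in range(num_classes)])

def p_y_given_x(w, x, num_classes):
    s = scores(w, x, num_classes)
    e = np.exp(s - s.max())                     # numerically stable softmax
    return e / e.sum()

def predict(w, x, num_classes):
    return int(np.argmax(scores(w, x, num_classes)))   # the arg-max rule

def log_likelihood_grad(w, data, num_classes):
    """Gradient: observed features minus their expectation under the model."""
    g = np.zeros_like(w)
    for x, y in data:
        p = p_y_given_x(w, x, num_classes)
        g += phi(x, y, num_classes)
        g -= sum(p[k] * phi(x, k, num_classes) for k in range(num_classes))
    return g

# Tiny two-class example with 2-dimensional inputs.
data = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1)]
w = np.zeros(4)
for _ in range(100):                            # plain gradient ascent
    w += 0.5 * log_likelihood_grad(w, data, 2)
print([predict(w, x, 2) for x, _ in data])      # [0, 1]
```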
Let's say we are given $m$ class labels and $n$ feature variables, $Y: \{y_1, y_2, \ldots, y_m\}$, $X: \{x_1, x_2, \ldots, x_n\}$, as the training samples. A generative model takes the joint probability $P(x, y)$, where $x$ is the input and $y$ is the label, and predicts the most probable known label $\widetilde{y} \in Y$ for the unknown variable $\widetilde{x}$ using Bayes' theorem.[3]

Discriminative models, as opposed to generative models, do not allow one to generate samples from the joint distribution of observed and target variables. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models can yield superior performance (in part because they have fewer variables to compute).[4][5][3] On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily support unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model.

Discriminative models and generative models also differ in how they introduce the posterior probability.[6] To maintain the least expected loss, the misclassification rate should be minimized. In the discriminative model, the posterior probabilities $P(y \mid x)$ are inferred from a parametric model, where the parameters come from the training data. Point estimates of the parameters are obtained from the maximization of likelihood or distribution computation over the parameters. On the other hand, considering that generative models focus on the joint probability, the class posterior probability is obtained through Bayes' theorem.

In repeated experiments in which logistic regression and naive Bayes are applied to binary classification tasks, discriminative learning results in lower asymptotic error, while the generative model reaches its (higher) asymptotic error faster.[3] However, in Ulusoy and Bishop's joint work, Comparison of Generative and Discriminative Techniques for Object Detection and Classification, they state that the above statement is true only when the model is appropriate for the data (i.e. the data distribution is correctly modeled by the generative model).

Significant advantages of using discriminative modeling are:

Compared with the advantages of using generative modeling:

Since both advantages and disadvantages are present in the two ways of modeling, combining both approaches has proved to be good modeling in practice. For example, in Marras' article A Joint Discriminative Generative Model for Deformable Model Construction and Classification,[7] he and his coauthors apply a combination of the two modelings to face classification and receive a higher accuracy than the traditional approach. Similarly, Kelm[8] also proposed a combination of the two modelings for pixel classification in his article Combining Generative and Discriminative Methods for Pixel Classification with Multi-Conditional Learning.

During the process of extracting the discriminative features prior to clustering, principal component analysis (PCA), though commonly used, is not a necessarily discriminative approach.
In contrast, LDA is a discriminative one.[9] Linear discriminant analysis (LDA) provides an efficient way of eliminating the disadvantage listed above. As we know, the discriminative model needs a combination of multiple subtasks before classification, and LDA provides an appropriate solution to this problem by reducing dimension.

Examples of discriminative models include:
https://en.wikipedia.org/wiki/Discriminative_model
The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system. The knowledge is updated with new validation information from the system when available.

The space mapping methodology employs a "quasi-global" formulation that intelligently links companion "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models of different complexities. In engineering design, space mapping aligns a very fast coarse model with the expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment can be done either off-line (model enhancement) or on-the-fly with surrogate updates (e.g., aggressive space mapping).

At the core of the process is a pair of models: one very accurate but too expensive to use directly with a conventional optimization routine, and one significantly less expensive and, accordingly, less accurate. The latter (fast model) is usually referred to as the "coarse" model (coarse space). The former (slow model) is usually referred to as the "fine" model. A validation space ("reality") represents the fine model, for example, a high-fidelity physics model. The optimization space, where conventional optimization is carried out, incorporates the coarse model (or surrogate model), for example, the low-fidelity physics or "knowledge" model. In a space-mapping design optimization phase, there is a prediction or "execution" step, where the results of an optimized "mapped coarse model" (updated surrogate) are assigned to the fine model for validation. After the validation process, if the design specifications are not satisfied, relevant data is transferred to the optimization space ("feedback"), where the mapping-augmented coarse model or surrogate is updated (enhanced, realigned with the fine model) through an iterative optimization process termed "parameter extraction". The mapping formulation itself incorporates "intuition", part of the engineer's so-called "feel" for a problem.[1] In particular, the Aggressive Space Mapping (ASM) process displays key characteristics of cognition (an expert's approach to a problem), and is often illustrated in simple cognitive terms.

Following John Bandler's concept in 1993,[1][2] algorithms have utilized Broyden updates (aggressive space mapping),[3] trust regions,[4] and artificial neural networks.[5] Developments include implicit space mapping,[6] in which we allow preassigned parameters not used in the optimization process to change in the coarse model, and output space mapping, where a transformation is applied to the response of the model. A 2004 paper reviews the state of the art after the first ten years of development and implementation.[7] Tuning space mapping[8] utilizes a so-called tuning model (constructed invasively from the fine model) as well as a calibration process that translates the adjustment of the optimized tuning model parameters into relevant updates of the design variables. The space mapping concept has been extended to neural-based space mapping for large-signal statistical modeling of nonlinear microwave devices.[9][10] Space mapping is supported by sound convergence theory and is related to the defect-correction approach.[11]

A 2016 state-of-the-art review is devoted to aggressive space mapping.[12] It spans two decades of development and engineering applications.
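In pseudocode terms, aggressive space mapping alternates fine-model validation, parameter extraction, and a Broyden-updated step. The toy below is a sketch under invented one-dimensional models (not an engineering code): a cheap "coarse" model is aligned with an "expensive" fine model so that the fine-model response meets the specification without ever optimizing the fine model directly.

```python
import math

def fine(x):                # expensive high-fidelity model (toy stand-in)
    return (x - 0.3) ** 2   # the "real" response, shifted vs. the coarse model

def coarse(x):              # cheap low-fidelity model
    return x ** 2

def extract(x):
    """Parameter extraction: the coarse input whose response matches fine(x)."""
    target = fine(x)
    grid = [i / 1000 for i in range(3001)]   # brute-force 1-D extraction;
    return min(grid, key=lambda z: abs(coarse(z) - target))  # a real code optimizes

spec = 0.5                          # design specification on the response
z_star = math.sqrt(spec)            # coarse-model optimal design

# Aggressive space mapping: drive extract(x) -> z_star with a secant
# (Broyden) update instead of optimizing the fine model directly.
x, b = z_star, 1.0                  # initial design and Jacobian estimate
for _ in range(20):
    r = extract(x) - z_star         # misalignment residual
    if abs(r) < 1e-3:
        break
    x_new = x - r / b               # quasi-Newton step
    b = (extract(x_new) - extract(x)) / (x_new - x)   # Broyden update
    x = x_new

print(round(x, 3), round(fine(x), 3))   # design ~1.007, response ~0.5
```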
A comprehensive 2021 review paper[13] discusses space mapping in the context of radio frequency and microwave design optimization; in the context of engineering surrogate models, feature-based and cognition-driven design; and in the context of machine learning, intuition, and human intelligence.

The space mapping methodology can also be used to solve inverse problems. Proven techniques include the Linear Inverse Space Mapping (LISM) algorithm,[14] as well as the Space Mapping with Inverse Difference (SM-ID) method.[15]

Space mapping optimization belongs to the class of surrogate-based optimization methods,[16] that is to say, optimization methods that rely on a surrogate model.

The space mapping technique has been applied in a variety of disciplines including microwave and electromagnetic design, civil and mechanical applications, aerospace engineering, and biomedical research. Some examples:

Various simulators can be involved in a space mapping optimization and modeling process. Three international workshops have focused significantly on the art, the science and the technology of space mapping.

There is a wide spectrum of terminology associated with space mapping: ideal model, coarse model, coarse space, fine model, companion model, cheap model, expensive model, surrogate model, low-fidelity (resolution) model, high-fidelity (resolution) model, empirical model, simplified physics model, physics-based model, quasi-global model, physically expressive model, device under test, electromagnetics-based model, simulation model, computational model, tuning model, calibration model, surrogate update, mapped coarse model, surrogate optimization, parameter extraction, target response, optimization space, validation space, neuro-space mapping, implicit space mapping, output space mapping, port tuning, predistortion (of design specifications), manifold mapping, defect correction, model management, multi-fidelity models, variable fidelity/variable complexity, multigrid method, coarse grid, fine grid, surrogate-driven, simulation-driven, model-driven, feature-based modeling.
https://en.wikipedia.org/wiki/Space_mapping
In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his work on set theory) are tables of numbers that have certain properties of algebraic and combinatorial interest. They occur in the study of racks and quandles.

For any nonnegative integer n, the n-th Laver table is the $2^n \times 2^n$ table whose entry in the cell at row p and column q ($1 \le p, q \le 2^n$) is defined as[1] $p \star_n q$, where $\star_n$ is the unique binary operation on $\{1, \dots, 2^n\}$ that satisfies the following two equations for all p, q, r:

(1) $p \star_n 1 = (p + 1) \bmod 2^n$

and

(2) $p \star_n (q \star_n r) = (p \star_n q) \star_n (p \star_n r)$.

Note: Equation (1) uses the notation $x \bmod 2^n$ to mean the unique member of $\{1, \dots, 2^n\}$ congruent to x modulo $2^n$. Equation (2) is known as the (left) self-distributive law, and a set endowed with any binary operation satisfying this law is called a shelf. Thus, the n-th Laver table is just the multiplication table for the unique shelf $(\{1, \dots, 2^n\}, \star_n)$ that satisfies Equation (1). The first five Laver tables, i.e. the multiplication tables for the shelves $(\{1, \dots, 2^n\}, \star_n)$ for n = 0, 1, 2, 3, 4, are small enough to write out in full.[2]

There is no known closed-form expression to calculate the entries of a Laver table directly,[3] but Patrick Dehornoy provides a simple algorithm for filling out Laver tables.[4] Looking at just the first row in the n-th Laver table, for n = 0, 1, 2, ..., the entries are seen to be periodic with a period that is always a power of two. The first few periods are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... (sequence A098820 in the OEIS). This sequence is nondecreasing, and in 1995 Richard Laver proved, under the assumption that there exists a rank-into-rank (a large cardinal property), that it actually increases without bound. (It is not known whether this is also provable in ZFC without the additional large-cardinal axiom.)[5] In any case, it grows extremely slowly; Randall Dougherty showed that 32 cannot appear in this sequence (if it ever does) until n > A(9, A(8, A(8, 254))), where A denotes the Ackermann–Péter function.[6]
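Although no closed form is known, the two defining equations yield a simple filling procedure: since $q + 1 = q \star_n 1$ for $q < 2^n$ by Equation (1), self-distributivity gives $p \star_n (q + 1) = (p \star_n q) \star_n (p + 1 \bmod 2^n)$, and the rows can be filled from row $2^n$ downward (each row p only consults rows with larger index, plus the precomputed first column). A minimal Python sketch of this recurrence follows; the function name and table layout are my own choices:

```python
# Build the n-th Laver table using p * 1 = p+1 (mod 2^n) and
# p * (q+1) = (p * q) * (p+1 mod 2^n), filling rows from 2^n down to 1.
def laver_table(n):
    N = 1 << n
    star = [[0] * (N + 1) for _ in range(N + 1)]   # 1-based indices
    for p in range(1, N + 1):
        star[p][1] = p % N + 1                     # column 1: p * 1
    for p in range(N, 0, -1):                      # higher rows are needed first
        for q in range(1, N):
            star[p][q + 1] = star[star[p][q]][star[p][1]]
    return [row[1:] for row in star[1:]]

for row in laver_table(2):
    print(row)
# [2, 4, 2, 4]
# [3, 4, 3, 4]
# [4, 4, 4, 4]
# [1, 2, 3, 4]
```

The first row of each table exhibits the power-of-two periodicity discussed above.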
https://en.wikipedia.org/wiki/Laver_table
Researcher degrees of freedom is a concept referring to the inherent flexibility involved in the process of designing and conducting a scientific experiment, and in analyzing its results. The term reflects the fact that researchers can choose between multiple ways of collecting and analyzing data, and these decisions can be made either arbitrarily or because they, unlike other possible choices, produce a positive and statistically significant result.[1] Researcher degrees of freedom have benefits, such as affording the ability to look at nature from different angles, allowing new discoveries and hypotheses to be generated.[2][3][4] However, researcher degrees of freedom can lead to data dredging and other questionable research practices in which the different interpretations and analyses are taken for granted.[5][6] Their widespread use represents an inherent methodological limitation in scientific research, and contributes to an inflated rate of false-positive findings.[1] They can also lead to overestimated effect sizes.[7]

Though the concept of researcher degrees of freedom has mainly been discussed in the context of psychology, it can affect any scientific discipline.[1][8] Like publication bias, the existence of researcher degrees of freedom has the potential to lead to an inflated degree of funnel plot asymmetry.[9] It is also a potential explanation for p-hacking, as researchers have so many degrees of freedom to draw on, especially in the social and behavioral sciences. Multiverse analysis is a method that helps bring these degrees of freedom to light. Studies with smaller sample sizes are more susceptible to the biasing influence of researcher degrees of freedom.[10]

Steegen et al. (2016) showed how, starting from a single raw data set, applying different reasonable data processing decisions can give rise to a multitude of processed data sets (called the data multiverse), often leading to different statistical results.[11] Wicherts et al. (2016) provided a list of 34 degrees of freedom (DFs) researchers have when conducting psychological research. The DFs listed span every stage of the research process, from formulating a hypothesis to the reporting of results. They include conducting exploratory, hypothesis-free research, which the authors note "...pervades many of the researcher DFs that we describe below in the later phases of the study." Other DFs listed in this paper include the creation of multiple manipulated independent variables and the measurement of additional variables that may be selected for analysis later on.[7]
https://en.wikipedia.org/wiki/Researcher_degrees_of_freedom
Exemption may refer to:
https://en.wikipedia.org/wiki/Exemption_(disambiguation)
In computer programming, array slicing is an operation that extracts a subset of elements from an array and packages them as another array, possibly in a different dimension from the original. Common examples of array slicing are extracting a substring from a string of characters (the "ell" in "hello"), extracting a row or column from a two-dimensional array, or extracting a vector from a matrix. Depending on the programming language, an array slice can be made out of non-consecutive elements. Also depending on the language, the elements of the new array may be aliased to (i.e., share memory with) those of the original array.

For "one-dimensional" (single-indexed) arrays – vectors, sequences, strings, etc. – the most common slicing operation is extraction of zero or more consecutive elements. If we have a vector containing elements (2, 5, 7, 3, 8, 6, 4, 1), and want to create an array slice from the 3rd to the 6th elements, we get (7, 3, 8, 6). In programming languages that use a 0-based indexing scheme, the slice would be from index 2 to 5. Reducing the range of any index to a single value effectively removes the need for that index. This feature can be used, for example, to extract one-dimensional slices (vectors in 3D, including rows, columns, and tubes[1]) or two-dimensional slices (rectangular matrices) from a three-dimensional array. However, since the range can be specified at run-time, type-checked languages may require an explicit (compile-time) notation to actually eliminate the trivial indices.

General array slicing can be implemented (whether or not built into the language) by referencing every array through a dope vector or descriptor – a record that contains the address of the first array element, and then the range of each index and the corresponding coefficient in the indexing formula. This technique also allows immediate array transposition, index reversal, subsampling, etc.; a sketch of the technique appears below. For languages like C, where the indices always start at zero, the dope vector of an array with d indices has at least 1 + 2d parameters. For languages that allow arbitrary lower bounds for indices, like Pascal, the dope vector needs 1 + 3d entries. If the array abstraction does not support true negative indices (as the arrays of Ada and Pascal do), then negative indices for the bounds of the slice for a given dimension are sometimes used to specify an offset from the end of the array in that dimension. In 1-based schemes, -1 generally indicates the second-to-last item, while in a 0-based system, it refers to the very last item.

The concept of slicing was known even before the invention of compilers. Slicing as a language feature probably started with FORTRAN (1957), more as a consequence of non-existent type and range checking than by design. The concept was also alluded to in the preliminary report for the IAL (ALGOL 58), in that the syntax allowed one or more indices of an array element (or, for that matter, of a procedure call) to be omitted when used as an actual parameter. Kenneth Iverson's APL (1957) had very flexible multi-dimensional array slicing, which contributed much to the language's expressive power and popularity. ALGOL 68 (1968) introduced comprehensive multi-dimension array slicing and trimming features. Array slicing facilities have been incorporated in several modern languages, such as Ada, Cobra, D, Fortran 90, Go, Rust, Julia, MATLAB, Perl, Python, S-Lang, Windows PowerShell and the mathematical/statistical languages GNU Octave, S and R. PL/I provides two facilities for array slicing.
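The descriptor ("dope vector") technique described above lends itself to a compact illustration. Below is a toy descriptor-based view in Python; the class, its method names, and the row-major layout are illustrative choices, not a standard API. Slicing, subsampling, and transposition merely rewrite the descriptor, so no elements are copied:

```python
# A toy "dope vector" view: a base buffer plus (offset, shape, strides).
class View:
    def __init__(self, buf, offset, shape, strides):
        self.buf, self.offset = buf, offset
        self.shape, self.strides = shape, strides

    def __getitem__(self, idx):          # idx: one integer per dimension
        pos = self.offset + sum(i * s for i, s in zip(idx, self.strides))
        return self.buf[pos]

    def slice(self, dim, start, stop, step=1):   # half-open, 0-based bounds
        shape, strides = list(self.shape), list(self.strides)
        offset = self.offset + start * strides[dim]
        shape[dim] = (stop - start + step - 1) // step
        strides[dim] *= step
        return View(self.buf, offset, tuple(shape), tuple(strides))

    def transpose(self):                 # swap the two indices of a matrix
        return View(self.buf, self.offset, self.shape[::-1], self.strides[::-1])

# a 3x4 row-major matrix stored flat
m = View(list(range(12)), 0, (3, 4), (4, 1))
print(m[(1, 2)])                          # 6
col = m.slice(1, 2, 3)                    # third column as a 3x1 view
print([col[(r, 0)] for r in range(3)])    # [2, 6, 10]
print(m.transpose()[(2, 1)])              # 6: transposition without copying
```

This mirrors how strided descriptors in, e.g., NumPy or Fortran 90 behave: a slice costs O(1) regardless of how many elements it spans.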
In PL/I, a one-dimensional array Y can be declared to share storage with a cross-section of a two-dimensional array X, so that a reference to Y(2) is a reference to X(2,2), and so on. FORTRAN 66 programmers were only able to take advantage of slicing matrices by row, and then only when passing that row to a subroutine. Note that there is no dope vector in FORTRAN 66, hence the length of the slice must also be passed as an argument – or by some other means – to the SUBROUTINE. 1970s Pascal and C had similar restrictions.

The ALGOL 68 final report contains an early example of slicing: slices are specified by giving a lower and an upper bound for each subscript position. Both bounds are inclusive and can be omitted, in which case they default to the declared array bounds. Neither the stride facility nor diagonal slice aliases are part of the revised report.

HP's HP 2000 systems, introduced in November 1968, used HP Time-Shared BASIC as their primary interface and programming language. This version of BASIC used slicing for most string manipulation operations. One oddity of the language was that it allowed round or square brackets interchangeably; which was used in practice was typically a function of the computer terminal being used. The HP systems were widely used in the early 1970s, especially in technical high schools and many small industrial and scientific settings.[3] As the first microcomputers emerged in the mid-1970s, HP BASIC was often used as the pattern for their BASIC dialects as well. Notable examples include 1977's Apple BASIC, 1978's Atari BASIC, and 1979's Sinclair BASIC. This style of manipulation generally offers advantages in terms of memory use, and was often chosen on systems that shipped with small amounts of memory. Only Sinclair's dialect differed in any meaningful way, using the TO keyword instead of a comma-separated list. Slicing was also selected as the basis for the ANSI Full BASIC standard, using the colon as the separator and thus differentiating between slicing and array access. While this style of access offered a number of advantages, especially for the small machines of the era, sometime after 1970 Digital Equipment Corporation introduced their own variation of BASIC that used the LEFT$, RIGHT$ and MID$ string functions. Microsoft BASIC was written on the PDP-10, and its BASIC was used as the pattern. Through the late 1970s the two styles were both widely used, but by the early 1980s the DEC-style functions were the de facto standard.

The : operator implements the stride syntax (lower_bound:upper_bound[:stride]) by generating a vector: 1:5 evaluates as [1, 2, 3, 4, 5], and 1:9:2 evaluates as [1, 3, 5, 7, 9]. A bare : evaluates the same as 1:end, with end determined by context. Arrays in S and GNU R are always one-based, thus the indices of a new slice begin with one for each dimension, regardless of the previous indices. Dimensions with length of one will be dropped (unless drop = FALSE). Dimension names (where present) will be preserved.

The Fortran 77 standard introduced the ability to slice and concatenate strings. Such strings could be passed by reference to another subroutine; the length would also be passed transparently to the subroutine, as a kind of short dope vector. Ada 83 supports slices for all array types. Like Fortran 77, such arrays could be passed by reference to another subroutine, with the length again passed transparently as a kind of short dope vector. Note: since in Ada indices are n-based, the term Text (2 .. 4) will result in an array with the base index of 2.
As Ada supports true negative indices, as in type History_Data_Array is array (-6000 .. 2010) of History_Data;, it places no special meaning on negative indices. In the example above, the term Some_History_Data (-30 .. 30) would slice the History_Data from 31 BC to 30 AD (since there was no year zero, the year number 0 actually refers to 1 BC).

Perl supports negative list indices: the -1 index is the last element, -2 the penultimate element, etc. Given an array, slices such as the first 3 elements, the middle 3 elements and the last 3 elements can all be expressed directly, and Perl additionally supports slicing based on expressions.

In Python, it is possible to slice a list by using a notation similar to element retrieval. Note that Python allows negative list indices: the index -1 represents the last element, -2 the penultimate element, etc. Python also allows a step property by appending an extra colon and a value; for example, nums[1:5:2] takes every second element of nums[1:5]. The stride syntax was introduced in the second half of the 1990s, as a result of requests put forward by scientific users in the Python "matrix-SIG" (special interest group).[4] Slice semantics potentially differ per object; new semantics can be introduced when operator overloading the indexing operator. With Python standard lists (which are dynamic arrays), every slice is a copy. Slices of NumPy arrays, by contrast, are views onto the same underlying buffer.

In Fortran 90, slices are specified in the form lower_bound:upper_bound[:stride]. Both bounds are inclusive and can be omitted, in which case they default to the declared array bounds; stride defaults to 1.

Each dimension of an array value in Analytica is identified by an Index variable. When slicing or subscripting, the syntax identifies the dimension(s) over which you are slicing or subscripting by naming the dimension. Naming indexes in slicing and subscripting is similar to naming parameters in function calls instead of relying on a fixed sequence of parameters. One advantage of naming indexes in slicing is that the programmer does not have to remember the sequence of indexes in a multidimensional array. A deeper advantage is that expressions generalize automatically and safely without requiring a rewrite when the number of dimensions of an array changes.

In S-Lang, array slicing was introduced in version 1.0; earlier versions did not support this feature. Suppose that A is a 1-d array. Then an array B of the first 5 elements of A may be created as a slice, and B may likewise be assigned an array of the last 5 elements of A; slicing of higher-dimensional arrays works similarly. Array indices can also be arrays of integers. For example, suppose that I = [0:9] is an array of 10 integers. Then A[I] is equivalent to an array of the first 10 elements of A. A practical example of this is a sorting operation.

In D, consider the array a = [2, 5, 7, 3, 8, 6, 4, 1] and take the slice b = a[2 .. 5]: the contents of b will be [7, 3, 8]. The first index of the slice is inclusive, the second is exclusive. A slice such as c = a[4 .. $ - 2] means that the dynamic array c now contains [8, 6], because inside the [] the $ symbol refers to the length of the array. D array slices are aliased to the original array, so assigning b[2] = 10 means that a now has the contents [2, 5, 7, 3, 10, 6, 4, 1]. To create a copy of the array data, instead of only an alias, do b = a[2 .. 5].dup. Unlike Python, D slice bounds don't saturate, so code equivalent to Python's out-of-range slicing (which silently clamps) is an error in D. The programming language SuperCollider implements some concepts from J/APL.
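Returning to Python: the rules described above (negative indices, the stride syntax, and the copy-vs-view distinction against NumPy) can be demonstrated in a few lines, reusing the running example vector (2, 5, 7, 3, 8, 6, 4, 1). This is an illustrative snippet; it requires NumPy for the last part.

```python
import numpy as np

nums = [2, 5, 7, 3, 8, 6, 4, 1]
print(nums[2:6])     # [7, 3, 8, 6] -- 3rd through 6th elements, 0-based half-open
print(nums[-3:])     # [6, 4, 1]    -- negative indices count from the end
print(nums[1:5:2])   # [5, 3]       -- stride syntax: every 2nd element of nums[1:5]

sub = nums[2:5]      # list slices are copies...
sub[0] = 99
print(nums[2])       # 7  -- the original list is unchanged

arr = np.array(nums)
view = arr[2:5]      # ...but NumPy slices are views onto the same buffer
view[0] = 99
print(arr[2])        # 99 -- the original array was modified through the view
```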
Arrays in fish are always one-based, thus the indices of a new slice begin with one, regardless of the previous indices. Cobra supports Python-style slicing: given a list, the first 3 elements, middle 3 elements, and last 3 elements can each be taken as slices. Cobra also supports slicing-style syntax for 'numeric for loops'. Arrays are zero-based in PowerShell and can be defined using the comma operator. Go supports Python-style syntax for slicing (except that negative indices are not supported). Arrays and slices can be sliced: given a slice, the first 3 elements, middle 3 elements, last 3 elements, and a copy of the entire slice can all be expressed. Slices in Go are reference types, which means that different slices may refer to the same underlying array. Cilk Plus supports syntax for array slicing as an extension to C and C++, of the form array_base[lower_bound:length[:stride]]. Cilk Plus's array slicing differs from Fortran's in two ways, most visibly in that the second parameter is the length of the slice rather than its upper bound. Julia array slicing is like that of MATLAB, but uses square brackets; for example, a[2:5] selects the second through fifth elements of a.
https://en.wikipedia.org/wiki/Array_slicing
Computer-aided software engineering (CASE) is a domain of software tools used to design and implement applications. CASE tools are similar to, and are partly inspired by, computer-aided design (CAD) tools used for designing hardware products. CASE tools are intended to help develop high-quality, defect-free, and maintainable software.[1] CASE software was often associated with methods for the development of information systems together with automated tools that could be used in the software development process.[2]

The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development. His Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool, although it predated the term.[3] Another major thread emerged as a logical extension to the data dictionary of a database. By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data, that gave rise to the earlier versions of CASE.[4]

The next entrant into the market was Excelerator from Index Technology in Cambridge, Massachusetts. While DesignAid ran on Convergent Technologies and later Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While, at the time of launch, and for several years, the IBM platform did not support networking or a centralized database as did the Convergent Technologies or Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator were a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instruments' CA Gen and Andersen Consulting's FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP).[5]

CASE tools were at their peak in the early 1990s.[6] According to PC Magazine of January 1990, over 100 companies were offering nearly 200 different CASE tools.[5] At the time, IBM had proposed AD/Cycle, which was an alliance of software vendors centered on IBM's software repository using IBM DB2 on mainframe and OS/2. With the decline of the mainframe, AD/Cycle and the big CASE tools died off, opening the market for the mainstream CASE tools of today. Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools. Most of the various tool vendors added some support for object-oriented methods and tools. In addition, new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation.
Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse tool sets and methods were consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modeling Language (UML) is currently widely accepted as the industry standard for object-oriented modeling.[citation needed]

CASE tools support specific tasks in the software development life-cycle, and can be divided into categories accordingly. Another common way to distinguish CASE tools is the distinction between Upper CASE and Lower CASE. Upper CASE tools support business and analysis modeling. They support traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc. Lower CASE tools support development activities, such as physical design, debugging, construction, testing, component integration, maintenance, and reverse engineering. All other activities span the entire life-cycle and apply equally to upper and lower CASE.[8]

Workbenches integrate two or more CASE tools and support specific software-process activities; hence they achieve a consistent interface and seamless integration of tools. An example workbench is Microsoft's Visual Basic programming environment. It incorporates several development tools: a GUI builder, a smart code editor, a debugger, etc. Most commercial CASE products tended to be such workbenches that seamlessly integrated two or more tools. Workbenches also can be classified in the same manner as tools: as focusing on analysis, development, verification, etc., as well as being focused on upper CASE, lower CASE, or processes such as configuration management that span the complete life-cycle.

An environment is a collection of CASE tools or workbenches that attempts to support the complete software process. This contrasts with tools that focus on one specific task or a specific part of the life-cycle. CASE environments are classified by Fuggetta into several classes, ranging from simple toolkits to integrated and process-centered environments.[9] In practice, the distinction between workbenches and environments was flexible. Visual Basic, for example, was a programming workbench but was also considered a 4GL environment by many. The features that distinguished workbenches from environments were deep integration via a shared repository or common language, and some kind of methodology (integrated and process-centered environments) or domain (4GL) specificity.[9] A number of significant risk factors have been identified for organizations adopting CASE technology.
https://en.wikipedia.org/wiki/Computer-aided_software_engineering
The open music model is an economic and technological framework for the recording industry based on research conducted at the Massachusetts Institute of Technology. It predicts that the playback of prerecorded music will be regarded as a service rather than as individually sold products, and that the only system for the digital distribution of music that will be viable against piracy is a subscription-based system supporting file sharing and free of digital rights management. The research also indicated that US$9 per month for unlimited use would be the market-clearing price at that time, but recommended $5 per month as the long-term optimal price.[1] Since its creation in 2002, a number of its principles have been adopted throughout the recording industry,[2] and it has been cited as the basis for the business model of many music subscription services.[3][4]

The model asserts that there are five necessary requirements for a viable commercial music digital distribution network. It was proposed by Shuman Ghosemajumder in his 2002 research paper Advanced Peer-Based Technology Business Models[1] at the MIT Sloan School of Management, and was the first of several studies that found significant demand for online, open music sharing systems.[5] The following year, it was publicly referred to as the Open Music Model.[6]

The model suggests changing the way consumers interact with the digital property market: rather than being seen as a good to be purchased from online vendors, music would be treated as a service being provided by the industry, with firms based on the model serving as intermediaries between the music industry and its consumers. The model proposed giving consumers unlimited access to music for the price of $5 per month[1] ($9 in 2024), based on research showing that this could be a long-term optimal price, expected to bring in a total revenue of over US$3 billion per year.[1] The research demonstrated the demand for third-party file sharing programs. Insofar as the interest in a particular piece of digital property is high, and the risk of acquiring the good via illegitimate means is low, people will naturally flock towards third-party services such as Napster and Morpheus (more recently, BitTorrent and The Pirate Bay).[1] The research showed that consumers would use file sharing services not primarily due to cost but because of convenience, indicating that services which provided access to the most music would be the most successful.[1] The model predicted the failure of online music distribution systems based on digital rights management.[6][7]

Criticisms of the model included that it would not eliminate the issue of piracy.[8] Others countered that it was in fact the most viable solution to piracy,[9] since piracy was "inevitable".[10] Supporters argued that it offered a superior alternative to the current law-enforcement based methods used by the recording industry.[11] One startup in Germany, Playment, announced plans to adapt the entire model to a commercial setting as the basis for its business model.[12] Several aspects of the model have been adopted by the recording industry and its partners over time. "Why would the big four music companies agree to let Apple and others distribute their music without using DRM systems to protect it? The simplest answer is because DRMs haven't worked, and may never work, to halt music piracy."
https://en.wikipedia.org/wiki/Open_music_model
In computer science, an exponential search (also called doubling search, galloping search, or Struzik search)[1] is an algorithm, created by Jon Bentley and Andrew Chi-Chih Yao in 1976, for searching sorted, unbounded/infinite lists.[2] There are numerous ways to implement this, with the most common being to determine a range that the search key resides in and performing a binary search within that range. This takes $O(\log i)$ time, where $i$ is the position of the search key in the list, if the search key is in the list, or the position where the search key should be, if the search key is not in the list.

Exponential search can also be used to search in bounded lists. It can even out-perform more traditional searches for bounded lists, such as binary search, when the element being searched for is near the beginning of the array. This is because exponential search runs in $O(\log i)$ time, where $i$ is the index of the element being searched for in the list, whereas binary search runs in $O(\log n)$ time, where $n$ is the number of elements in the list.

Exponential search allows for searching through a sorted, unbounded list for a specified input value (the search "key"). The algorithm consists of two stages. The first stage determines a range in which the search key would reside if it were in the list. In the second stage, a binary search is performed on this range. In the first stage, assuming that the list is sorted in ascending order, the algorithm looks for the first exponent $j$ where the value $2^j$ is greater than the search key. This value $2^j$ becomes the upper bound for the binary search, with the previous power of 2, $2^{j-1}$, being the lower bound.[3] In each step, the algorithm compares the search key value with the key value at the current search index. If the element at the current index is smaller than the search key, the algorithm repeats, skipping to the next search index by doubling it, calculating the next power of 2.[3] If the element at the current index is larger than the search key, the algorithm now knows that the search key, if it is contained in the list at all, is located in the interval formed by the previous search index $2^{j-1}$ and the current search index $2^j$. The binary search is then performed, with the result of either a failure, if the search key is not in the list, or the position of the search key in the list.

The first stage of the algorithm takes $O(\log i)$ time, where $i$ is the index where the search key would be in the list. This is because, in determining the upper bound for the binary search, the while loop is executed exactly $\lceil \log i \rceil$ times. Since the list is sorted, after doubling the search index $\lceil \log i \rceil$ times, the algorithm will be at a search index that is greater than or equal to $i$, as $2^{\lceil \log i \rceil} \geq i$. As such, the first stage of the algorithm takes $O(\log i)$ time. The second part of the algorithm also takes $O(\log i)$ time. As the second stage is simply a binary search, it takes $O(\log n)$ time, where $n$ is the size of the interval being searched. The size of this interval would be $2^j - 2^{j-1}$, where, as seen above, $j = \lceil \log i \rceil$.
This means that the size of the interval being searched is $2^{\log i} - 2^{\log i - 1} = 2^{\log i - 1}$. This gives a runtime of $\log(2^{\log i - 1}) = \log(i) - 1 = O(\log i)$. Summing the runtimes of the two stages gives a total runtime of $O(\log i) + O(\log i) = 2\,O(\log i) = O(\log i)$.

Bentley and Yao suggested several variations for exponential search.[2] These variations consist of performing a binary search, as opposed to a unary search, when determining the upper bound for the binary search in the second stage of the algorithm. This splits the first stage of the algorithm into two parts, making the algorithm a three-stage algorithm overall. The new first stage determines a value $j'$, much like before, such that $2^{j'}$ is larger than the search key and $2^{j'/2}$ is lower than the search key. Previously, $j'$ was determined in a unary fashion by calculating the next power of 2 (i.e., adding 1 to $j$). In the variation, it is proposed that $j'$ is doubled instead (e.g., jumping from $2^2$ to $2^4$, as opposed to $2^3$). The first $j'$ such that $2^{j'}$ is greater than the search key forms a much rougher upper bound than before. Once this $j'$ is found, the algorithm moves to its second stage and a binary search is performed on the interval formed by $j'/2$ and $j'$, giving the more accurate upper bound exponent $j$. From here, the third stage of the algorithm performs the binary search on the interval $2^{j-1}$ and $2^j$, as before. The performance of this variation is $\lfloor \log i \rfloor + 2\lfloor \log(\lfloor \log i \rfloor + 1) \rfloor + 1 = O(\log i)$. Bentley and Yao generalize this variation into one where any number, $k$, of binary searches are performed during the first stage of the algorithm, giving the $k$-nested binary search variation. The asymptotic runtime does not change for the variations, running in $O(\log i)$ time, as with the original exponential search algorithm.

Also, a data structure with a tight version of the dynamic finger property can be given when the above result of the $k$-nested binary search is used on a sorted array.[4] Using this, the number of comparisons done during a search is $\log(d) + \log\log(d) + \dots + O(\log^{*} d)$, where $d$ is the difference in rank between the last element that was accessed and the current element being accessed. An algorithm based on exponentially increasing the search band solves global pairwise alignment in $O(ns)$, where $n$ is the length of the sequences and $s$ is the edit distance between them.[5][6]
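A direct Python rendering of the two-stage algorithm follows. The function name and the bounded-list test harness are my own; for a genuinely unbounded list, the len() checks would instead be "read past the end" probes.

```python
# Exponential search: stage 1 doubles the probe index until it overshoots
# the key; stage 2 binary-searches the bracketed range. Returns the index
# of `key` in the sorted list `a`, or -1 if absent.
from bisect import bisect_left

def exponential_search(a, key):
    if not a:
        return -1
    if a[0] == key:
        return 0
    # Stage 1: find the first power of two whose element is >= key.
    bound = 1
    while bound < len(a) and a[bound] < key:
        bound *= 2                        # 1, 2, 4, 8, ...: O(log i) doublings
    # Stage 2: binary search in (bound//2, min(bound, len(a)-1)].
    lo, hi = bound // 2 + 1, min(bound, len(a) - 1)
    pos = bisect_left(a, key, lo, hi + 1)
    return pos if pos <= hi and a[pos] == key else -1

nums = [1, 2, 3, 4, 6, 7, 8]
print(exponential_search(nums, 4))   # 3
print(exponential_search(nums, 5))   # -1
```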
https://en.wikipedia.org/wiki/Exponential_search
Morphological derivation, in linguistics, is the process of forming a new word from an existing word, often by adding a prefix or suffix, such as un- or -ness. For example, unhappy and happiness derive from the root word happy. It is differentiated from inflection, which is the modification of a word to form different grammatical categories without changing its core meaning: determines, determining, and determined are from the root determine.[1]

Derivational morphology often involves the addition of a derivational suffix or other affix. Such an affix usually applies to words of one lexical category (part of speech) and changes them into words of another such category. For example, one effect of the English derivational suffix -ly is to change an adjective into an adverb (slow → slowly); English has many such derivational patterns and suffixes. However, derivational affixes do not necessarily alter the lexical category; they may change merely the meaning of the base and leave the category unchanged. A prefix (write → re-write; lord → over-lord) rarely changes the lexical category in English. The prefix un- applies to adjectives (healthy → unhealthy) and some verbs (do → undo) but rarely to nouns. A few exceptions are the derivational prefixes en- and be-. En- (replaced by em- before labials) is usually a transitive marker on verbs, but it can also be applied to adjectives and nouns to form transitive verbs: circle (verb) → encircle (verb), but rich (adj) → enrich (verb), large (adj) → enlarge (verb), rapture (noun) → enrapture (verb), slave (noun) → enslave (verb). When derivation occurs without any change to the word, such as in the conversion of the noun breakfast into the verb to breakfast, it is known as conversion, or zero derivation. Derivation that results in a noun may be called nominalization. It may involve the use of an affix (such as with employ → employee), or it may occur via conversion (such as with the derivation of the noun run from the verb to run). In contrast, a derivation resulting in a verb may be called verbalization (such as from the noun butter to the verb to butter). Some words have specific exceptions to these patterns. For example, inflammable actually means flammable, and de-evolution is spelled with only one e, as devolution.

Derivation can be contrasted with inflection, in that derivation produces a new word (a distinct lexeme), whereas inflection produces grammatical variants (or forms) of the same word. Generally speaking, inflection applies in more or less regular patterns to all members of a part of speech (for example, nearly every English verb adds -s for the third person singular present tense), while derivation follows less consistent patterns (for example, the nominalizing suffix -ity can be used with the adjectives modern and dense, but not with open or strong). However, derivations and inflections can share homonyms: morphemes that have the same sound, but not the same meaning. For example, when the affix -er is added to an adjective, as in small-er, it acts as an inflection, but when added to a verb, as in cook-er, it acts as a derivation (a toy sketch of this ambiguity appears below).[2] A derivation can produce a lexeme with a different part of speech but does not necessarily. For example, the derivation of the word uncommon from common + un- (a derivational morpheme) does not change its part of speech (both are adjectives). An important distinction between derivational and inflectional morphology lies in the content/function of a listeme.[clarification needed]
Derivational morphology changes both the meaning and the content of a listeme, while inflectional morphology doesn't change the meaning, but changes the function. A non-exhaustive list of derivational morphemes in English: -ful, -able, im-, un-, -ing, -er. A non-exhaustive list of inflectional morphemes in English: -er, -est, -ing, -en, -ed, -s.

Derivation can be contrasted with other types of word formation such as compounding. Derivational affixes are bound morphemes – they are meaningful units, but can only normally occur when attached to another word. In that respect, derivation differs from compounding, by which free morphemes are combined (lawsuit, Latin professor). It also differs from inflection in that inflection does not create new lexemes but new word forms (table → tables; open → opened).

Derivational patterns differ in the degree to which they can be called productive. A productive pattern or affix is one that is commonly used to produce novel forms. For example, the negating prefix un- is more productive in English than the alternative in-; both of them occur in established words (such as unusual and inaccessible), but faced with a new word which does not have an established negation, a native speaker is more likely to create a novel form with un- than with in-. The same thing happens with suffixes. For example, comparing the two words Thatcherite and Thatcherist, the analysis shows that both suffixes -ite and -ist are productive and can be added to proper names; moreover, both derived adjectives are established and have the same meaning. But the suffix -ist is more productive and, thus, can be found more often in word formation, not only from proper names.
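The -er ambiguity described earlier can be modeled, very crudely, as a lookup keyed on the category of the base: the same affix string yields inflection on one category and derivation on another. The rule table below is a deliberate toy simplification for demonstration, not a linguistic resource; all names are hypothetical.

```python
# Toy model: whether an affix acts as inflection or derivation depends on
# the lexical category of the base it attaches to (cf. small-er vs cook-er).
RULES = {
    ("adjective", "er"): ("adjective", "inflection"),   # small -> small-er
    ("verb", "er"):      ("noun", "derivation"),        # cook  -> cook-er
    ("adjective", "ly"): ("adverb", "derivation"),      # slow  -> slow-ly
    ("verb", "s"):       ("verb", "inflection"),        # run   -> run-s
}

def attach(word, pos, affix):
    new_pos, kind = RULES[(pos, affix)]
    return f"{word}-{affix}: {new_pos} ({kind})"

for word, pos, affix in [("small", "adjective", "er"),
                         ("cook", "verb", "er"),
                         ("slow", "adjective", "ly")]:
    print(attach(word, pos, affix))
```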
https://en.wikipedia.org/wiki/Morphological_derivation
Vulnerabilities are flaws or weaknesses in a system's design, implementation, or management that can be exploited by a malicious actor to compromise its security. Despite intentions to achieve complete correctness, virtually all hardware and software contain bugs where the system does not behave as expected. If the bug could enable an attacker to compromise the confidentiality, integrity, or availability of system resources, it is called a vulnerability. Insecure software development practices, as well as design factors such as complexity, can increase the burden of vulnerabilities. Different types of vulnerabilities are most common in different components, such as hardware, operating systems, and applications.

Vulnerability management is a process that includes identifying systems and prioritizing which are most important, scanning for vulnerabilities, and taking action to secure the system. Vulnerability management typically is a combination of remediation (fixing the vulnerability), mitigation (increasing the difficulty or reducing the danger of exploits), and accepting risks that are not economical or practical to eliminate. Vulnerabilities can be scored for risk according to the Common Vulnerability Scoring System or other systems, and added to vulnerability databases. As of November 2024, there are more than 240,000 vulnerabilities[1] catalogued in the Common Vulnerabilities and Exposures (CVE) database.

A vulnerability is initiated when it is introduced into hardware or software. It becomes active and exploitable when the software or hardware containing the vulnerability is running. The vulnerability may be discovered by the vendor or a third party. Disclosing the vulnerability (as a patch or otherwise) is associated with an increased risk of compromise, because attackers often move faster than patches are rolled out. Regardless of whether a patch is ever released to remediate the vulnerability, its lifecycle will eventually end when the system, or older versions of it, fall out of use.

Despite developers' goal of delivering a product that works entirely as intended, virtually all software and hardware contain bugs.[2] If a bug creates a security risk, it is called a vulnerability.[3][4][5] Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days), as well as those that have not been patched, are still liable to exploitation.[6] Vulnerabilities vary in their ability to be exploited by malicious actors,[3] and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system.[7] Although some vulnerabilities can only be used for denial of service attacks, more dangerous ones allow the attacker to inject and run their own code (called malware), without the user being aware of it.[3] Only a minority of vulnerabilities allow for privilege escalation, which is necessary for more severe attacks.[8] Without a vulnerability, the exploit cannot gain access.[9] It is also possible for malware to be installed directly, without an exploit, if the attacker uses social engineering or implants the malware in legitimate software that is downloaded deliberately.[10] Fundamental design factors, such as complexity, can increase the burden of vulnerabilities. Some software development practices can also affect the risk of vulnerabilities being introduced to a code base.
Lack of knowledge about secure software development, or excessive pressure to deliver features quickly, can lead to avoidable vulnerabilities entering production code, especially if security is not prioritized by the company culture. The more complex the system is, the easier it is for vulnerabilities to go undetected. Some vulnerabilities are deliberately planted, which could be for any reason from a disgruntled employee selling access to cyber criminals, to sophisticated state-sponsored schemes to introduce vulnerabilities to software.[15] Inadequate code reviews can lead to missed bugs, but there are also static code analysis tools that can be used as part of code reviews and may find some vulnerabilities.[16] DevOps, a development workflow that emphasizes automated testing and deployment to speed up the deployment of new features, often requires that many developers be granted access to change configurations, which can lead to deliberate or inadvertent inclusion of vulnerabilities.[17] Compartmentalizing dependencies, which is often part of DevOps workflows, can reduce the attack surface by paring down dependencies to only what is necessary.[18] If software as a service is used, rather than the organization's own hardware and software, the organization is dependent on the cloud services provider to prevent vulnerabilities.[19] The National Vulnerability Database classifies vulnerabilities into eight root causes that may be overlapping.[20]

Deliberate security bugs can be introduced during or after manufacturing and cause the integrated circuit not to behave as expected under certain specific circumstances. Testing for security bugs in hardware is quite difficult due to limited time and the complexity of twenty-first century chips,[23] while the globalization of design and manufacturing has increased the opportunity for these bugs to be introduced by malicious actors.[24]

Although operating system vulnerabilities vary depending on the operating system in use, a common problem is privilege escalation bugs that enable the attacker to gain more access than they should be allowed. Open-source operating systems such as Linux and Android have freely accessible source code and allow anyone to contribute, which could enable the introduction of vulnerabilities. However, the same vulnerabilities also occur in proprietary operating systems such as Microsoft Windows and Apple operating systems.[25] All reputable vendors of operating systems provide patches regularly.[26]

Client–server applications are downloaded onto the end user's computers and are typically updated less frequently than web applications. Unlike web applications, they interact directly with a user's operating system, and they exhibit their own common classes of vulnerabilities.[27] Web applications run on many websites.
Because they are inherently less secure than other applications, web applications are a leading source of data breaches and other security incidents.[28][29]

There is little evidence about the effectiveness and cost-effectiveness of different cyberattack prevention measures.[32] Although estimating the risk of an attack is not straightforward, the mean time to breach and expected cost can be considered to determine the priority for remediating or mitigating an identified vulnerability and whether it is cost-effective to do so.[33] Although attention to security can reduce the risk of attack, achieving perfect security for a complex system is impossible, and many security measures have unacceptable cost or usability downsides.[34] For example, reducing the complexity and functionality of the system is effective at reducing the attack surface.[35] Successful vulnerability management usually involves a combination of remediation (closing a vulnerability), mitigation (increasing the difficulty, and reducing the consequences, of exploits), and accepting some residual risk. Often a defense in depth strategy is used for multiple barriers to attack.[36] Some organizations scan for only the highest-risk vulnerabilities, as this enables prioritization in the context of lacking the resources to fix every vulnerability.[37] Increasing expenses is likely to have diminishing returns.[33]

Remediation fixes vulnerabilities, for example by downloading a software patch.[38] Software vulnerability scanners are typically unable to detect zero-day vulnerabilities, but are more effective at finding known vulnerabilities based on a database. These systems can find some known vulnerabilities and advise fixes, such as a patch.[39][40] However, they have limitations, including false positives.[38] Vulnerabilities can only be exploited when they are active: the software in which they are embedded is actively running on the system.[41] Before the code containing the vulnerability is configured to run on the system, it is considered a carrier.[42] Dormant vulnerabilities are in software that is capable of running but is not currently running. Software containing dormant and carrier vulnerabilities can sometimes be uninstalled or disabled, removing the risk.[43] Active vulnerabilities, if distinguished from the other types, can be prioritized for patching.[41]

Vulnerability mitigation consists of measures that do not close the vulnerability, but make it more difficult to exploit or reduce the consequences of an attack.[44] Reducing the attack surface, particularly for parts of the system with root (administrator) access, and closing off opportunities for exploits to engage in privilege escalation, is a common strategy for reducing the harm that a cyberattack can cause.[38] If a patch for third-party software is unavailable, it may be possible to temporarily disable the software.[45] A penetration test attempts to enter the system via an exploit to see if the system is insecure.[46] If a penetration test fails, it does not necessarily mean that the system is secure.[47] Some penetration tests can be conducted with automated software that tests against existing exploits for known vulnerabilities.[48] Other penetration tests are conducted by trained hackers. Many companies prefer to contract out this work as it simulates an outsider attack.[47]

The vulnerability lifecycle begins when vulnerabilities are introduced into hardware or software.[49] Detection of vulnerabilities can be by the software vendor, or by a third party.
In the latter case, it is considered most ethical to immediately disclose the vulnerability to the vendor so it can be fixed.[50] Government or intelligence agencies buy vulnerabilities that have not been publicly disclosed and may use them in an attack, stockpile them, or notify the vendor.[51] As of 2013, the Five Eyes (United States, United Kingdom, Canada, Australia, and New Zealand) captured the plurality of the market, and other significant purchasers included Russia, India, Brazil, Malaysia, Singapore, North Korea, and Iran.[52] Organized criminal groups also buy vulnerabilities, although they typically prefer exploit kits.[53]

Even vulnerabilities that are publicly known or patched are often exploitable for an extended period.[54][55] Security patches can take months to develop,[56] or may never be developed.[55] A patch can have negative effects on the functionality of software,[55] and users may need to test the patch to confirm functionality and compatibility.[57] Larger organizations may fail to identify and patch all dependencies, while smaller enterprises and personal users may not install patches.[55] Research suggests that the risk of cyberattack increases if the vulnerability is made publicly known or a patch is released.[58] Cybercriminals can reverse engineer the patch to find the underlying vulnerability and develop exploits,[59] often faster than users install the patch.[58] Vulnerabilities become deprecated when the software or vulnerable versions fall out of use.[50] This can take an extended period of time; in particular, industrial software may not be feasible to replace even if the manufacturer stops supporting it.[60]

A commonly used scale for assessing the severity of vulnerabilities is the open-source specification Common Vulnerability Scoring System (CVSS). CVSS evaluates the possibility of exploiting the vulnerability and compromising data confidentiality, availability, and integrity. It also considers how the vulnerability could be used and how complex an exploit would need to be. The amount of access needed for exploitation and whether it could take place without user interaction are also factored into the overall score.[61][62] Someone who discovers a vulnerability may disclose it immediately (full disclosure) or wait until a patch has been developed (responsible disclosure, or coordinated disclosure).
The former approach is praised for its transparency, but the drawback is that the risk of attack is likely to be increased after disclosure with no patch available.[63] Some vendors pay bug bounties to those who report vulnerabilities to them.[64][65] Not all companies respond positively to disclosures, as they can cause legal liability and operational overhead.[66] There is no law requiring disclosure of vulnerabilities.[67] If a vulnerability is discovered by a third party that does not disclose it to the vendor or the public, it is called a zero-day vulnerability, often considered the most dangerous type because fewer defenses exist.[68]

The most commonly used vulnerability dataset is Common Vulnerabilities and Exposures (CVE), maintained by Mitre Corporation.[69] As of November 2024, it has over 240,000 entries.[1] This information is shared with other databases, including the United States' National Vulnerability Database,[69] where each vulnerability is given a risk score using the Common Vulnerability Scoring System (CVSS), the Common Platform Enumeration (CPE) scheme, and the Common Weakness Enumeration.[citation needed] CVE and other databases typically do not track vulnerabilities in software-as-a-service products.[39] Submitting a CVE is voluntary for companies that discovered a vulnerability.[67] The software vendor is usually not legally liable for the cost if a vulnerability is used in an attack, which creates an incentive to make cheaper but less secure software.[70] Some companies are covered by laws, such as PCI, HIPAA, and Sarbanes-Oxley, that place legal requirements on vulnerability management.[71]
https://en.wikipedia.org/wiki/Vulnerability_(computing)
Digital cinema is the digital technology used within the film industry to distribute or project motion pictures, as opposed to the historical use of reels of motion picture film, such as 35 mm film. Whereas film reels have to be shipped to movie theaters, a digital movie can be distributed to cinemas in a number of ways: over the Internet or dedicated satellite links, or by sending hard drives or optical discs such as Blu-ray discs. It is then projected using a digital video projector instead of a film projector. Typically, digital movies are shot using digital movie cameras or, in animation, transferred from a file, and are edited using a non-linear editing system (NLE). The NLE is often a video editing application installed on one or more computers that may be networked to access the original footage from a remote server, share or gain access to computing resources for rendering the final video, and allow several editors to work on the same timeline or project. Alternatively, a digital movie could be a film reel that has been digitized using a motion picture film scanner and then restored; or a digital movie could be recorded using a film recorder onto film stock for projection using a traditional film projector.

Digital cinema is distinct from high-definition television and does not necessarily use traditional television or other traditional high-definition video standards, aspect ratios, or frame rates. In digital cinema, resolutions are represented by the horizontal pixel count, usually 2K (2048×1080 or 2.2 megapixels) or 4K (4096×2160 or 8.8 megapixels). The 2K and 4K resolutions used in digital cinema projection are often referred to as DCI 2K and DCI 4K, where DCI stands for Digital Cinema Initiatives. As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection. Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, allowing moviegoers more immersive experiences.

The transition from film to digital video was preceded by cinema's transition from analog to digital audio, with the release of the Dolby Digital (AC-3) audio coding standard in 1991.[1] Its main basis is the modified discrete cosine transform (MDCT), a lossy audio compression algorithm.[2] It is a modification of the discrete cosine transform (DCT) algorithm, which was first proposed by Nasir Ahmed in 1972 and was originally intended for image compression.[3] The DCT was adapted into the MDCT by J. P. Princen, A. W. Johnson and Alan B. Bradley at the University of Surrey in 1987,[4] and then Dolby Laboratories adapted the MDCT algorithm along with perceptual coding principles to develop the AC-3 audio format for cinema needs.[1] Cinema in the 1990s typically combined analog photochemical images with digital audio.

Digital media playback of high-resolution 2K files has at least a 20-year history. Early video data storage units (RAIDs) fed custom frame buffer systems with large memories. In early digital video units, the content was usually restricted to several minutes of material. Transfer of content between remote locations was slow and had limited capacity. It was not until the late 1990s that feature-length films could be sent over the "wire" (Internet or dedicated fiber links).
On October 23, 1998, Digital Light Processing (DLP) projector technology was publicly demonstrated with the release of The Last Broadcast, the first feature-length movie shot, edited and distributed digitally.[5][6][7] In conjunction with Texas Instruments, the movie was publicly demonstrated in five theaters across the United States (Philadelphia, Portland (Oregon), Minneapolis, Providence, and Orlando). In the United States, on June 18, 1999, Texas Instruments' DLP Cinema projector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm's Star Wars Episode I: The Phantom Menace.[8][9] In Europe, on February 2, 2000, Texas Instruments' DLP Cinema projector technology was publicly demonstrated, by Philippe Binant, on one screen in Paris for the release of Toy Story 2.[10][11]

From 1997 to 2000, the JPEG 2000 image compression standard was developed by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president).[12] In contrast to the original 1992 JPEG standard, which is a DCT-based lossy compression format for static digital images, JPEG 2000 is a discrete wavelet transform (DWT) based compression standard that could be adapted for motion-imaging video compression with the Motion JPEG 2000 extension. JPEG 2000 technology was later selected as the video coding standard for digital cinema in 2004.[13]

On January 19, 2000, the Society of Motion Picture and Television Engineers, in the United States, initiated the first standards group dedicated towards developing digital cinema.[14] By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America.[15] Digital Cinema Initiatives (DCI) was formed in March 2002 as a joint project of many motion picture studios (Disney, Fox, MGM, Paramount, Sony Pictures, Universal and Warner Bros.) to develop a system specification for digital cinema.[16] The same month, it was reported that the number of cinemas equipped with digital projectors had increased to about 50 in the US, with 30 more in the rest of the world.[17] In April 2004, in cooperation with the American Society of Cinematographers, DCI created standard evaluation material (the ASC/DCI StEM material) for testing of 2K and 4K playback and compression technologies. DCI selected JPEG 2000 as the basis for the compression in the system the same year.[18] Initial tests with JPEG 2000 produced bit rates of around 75–125 Mbit/s for 2K resolution and 100–200 Mbit/s for 4K resolution.[13]

In China, in June 2005, an e-cinema system called "dMs" was established and was used in over 15,000 screens spread across China's 30 provinces. dMs estimated that the system would expand to 40,000 screens in 2009.[19] In 2005, the UK Film Council Digital Screen Network was launched in the UK by Arts Alliance Media, creating a chain of 250 2K digital cinema systems. The roll-out was completed in 2006. This was the first mass roll-out in Europe. AccessIT/Christie Digital also started a roll-out in the United States and Canada. By mid-2006, about 400 theaters were equipped with 2K digital projectors, with the number increasing every month. In August 2006, the Malayalam digital movie Moonnamathoral, produced by Benzy Martin, was distributed via satellite to cinemas, thus becoming the first Indian digital cinema.
This was done by Emil and Eric Digital Films, a company based at Thrissur, using the end-to-end digital cinema system developed by Singapore-based DG2L Technologies.[20]

In January 2007, Guru became the first Indian film mastered in the DCI-compliant JPEG 2000 Interop format, and also the first Indian film to be previewed digitally, internationally, at the Elgin Winter Garden in Toronto. The film was digitally mastered at Real Image Media Technologies in India. In 2007, the UK became home to Europe's first DCI-compliant fully digital multiplex cinemas: Odeon Hatfield and Odeon Surrey Quays (in London), with a total of 18 digital screens, were launched on 9 February 2007. By March 2007, with the release of Disney's Meet the Robinsons, about 600 screens had been equipped with digital projectors. In June 2007, Arts Alliance Media announced the first European commercial digital cinema Virtual Print Fee (VPF) agreements (with 20th Century Fox and Universal Pictures). In March 2009, AMC Theatres announced that it had closed a $315 million deal with Sony to replace all of its movie projectors with 4K digital projectors starting in the second quarter of 2009; it was anticipated that this replacement would be finished by 2012.[21]

As digital cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection.[22] In January 2011, the total number of digital screens worldwide was 36,242, up from 16,339 at the end of 2009, a growth rate of 121.8 percent during the year.[23] There were 10,083 digital screens in Europe as a whole (28.2 percent of the global figure), 16,522 in the United States and Canada (46.2 percent) and 7,703 in Asia (21.6 percent). Worldwide progress was slower in some territories, particularly Latin America and Africa.[24][25] As of 31 March 2015, 38,719 screens (out of a total of 39,789) in the United States had been converted to digital, as had 3,007 screens in Canada and 93,147 screens internationally.[26] By the end of 2017, virtually all of the world's cinema screens were digital (98%).[27] Digital cinema technology has continued to develop over the years with 3D, RPX, 4DX and ScreenX, offering moviegoers more immersive experiences.[28]

Although virtually all of the world's movie theaters have now converted their screens to digital cinema, some major motion pictures were still shot on film as of 2019.[29][30] For example, Quentin Tarantino released Once Upon a Time in Hollywood in 70 mm and 35 mm in selected theaters across the United States and Canada.[31]

In addition to the equipment already found in a film-based movie theatre (e.g., a sound reinforcement system, screen, etc.), a DCI-compliant digital cinema requires a DCI-compliant[32] digital projector and a powerful computer known as a server. Movies are supplied to the theatre as a set of digital files called a Digital Cinema Package (DCP).[33] For a typical feature film, these files total anywhere between 90 GB and 300 GB of data (roughly two to six times the information on a Blu-ray disc) and may arrive as a physical delivery on a conventional computer hard drive, or via satellite or fibre-optic broadband Internet.[34] As of 2013, physical delivery on hard drives was most common in the industry. Promotional trailers arrive on a separate hard drive and range between 200 MB and 400 MB in size.
Ingest of DCP files may be done at each projection booth's own projector server, or the files may be stored on a central server called a Digital Cinema Library, which shares its content over the cinema's local area network (LAN) and is managed by the Theater Management System software. Regardless of how the DCP arrives, it first needs to be copied onto the internal hard drives of the server, either via an eSATA connection or via a closed network, a process known as "ingesting".[citation needed] DCPs can be, and in the case of feature films almost always are, encrypted to prevent illegal copying and piracy. The necessary decryption keys are supplied separately, usually as email attachments or via download, and then "ingested" via USB. Keys are time-limited and will expire after the end of the period for which the title has been booked. They are also locked to the hardware (server and projector) that is to screen the film, so if the theatre wishes to move the title to another screen or extend the run, a new key must be obtained from the distributor.[35] Several versions of the same feature can be sent together. The original version (OV) is used as the basis of all the other playback options. Version files (VF) may have a different sound format (e.g. 7.1 as opposed to 5.1 surround sound) or subtitles. 2D and 3D versions are often distributed on the same hard drive.

The playback of the content is controlled by the server using a "playlist". As the name implies, this is a list of all the content that is to be played as part of the performance. The playlist is created by a member of the theatre's staff using proprietary software that runs on the server. In addition to listing the content to be played, the playlist also includes automation cues that allow it to control the projector, the sound system, auditorium lighting, tab curtains and screen masking (if present), and so on. The playlist can be started manually, by clicking the "play" button on the server's monitor screen, or automatically at pre-set times.[36]

The Theater Management System (TMS) is the central software system for the whole cinema complex: it manages the central cinema content library and prepares the playback sessions (playlists), moving the correct KDM keys and the selected content to each projector.

Digital Cinema Initiatives (DCI), a joint venture of the six major studios, published the first version (V1.0) of a system specification for digital cinema in July 2005.[16] The main declared objectives of the specification were to define a digital cinema system that would "present a theatrical experience that is better than what one could achieve now with a traditional 35mm Answer Print", to provide global standards for interoperability such that any DCI-compliant content could play on any DCI-compliant hardware anywhere in the world, and to provide robust protection for the intellectual property of the content providers.

The DCI specification calls for picture encoding using the ISO/IEC 15444-1 "JPEG2000" (.j2c) standard and use of the CIE XYZ color space at 12 bits per component, encoded with a 2.6 gamma applied at projection. Two levels of resolution for both content and projectors are supported: 2K (2048×1080), or 2.2 MP, at 24 or 48 frames per second, and 4K (4096×2160), or 8.85 MP, at 24 frames per second. The specification ensures that 2K content can play on 4K projectors and vice versa. Smaller resolutions in one direction are also supported (the image gets automatically centered).
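As a rough illustration of that 12-bit, gamma-2.6 encoding, here is a minimal Python sketch. The normalization against a peak value of 52.37 is an assumption based on the commonly cited DCI X'Y'Z' transform, and the function names are hypothetical, not part of any specification:

    def dci_encode(value, peak=52.37):
        # Map a linear CIE XYZ component to a 12-bit code value using the
        # 2.6 gamma of the DCI specification (simplified sketch; 52.37 is
        # the commonly cited normalization constant, assumed here).
        value = max(0.0, min(value, peak))   # clip to the legal range
        return round(4095 * (value / peak) ** (1 / 2.6))

    def dci_decode(code, peak=52.37):
        # Inverse transform, conceptually applied at projection.
        return peak * (code / 4095) ** 2.6

    print(dci_encode(48.0))            # a bright value lands near the top of the 12-bit range
    print(round(dci_decode(2048), 2))  # a mid code value maps to much lower linear light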
Later versions of the standard added further playback rates (such as 25 fps in SMPTE mode). For the sound component of the content, the specification provides for up to 16 channels of uncompressed audio in the "Broadcast Wave" (.wav) format, at 24 bits and 48 kHz or 96 kHz sampling. Playback is controlled by an XML-format Composition Playlist, with the picture and sound essence wrapped into MXF-compliant files delivered at a maximum data rate of 250 Mbit/s. Details about encryption, key management, and logging are all discussed in the specification, as are the minimum specifications for the projectors employed, including the color gamut, the contrast ratio and the brightness of the image. While much of the specification codifies work that had already been ongoing in the Society of Motion Picture and Television Engineers (SMPTE), the specification is important in establishing a content-owner framework for the distribution and security of first-release motion-picture content.

In addition to DCI's work, the National Association of Theatre Owners (NATO) released its Digital Cinema System Requirements.[37] The document addresses the requirements of digital cinema systems from the operational needs of the exhibitor, focusing on areas not addressed by DCI, including access for the visually impaired and hearing impaired, workflow inside the cinema, and equipment interoperability. In particular, NATO's document details requirements for the Theatre Management System (TMS), the governing software for digital cinema systems within a theatre complex, and provides direction for the development of security key management systems. As with DCI's document, NATO's document is also important to the SMPTE standards effort.

The Society of Motion Picture and Television Engineers (SMPTE) began work on standards for digital cinema in 2000. By that point it was clear that HDTV did not provide a sufficient technological basis for digital cinema playback. In Europe, India and Japan, however, there is still a significant presence of HDTV for theatrical presentations. Agreements within the ISO standards body have led to these non-compliant systems being referred to as Electronic Cinema Systems (E-Cinema).

Only four manufacturers make DCI-approved digital cinema projectors: Barco, Christie, Sharp/NEC and Sony. Except for Sony, which used to use its own SXRD technology, all use the Digital Light Processing (DLP) technology developed by Texas Instruments (TI). D-Cinema projectors are similar in principle to digital projectors used in industry, education, and domestic home cinemas, but differ in two important respects. First, projectors must conform to the strict performance requirements of the DCI specification. Second, projectors must incorporate anti-piracy devices intended to enforce copyright compliance, such as licensing limits. For these reasons, all projectors intended to be sold to theaters for screening current-release movies must be approved by the DCI before being put on sale; they now pass through a process called the CTP (compliance test plan). Because feature films in digital form are encrypted, and the decryption keys (KDMs) are locked to the serial number of the server used (linking to both the projector serial number and the server is planned for the future), a system will allow playback of a protected feature only with the required KDM. Three manufacturers have licensed the DLP Cinema technology developed by Texas Instruments (TI): Christie Digital Systems, Barco, and NEC.
While NEC is a relative newcomer to digital cinema, Christie is the main player in the U.S., and Barco takes the lead in Europe and Asia.[citation needed] Initially, DCI-compliant DLP projectors were available in 2K only, but from early 2012, when TI's 4K DLP chip went into full production, DLP projectors have been available in both 2K and 4K versions. Manufacturers of DLP-based cinema projectors can now also offer 4K upgrades to some of the more recent 2K models.[38] Early DLP Cinema projectors, which were deployed primarily in the United States, used a limited 1280×1024 resolution, the equivalent of 1.3 megapixels. Digital Projection Incorporated (DPI) designed and sold a few DLP Cinema units (the is8-2K) when TI's 2K technology debuted, but then abandoned the D-Cinema market while continuing to offer DLP-based projectors for non-cinema purposes. Although based on the same 2K TI "light engine" as those of the major players, these units are so rare as to be virtually unknown in the industry. Early low-resolution projectors are still widely used for pre-show advertising, but not usually for feature presentations.

TI's technology is based on the use of digital micromirror devices (DMDs).[39] These are MEMS devices manufactured from silicon using technology similar to that of computer chips. The surface of such a device is covered by a very large number of microscopic mirrors, one for each pixel, so a 2K device has about 2.2 million mirrors and a 4K device about 8.8 million. Each mirror flips several thousand times a second between two positions: in one, light from the projector's lamp is reflected towards the screen; in the other, away from it. The proportion of time the mirror spends in each position varies according to the required brightness of each pixel. Three DMD devices are used, one for each of the primary colors. Light from the lamp, usually a xenon arc lamp similar to those used in film projectors, with a power between 1 kW and 7 kW, is split by colored filters into red, green and blue beams, which are directed at the appropriate DMD. The "forward" reflected beams from the three DMDs are then re-combined and focused by the lens onto the cinema screen. Later projectors may use lasers instead of xenon lamps.

Alone amongst the manufacturers of DCI-compliant cinema projectors, Sony decided to develop its own technology rather than use TI's DLP technology. SXRD (Silicon X-tal (Crystal) Reflective Display) projectors have only ever been manufactured in 4K form and, until the launch of the 4K DLP chip by TI, Sony SXRD projectors were the only 4K DCI-compatible projectors on the market. Unlike DLP projectors, however, SXRD projectors do not present the left- and right-eye images of stereoscopic movies sequentially; instead, they use half the available area of the SXRD chip for each eye's image. Thus, during stereoscopic presentations the SXRD projector functions as a sub-2K projector, and the same applies to HFR 3D content.[40] However, Sony decided in late April 2020 that it would no longer manufacture digital cinema projectors.[41][42]

In late 2005, interest in digital 3D stereoscopic projection led to a new willingness on the part of theaters to co-operate in installing 2K stereo installations to show Disney's Chicken Little in 3D. Six more digital 3D movies were released in 2006 and 2007 (including Beowulf, Monster House and Meet the Robinsons).
The technology combines a single digital projector fitted with either a polarizing filter (for use with polarized glasses and silver screens), a filter wheel, or an emitter for LCD glasses. RealD uses a "ZScreen" for polarization, and MasterImage uses a filter wheel that changes the polarity of the projector's light output several times per second to alternate quickly between the left- and right-eye views. Another system that uses a filter wheel is Dolby 3D: the wheel shifts the wavelengths of the colours being displayed, and tinted glasses filter these shifted wavelengths so that the incorrect wavelength cannot enter the wrong eye. XpanD makes use of an external emitter that sends a signal to the 3D glasses to block out the wrong image from the wrong eye. RGB laser projection produces the purest BT.2020 colors and the brightest images.[43]

In Asia, on July 13, 2017, an LED screen for digital cinema developed by Samsung Electronics was publicly demonstrated on one screen at Lotte Cinema World Tower in Seoul.[44] The first installation in Europe is in the Arena Sihlcity Cinema in Zürich.[45] These displays do not use a projector; instead they use an LED video wall, and can offer higher contrast ratios, higher resolutions, and overall improvements in image quality. Sony already sells MicroLED displays as a replacement for conventional cinema screens.[46]

Digital distribution of movies has the potential to save money for film distributors. Making thousands of prints for a wide-release movie can be expensive. In contrast, at the maximum 250 megabit-per-second data rate (as defined by DCI for digital cinema), a feature-length movie can be stored on an off-the-shelf 300 GB hard drive for $50, and a broad release of 4,000 "digital prints" might cost $200,000.[citation needed] In addition, hard drives can be returned to distributors for reuse. With several hundred movies distributed every year, the industry saves billions of dollars. The digital-cinema roll-out was stalled by the slow pace at which exhibitors acquired digital projectors, since the savings would be seen not by the exhibitors themselves but by the distribution companies. The Virtual Print Fee model was created to address this by passing some of the savings on to the cinemas.[citation needed] As a consequence of the rapid conversion to digital projection, the number of theatrical releases exhibited on film is dwindling. As of 4 May 2014, 37,711 screens (out of a total of 40,048) in the United States had been converted to digital, as had 3,013 screens in Canada and 79,043 screens internationally.[26]

On October 29, 2001, Bernard Pauchon,[50] Alain Lorentz, Raymond Melwig[51] and Philippe Binant[52][53] realized and demonstrated the first digital cinema transmission by satellite of a feature film in Europe.[47][48][49]

Reliable file delivery of DCPs via the Internet then emerged, thanks to higher-bandwidth connections in cinemas, first with bonded DSL lines, then with fiber connections to the Internet.

Digital cinemas can deliver live broadcasts (mainly via satellite digital television) or broadband Internet streaming of performances or events. This began with the New York Metropolitan Opera delivering regular live broadcasts into cinemas, and has been widely imitated ever since. The leading territories providing the content are the UK, the US, France and Germany.
The Royal Opera House, the Sydney Opera House, English National Opera and others have found new and returning audiences captivated by the detail offered by a live digital broadcast, with handheld cameras and cameras on cranes positioned throughout the venue to capture emotion that might be missed in a live venue situation. In addition, these providers all offer extra value during the intervals, e.g. interviews with choreographers and cast members, or a backstage tour, which would not be on offer at the live event itself. Other live events in this field include live theatre from NT Live, Branagh Live, the Royal Shakespeare Company and Shakespeare's Globe, as well as the Royal Ballet, the Mariinsky Ballet, the Bolshoi Ballet and the Berlin Philharmoniker.

In the last ten years this initial offering of the arts has expanded to include live and recorded music events such as Take That Live, One Direction Live and Andre Rieu; live musicals such as the recent Miss Saigon; and a record-breaking Billy Elliot Live In Cinemas. Live sport, documentaries with a live question-and-answer element (such as the recent Oasis documentary), lectures, faith broadcasts, stand-up comedy, museum and gallery exhibitions, and TV specials such as the record-breaking Doctor Who fiftieth-anniversary special The Day of the Doctor have all contributed to creating a valuable revenue stream for cinemas large and small all over the world. Subsequently, live broadcasting, formerly known as Alternative Content, has become known as Event Cinema, and a trade association now exists to that end. Ten years on, the sector has become a sizeable revenue stream in its own right, earning a loyal following amongst fans of the arts, with the content limited, it would seem, only by the imagination of the producers. Theatre, ballet, sport, exhibitions, TV specials and documentaries are now established forms of Event Cinema. Worldwide estimates put the likely value of the Event Cinema industry at $1 billion by 2019.[54]

Event Cinema currently accounts for, on average, between 1% and 3% of overall box office for cinemas worldwide, but anecdotally some cinemas have attributed as much as 25%, 48% or even 51% (the Rio Bio cinema in Stockholm) of their overall box office to it. It is ultimately envisaged that Event Cinema will account for around 5% of the overall box office globally. Event Cinema saw six worldwide records set and broken from 2013 to 2015, with notable successes including Doctor Who ($10.2 million in three days at the box office, for an event that was also broadcast simultaneously on terrestrial TV), Pompeii Live by the British Museum, Billy Elliot, Andre Rieu, One Direction, and Richard III by the Royal Shakespeare Company.

Event Cinema is defined more by the frequency of events than by the content itself. Event Cinema events typically appear in cinemas during traditionally quieter times of the cinema week, such as the Monday-Thursday daytime/evening slots, and are characterised by the one-night-only release, followed by one or more "encore" releases a few days or weeks later if the event is successful and sold out. On occasion, more successful events have returned to cinemas some months or even years later; in the case of NT Live, audience loyalty and company branding are so strong that the content owner can be assured of a good showing at the box office.
One advantage of the digital formation of sets and locations, especially in a time of growing film series and sequels, is that virtual sets, once computer generated and stored, can easily be revived for future films.[55]: 62 Because digital film images are stored as data files on hard disk or flash memory, varying edits can be executed by altering a few settings on the editing console, with the structure being composed virtually in the computer's memory. A broad choice of effects can be sampled simply and rapidly, without the physical constraints posed by traditional cut-and-stick editing.[55]: 63 Digital cinema allows national cinemas to construct films specific to their cultures in ways that the more constricting configurations and economics of customary film-making prevented. Low-cost cameras and computer-based editing software have gradually enabled films to be produced for minimal cost. The ability of digital cameras to let film-makers shoot limitless footage without wasting costly film has transformed film production in some Third World countries.[55]: 66 From the consumer's perspective, digital prints do not deteriorate with the number of showings. Unlike film, there is no projection mechanism or manual handling to add scratches or other physically generated artefacts, so provincial cinemas that would once have received worn prints can give consumers the same cinematographic experience (all other things being equal) as those attending the premiere. The use of NLEs in movie production allows edits and cuts to be made non-destructively, without actually discarding any footage.

A number of high-profile film directors, including Christopher Nolan,[56] Paul Thomas Anderson,[57] David O. Russell[58] and Quentin Tarantino,[58] have publicly criticized digital cinema and advocated the use of film and film prints. Most famously, Tarantino has suggested he may retire because, although he can still shoot on film, the rapid conversion to digital means he cannot project from 35 mm prints in the majority of American cinemas.[59] Steven Spielberg has stated that though digital projection produces a much better image than film if the movie was originally shot in digital, it is "inferior" when the image has been converted to digital. He attempted at one stage to release Indiana Jones and the Kingdom of the Crystal Skull solely on film.[60] Paul Thomas Anderson was recently able to create 70-mm film prints for his film The Master.[citation needed]

Film critic Roger Ebert criticized the use of DCPs after a screening of Brian De Palma's film Passion at the New York Film Festival was cancelled as a result of a lockup due to the coding system.[61]

The theoretical resolution of 35 mm film is greater than that of 2K digital cinema.[62][63] 2K resolution (2048×1080) is also only slightly greater than that of consumer-based 1080p HD (1920×1080).[64] However, since digital post-production techniques became the standard in the early 2000s, the majority of movies, whether photographed digitally or on 35 mm film, have been mastered and edited at 2K resolution. Moreover, 4K post-production was becoming more common as of 2013. As projectors are replaced with 4K models,[65] the difference in resolution between digital and 35 mm film is somewhat reduced.[66] Digital cinema servers also utilize far greater bandwidth than domestic "HD" formats, allowing for a difference in quality (e.g., Blu-ray colour encoding is 4:2:0 at a maximum data rate of 48 Mbit/s, whereas DCI D-Cinema is 4:4:4 at 250 Mbit/s for 2D/3D and 500 Mbit/s for HFR 3D), so each frame carries greater detail.
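That bandwidth gap can be made concrete with a quick back-of-the-envelope calculation using the maximum rates quoted above (these are ceilings, not typical values, since real streams are variable-rate):

    def bits_per_frame(mbit_per_s, fps=24):
        # Convert a maximum stream rate into a per-frame bit budget.
        return mbit_per_s * 1_000_000 / fps

    print(f"{bits_per_frame(48):,.0f}")   # Blu-ray ceiling: ~2,000,000 bits per frame
    print(f"{bits_per_frame(250):,.0f}")  # DCI 2D ceiling: ~10,416,667 bits per frame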
Owing to the smaller dynamic range of digital cameras, correcting poor digital exposures in post-production is more difficult than correcting poor film exposures. A partial solution to this problem is to add complex video-assist technology during the shooting process; however, such technologies are typically available only to high-budget productions.[55]: 62 Digital cinema's efficiency at storing images also has a downside: the speed and ease of modern digital editing processes threaten to give editors and their directors, if not an embarrassment of choice, then at least a confusion of options, potentially making the editing process, with its "try it and see" philosophy, lengthier rather than shorter.[55]: 63 Because the equipment needed to produce digital feature films can be obtained more easily than film equipment, producers could inundate the market with cheap productions and potentially drown out the efforts of serious directors. Because of the speed with which they are filmed, such productions sometimes lack essential narrative structure.[55]: 66–67

The electronic transfer of digital films, from central servers to servers in cinema projection booths, is an inexpensive way of supplying copies of the newest releases to the vast number of cinema screens demanded by prevailing saturation-release strategies. There is a significant saving on print expenses in such cases: at a minimum cost per print of $1,200–2,000, the cost of film print production is between $5 million and $8 million per movie. With several thousand releases a year, the probable savings offered by digital distribution and projection are over $1 billion.[55]: 67 The cost savings and ease, together with the ability to store a film rather than having to send a print on to the next cinema, allow a larger range of films to be screened and watched by the public, including minority and small-budget films that would not otherwise get such a chance.[55]: 67

The initial costs of converting theaters to digital were high: $100,000 per screen, on average. Theaters were reluctant to switch without a cost-sharing arrangement with film distributors. One solution is the temporary Virtual Print Fee system, in which the distributor (who saves the cost of producing and transporting a film print) pays a fee per copy to help finance the digital systems of the theaters.[67] A theater can purchase a film projector for as little as $10,000[68] (though projectors intended for commercial cinemas cost two to three times that, to which must be added the cost of a long-play system, also around $10,000, for a total of around $30,000–$40,000), from which it could expect an average life of 30–40 years. By contrast, a digital cinema playback system (including server, media block, and projector) can cost two to three times as much,[69] and carries a greater risk of component failure and obsolescence. (In Britain the cost of an entry-level projector with server, installation, etc. would be £31,000 [$50,000].) Archiving digital masters has also turned out to be both tricky and costly.
In a 2007 study, the Academy of Motion Picture Arts and Sciences found the cost of long-term storage of 4K digital masters to be "enormously higher - 1100% higher - than the cost of storing film masters".[70]: 43 This is because of the limited or uncertain lifespan of digital storage: no current digital medium, be it optical disc, magnetic hard drive or digital tape, can reliably store a motion picture for a hundred years or more (a timeframe achievable with properly stored film).[70]: 35 The short history of digital storage media has been one of innovation and, therefore, of obsolescence: archived digital content must be periodically moved from obsolete physical media to up-to-date media.[70]: 36 The expense of digital image capture is not necessarily less than that of capturing images on film; indeed, it is sometimes greater.[citation needed]
https://en.wikipedia.org/wiki/Digital_cinema
A network on a chip or network-on-chip (NoC /ˌɛnˌoʊˈsiː/ en-oh-SEE or /nɒk/ knock)[nb 1] is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.

NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018.[citation needed]

In the 2000s, researchers started to propose on-chip interconnects in the form of packet switching networks[1] in order to address the scalability issues of bus-based design. Preceding research had proposed designs that route data packets instead of routing wires.[2] The concept of "network on chip" was then proposed in 2002.[3] NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common.

NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the system-on-chip to have its own clock domain.[4]

NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. The topology determines the physical layout and the connections between nodes and channels. A message traverses a number of hops, and each hop's channel length depends on the topology. The topology therefore significantly influences both latency and power consumption. Furthermore, since the topology determines the number of alternative paths between nodes, it affects how network traffic is distributed, and hence the network bandwidth and performance achieved.[5]

Traditionally, ICs have been designed with dedicated point-to-point connections, with one wire dedicated to each signal. This results in a dense network topology. For large designs, in particular, this has several limitations from a physical design viewpoint. It requires power that scales quadratically with the number of interconnections. The wires occupy much of the area of the chip, and in nanometer CMOS technology, interconnects dominate both performance and dynamic power dissipation, as signal propagation in wires across the chip requires multiple clock cycles. It also allows more parasitic capacitance, resistance and inductance to accrue on the circuit. (See Rent's rule for a discussion of wiring requirements for point-to-point connections.)

Sparsity and locality of interconnections in the communications subsystem yield several improvements over traditional bus-based and crossbar-based systems, as the wires in the links of the network-on-chip are shared by many signals.
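To make the topology and hop-count discussion concrete, here is a minimal Python sketch (purely illustrative; no particular NoC's interface is implied) of deterministic XY routing on a 2D mesh, one of the simplest and most commonly studied NoC topologies. A packet is first routed along the X dimension to the destination column and then along Y, so its hop count equals the Manhattan distance between the two routers:

    def xy_route(src, dst):
        # Dimension-ordered (XY) routing on a 2D mesh:
        # correct the X coordinate first, then the Y coordinate.
        x, y = src
        path = [src]
        while x != dst[0]:
            x += 1 if dst[0] > x else -1
            path.append((x, y))
        while y != dst[1]:
            y += 1 if dst[1] > y else -1
            path.append((x, y))
        return path

    # A packet from router (0, 0) to router (2, 1) takes 3 hops:
    print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]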
A high level of parallelism is achieved, because all data links in the NoC can operate simultaneously on different data packets.[why?] Therefore, as the complexity of integrated systems keeps growing, a NoC provides enhanced performance (such as throughput) and scalability in comparison with previous communication architectures (e.g., dedicated point-to-point signal wires, shared buses, or segmented buses with bridges). The algorithms[which?] must be designed in such a way that they offer large parallelism and can hence utilize the potential of the NoC.

Some researchers[who?] think that NoCs need to support quality of service (QoS), namely achieving the various requirements in terms of throughput, end-to-end delays, fairness,[6] and deadlines.[citation needed] Real-time computation, including audio and video playback, is one reason for providing QoS support. However, current system implementations like VxWorks, RTLinux or QNX are able to achieve sub-millisecond real-time computing without special hardware.[citation needed] This may indicate that for many real-time applications the service quality of existing on-chip interconnect infrastructure is sufficient, and that dedicated hardware logic would be necessary to achieve microsecond precision, a degree that is rarely needed in practice for end users (sound or video jitter needs latency guarantees of only tenths of milliseconds). Another motivation for NoC-level quality of service (QoS) is to support multiple concurrent users sharing the resources of a single chip multiprocessor in a public cloud computing infrastructure. In such instances, hardware QoS logic enables the service provider to make contractual guarantees on the level of service that a user receives, a feature that may be deemed desirable by some corporate or government clients.[citation needed]

Many challenging research problems remain to be solved at all levels, from the physical link level through the network level all the way up to the system architecture and application software. The first dedicated research symposium on networks on chip was held at Princeton University in May 2007.[7] The second IEEE International Symposium on Networks-on-Chip was held in April 2008 at Newcastle University.

Research has been conducted on integrated optical waveguides and devices comprising an optical network on a chip (ONoC).[8][9] Another possible way of increasing the performance of a NoC is to use wireless communication channels between chiplets, an approach named wireless network on chip (WiNoC).[10]

In a multi-core system connected by a NoC, coherency messages and cache-miss requests have to pass through switches. Accordingly, switches can be augmented with simple tracking and forwarding elements to detect which cache blocks will be requested in the future, and by which cores. The forwarding elements then multicast any requested block to all the cores that may request it in the future. This mechanism reduces the cache-miss rate.[11]

NoC development and study require comparing different proposals and options. NoC traffic patterns are under development to help such evaluations. Existing NoC benchmarks include NoCBench and the MCSL NoC Traffic Patterns.[12]

An interconnect processing unit (IPU)[13] is an on-chip communication network with hardware and software components which jointly implement key functions of different system-on-chip programming models through a set of communication and synchronization primitives, and which provide low-level platform services to enable advanced features[which?] in modern heterogeneous applications[definition needed] on a single die.
Adapted from Avinoam Kolodny's column in the ACM SIGDA e-newsletter by Igor Markov. The original text can be found at http://www.sigda.org/newsletter/2006/060415.txt
https://en.wikipedia.org/wiki/Network_on_a_chip
In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of similarity exists, such measures are usually in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. In broader terms, though, a similarity function may also satisfy the metric axioms.

Cosine similarity is a commonly used similarity measure for real-valued vectors, used (among other fields) in information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions.[1]

Different types of similarity measures exist for various types of objects, depending on the objects being compared. For each type of object there are various similarity measurement formulas.[2]

Similarity between two data points

Many options are available for measuring the similarity between two data points, some of which are combinations of other similarity methods. Methods for measuring similarity between two data points include the Euclidean distance, Manhattan distance, Minkowski distance, and Chebyshev distance. The Euclidean distance formula gives the straight-line distance between two points in a plane. Manhattan distance is commonly used in GPS applications, as it can be used to find the shortest route between two addresses.[citation needed] Generalizing the Euclidean and Manhattan distance formulas yields the Minkowski distance, which can be used in a wide variety of applications.

Similarity between strings

For comparing strings, there are various measures of string similarity that can be used. Some of these methods include edit distance, Levenshtein distance, Hamming distance, and Jaro distance. The best-fit formula depends on the requirements of the application. For example, edit distance is frequently used for natural language processing applications and features, such as spell-checking. Jaro distance is commonly used in record linkage to compare first and last names to other sources.

Similarity between two probability distributions

Typical measures of similarity for probability distributions are the Bhattacharyya distance and the Hellinger distance. Both provide a quantification of similarity for two probability distributions on the same domain, and they are mathematically closely linked. The Bhattacharyya distance does not fulfill the triangle inequality, meaning it does not form a metric, while the Hellinger distance does form a metric on the space of probability distributions.

Similarity between two sets

The Jaccard index formula measures the similarity between two sets based on the number of items that are present in both sets relative to the total number of items. It is commonly used in recommendation systems and social media analysis[citation needed]. The Sørensen–Dice coefficient also compares the number of items in both sets to the total number of items present, but gives larger weight to the number of shared items. The Sørensen–Dice coefficient is commonly used in biology applications, for example to measure the similarity between two sets of genes or species[citation needed].
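A minimal sketch of several of these measures in plain Python (illustrative helper functions, not any particular library's API):

    def euclidean(p, q):
        # Straight-line distance between two points.
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    def manhattan(p, q):
        # Sum of absolute coordinate differences ("city block" distance).
        return sum(abs(a - b) for a, b in zip(p, q))

    def minkowski(p, q, r):
        # r = 1 gives Manhattan distance, r = 2 gives Euclidean distance.
        return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1 / r)

    def chebyshev(p, q):
        # Largest coordinate difference (the limit of Minkowski as r grows).
        return max(abs(a - b) for a, b in zip(p, q))

    def jaccard(a, b):
        # |A intersect B| / |A union B| for two sets.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    def dice(a, b):
        # Like Jaccard, but shared items are weighted more heavily.
        a, b = set(a), set(b)
        return 2 * len(a & b) / (len(a) + len(b))

    p, q = (1, 2), (4, 6)
    print(euclidean(p, q), manhattan(p, q), chebyshev(p, q))  # 5.0 7 4
    print(jaccard({"a", "b", "c"}, {"b", "c", "d"}))          # 0.5
    print(dice({"a", "b", "c"}, {"b", "c", "d"}))             # ~0.667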
Similarity between two sequences

When comparing temporal sequences (time series), some similarity measures must additionally account for the similarity of two sequences that are not fully aligned.

Clustering, or cluster analysis, is a data mining technique used to discover patterns in data by grouping similar objects together. It involves partitioning a set of data points into groups or clusters based on their similarities. One of the fundamental aspects of clustering is how to measure similarity between data points. Similarity measures play a crucial role in many clustering techniques, as they are used to determine how closely related two data points are and whether they should be grouped together in the same cluster. A similarity measure can take many different forms depending on the type of data being clustered and the specific problem being solved.

One of the most commonly used similarity measures is the Euclidean distance, which is used in many clustering techniques including k-means clustering and hierarchical clustering. The Euclidean distance is a measure of the straight-line distance between two points in a high-dimensional space. It is calculated as the square root of the sum of the squared differences between the corresponding coordinates of the two points. For example, for two data points (x_1, y_1) and (x_2, y_2), the Euclidean distance between them is d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}.

Another commonly used similarity measure is the Jaccard index, or Jaccard similarity, which is used in clustering techniques that work with binary data such as presence/absence data[3] or Boolean data. The Jaccard similarity is particularly useful for clustering techniques that work with text data, where it can be used to identify clusters of similar documents based on their shared features or keywords.[4] It is calculated as the size of the intersection of two sets divided by the size of their union: J(A,B) = \frac{|A \cap B|}{|A \cup B|}. For instance, similarities among 162 relevant nuclear profiles were tested using the Jaccard similarity measure (visualized as a heatmap in the original source); the Jaccard similarity of the nuclear profiles ranges from 0 to 1, with 0 indicating no similarity between the two sets and 1 indicating perfect similarity, the aim being to cluster the most similar nuclear profiles.

Manhattan distance, also known as taxicab geometry, is a commonly used similarity measure in clustering techniques that work with continuous data. It is a measure of the distance between two data points in a high-dimensional space, calculated as the sum of the absolute differences between the corresponding coordinates of the two points: |x_1 - x_2| + |y_1 - y_2|.

When dealing with mixed-type data, including nominal, ordinal, and numerical attributes per object, Gower's distance (or similarity) is a common choice, as it can handle different types of variables implicitly. It first computes similarities between the pair of variables in each object, and then combines those similarities into a single weighted average per object-pair.
As such, for two objects i and j having p descriptors, the similarity S is defined as:

S_{ij} = \frac{\sum_{k=1}^{p} w_{ijk} s_{ijk}}{\sum_{k=1}^{p} w_{ijk}},

where the w_{ijk} are non-negative weights and s_{ijk} is the similarity between the two objects with respect to their k-th variable.

In spectral clustering, a similarity, or affinity, measure is used to transform data to overcome difficulties related to lack of convexity in the shape of the data distribution.[5] The measure gives rise to an (n, n)-sized similarity matrix for a set of n points, where the entry (i, j) in the matrix can be simply the (reciprocal of the) Euclidean distance between i and j, or it can be a more complex measure of distance such as the Gaussian e^{-\|s_1 - s_2\|^2 / 2\sigma^2}.[5] Further modifying this result with network analysis techniques is also common.[6]

The choice of similarity measure depends on the type of data being clustered and the specific problem being solved. For example, when working with continuous data such as gene expression data, the Euclidean distance or cosine similarity may be appropriate. When working with binary data, such as the presence of a genomic locus in a nuclear profile, the Jaccard index may be more appropriate. Lastly, when working with data arranged in a grid or lattice structure, such as image or signal processing data, the Manhattan distance is particularly useful for clustering.

Similarity measures are used to develop recommender systems, which observe a user's perception of, and liking for, multiple items. In recommender systems, a distance calculation such as Euclidean distance or cosine similarity is used to generate a similarity matrix whose values represent the similarity of any pair of targets. Then, by analyzing and comparing the values in the matrix, it is possible to match two targets to a user's preference or to link users based on their ratings. In this system, it is relevant to observe the value itself as well as the absolute distance between two values.[7] Gathering this data can indicate how likely an item is to appeal to a user, as well as how closely two items tend to be jointly accepted or rejected. It is then possible to recommend to a user targets with high similarity to the user's likes. Recommender systems are observed in multiple online entertainment platforms, in social media and on streaming websites. The logic behind the construction of these systems is based on similarity measures.[citation needed]

Similarity matrices are used in sequence alignment. Higher scores are given to more-similar characters, and lower or negative scores to dissimilar characters.

Nucleotide similarity matrices are used to align nucleic acid sequences. Because there are only four nucleotides commonly found in DNA (adenine (A), cytosine (C), guanine (G) and thymine (T)), nucleotide similarity matrices are much simpler than protein similarity matrices. For example, a simple matrix will assign identical bases a score of +1 and non-identical bases a score of −1. A more complicated matrix would give a higher score to transitions (changes from a pyrimidine such as C or T to another pyrimidine, or from a purine such as A or G to another purine) than to transversions (from a pyrimidine to a purine or vice versa).
The match/mismatch ratio of the matrix sets the target evolutionary distance.[8][9] The +1/−3 DNA matrix used by BLASTN is best suited to finding matches between sequences that are 99% identical; a +1/−1 (or +4/−4) matrix is much better suited to sequences with about 70% similarity. Matrices for lower-similarity sequences require longer sequence alignments.

Amino acid similarity matrices are more complicated, because there are 20 amino acids coded for by the genetic code, and so a larger number of possible substitutions. Therefore, the similarity matrix for amino acids contains 400 entries (although it is usually symmetric). The first approach scored all amino acid changes equally. A later refinement was to determine amino acid similarities based on how many base changes were required to change a codon to code for that amino acid. This model is better, but it doesn't take into account the selective pressure of amino acid changes. Better models took into account the chemical properties of amino acids.

One approach has been to empirically generate the similarity matrices. The Dayhoff method used phylogenetic trees and sequences taken from species on the tree. This approach has given rise to the PAM series of matrices. PAM matrices are labelled based on how many nucleotide changes have occurred per 100 amino acids. While the PAM matrices benefit from having a well-understood evolutionary model, they are most useful at short evolutionary distances (PAM10–PAM120). At long evolutionary distances, for example PAM250 or 20% identity, it has been shown that the BLOSUM matrices are much more effective. The BLOSUM series were generated by comparing a number of divergent sequences. The BLOSUM series are labeled based on how much entropy remains unmutated between all sequences, so a lower BLOSUM number corresponds to a higher PAM number.
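As a small illustration of the match/mismatch scoring idea discussed above, the sketch below scores an already-aligned, ungapped pair of DNA sequences under a configurable scheme such as the +1/−3 BLASTN matrix (a simplified toy that ignores gaps and the alignment search itself):

    def alignment_score(seq1, seq2, match=1, mismatch=-3):
        # Score two equal-length, pre-aligned DNA sequences with a
        # simple match/mismatch similarity matrix.
        assert len(seq1) == len(seq2)
        return sum(match if a == b else mismatch
                   for a, b in zip(seq1, seq2))

    print(alignment_score("ACGT", "ACGA"))               # +1/-3 scheme: 3 - 3 = 0
    print(alignment_score("ACGT", "ACGA", mismatch=-1))  # +1/-1 scheme: 3 - 1 = 2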
https://en.wikipedia.org/wiki/Similarity_measure
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems[1] and probabilistic logic networks.[2]

Notable semantic reasoners and related software include S-LOR (Sensor-based Linked Open Rules), a rule-based reasoning engine and an approach for sharing and reusing interoperable rules to deduce meaningful knowledge from sensor measurements. S-LOR is available under the GNU GPLv3 license.
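As a toy illustration of the forward chaining mentioned above, here is a minimal, hand-rolled sketch in Python. The rule format and fact names are invented for the example; this is not the API of S-LOR or of any other reasoner:

    def forward_chain(facts, rules):
        # Repeatedly fire rules of the form (premises, conclusion)
        # until no new facts can be inferred (a fixed point).
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [
        ({"reading_above_threshold", "sensor_is_thermometer"}, "temperature_high"),
        ({"temperature_high"}, "heat_alert"),
    ]
    print(forward_chain({"reading_above_threshold", "sensor_is_thermometer"}, rules))
    # inferred: temperature_high, then heat_alert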
https://en.wikipedia.org/wiki/Semantic_reasoner
Compiler Description Language (CDL) is a programming language based on affix grammars. It is very similar to Backus–Naur form (BNF) notation. It was designed for the development of compilers. It is very limited in its capabilities and control flow, and intentionally so. The benefits of these limitations are twofold. On the one hand, they make possible the sophisticated data and control flow analysis used by the CDL2 optimizers, resulting in extremely efficient code. The other benefit is that they foster a highly verbose naming convention. This, in turn, leads to programs that are, to a great extent, self-documenting.

The language looks a bit like Prolog (this is not surprising, since both languages arose at about the same time out of work on affix grammars). However, as opposed to Prolog, control flow in CDL is deterministically based on success/failure: no other alternatives are tried when the current one succeeds. This idea is also used in parsing expression grammars.

CDL3 is the third version of the CDL language, significantly different from the previous two versions.

The original version, designed by Cornelis H. A. Koster at the University of Nijmegen, emerged in 1971 and had a rather unusual concept: it had no core. A typical programming language source is translated to machine instructions or canned sequences of those instructions. Those represent the core, the most basic abstractions that the given language supports. Such primitives can be the addition of numbers, the copying of variables to each other, and so on. CDL1 lacks such a core: it is the responsibility of the programmer to provide the primitive operations in a form that can then be turned into machine instructions by means of an assembler or a compiler for a traditional language. The CDL1 language itself has no concept of primitives and no concept of data types apart from the machine word (an abstract unit of storage, not necessarily a real machine word as such). The evaluation rules are rather similar to Backus–Naur form syntax descriptions; in fact, writing a parser for a language described in BNF is rather simple in CDL1.

Basically, the language consists of rules. A rule can either succeed or fail. A rule consists of alternatives, which are sequences of other rule invocations. A rule succeeds if any of its alternatives succeeds; the alternatives are tried in sequence. An alternative succeeds if all of its rule invocations succeed. The language provides operators to create evaluation loops without recursion (although this is not strictly necessary in CDL2, as the optimizer achieves the same effect) and some shortcuts to increase the efficiency of the otherwise recursive evaluation, but the basic concept is as above. Apart from the obvious application in context-free grammar parsing, CDL is also well suited to control applications, since a lot of control applications are essentially deeply nested if-then rules.

Each CDL1 rule, while being evaluated, can act on data of unspecified type. Ideally, the data should not be changed unless the rule is successful (no side effects on failure). This causes problems because, although a rule may succeed, the rule invoking it might still fail, in which case the data change should not take effect. It is fairly easy (albeit memory-intensive) to assure the above behavior if all the data is dynamically allocated on a stack. However, it is rather hard when there is static data, which is often the case.
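Since control flow in CDL is just deterministic success/failure over alternatives, the evaluation model described above can be mimicked in a few lines of Python. The sketch below is a loose illustration of those semantics, not CDL syntax; the grammar and the match primitive are invented for the example:

    def evaluate(rule_name, rules):
        # A rule is a list of alternatives; an alternative is a sequence
        # of invocations. A rule succeeds if any alternative succeeds;
        # an alternative succeeds if all of its invocations succeed.
        # The first successful alternative wins; later ones are skipped.
        for alternative in rules[rule_name]:
            if all(invoke(item, rules) for item in alternative):
                return True
        return False

    def invoke(item, rules):
        # Callables stand in for user-supplied primitives ("macros" in
        # CDL terminology); strings name other rules.
        return item() if callable(item) else evaluate(item, rules)

    text = "-42"

    def match(ch):
        # Primitive: succeeds if the input starts with the given character.
        return lambda: text.startswith(ch)

    # Hypothetical grammar: a sign is either "+" or "-".
    rules = {"sign": [[match("+")], [match("-")]]}
    print(evaluate("sign", rules))  # True: the second alternative succeeds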
The CDL2 compiler is able to flag possible violations thanks to the requirement that the direction of parameters (input, output, input-output) and the type of each rule (can fail: test, predicate; cannot fail: function, action; can have a side effect: predicate, action; cannot have a side effect: test, function) must be specified by the programmer.

As rule evaluation is based on calling simpler and simpler rules, there should be some primitive rules at the bottom that do the actual work. That is where CDL1 is very surprising: it does not have those primitives. You have to provide those rules yourself. If you need addition in your program, you have to create a rule with two input parameters and one output parameter, whose code sets the output to the sum of the two inputs. The CDL compiler treats your code as strings (there are conventions on how to refer to the input and output variables) and simply emits it as needed. If you describe your adding rule using assembly, you will need an assembler to translate the CDL compiler's output into machine code. If you describe all the primitive rules (macros in CDL terminology) in Pascal or C, then you need a Pascal or C compiler to run after the CDL compiler. This lack of core primitives can be very painful when you have to write a snippet of code even for the simplest machine-instruction-level operation, but on the other hand it gives you great flexibility in implementing esoteric, abstract primitives acting on exotic abstract objects (the "machine word" in CDL is more like a "unit of data storage", with no reference to the kind of data stored there). Additionally, large projects made use of carefully crafted libraries of primitives. These were then replicated for each target architecture and operating system, allowing the production of highly efficient code for all.

To get a feel for the language, the CDL2 manual offers a small code fragment in which the primitive operations are defined in terms of Java (or C); it is not a complete program, since the Java array items must be defined elsewhere.

CDL2, which appeared in 1976, kept the principles of CDL1 but made the language suitable for large projects. It introduced modules, enforced data-change-only-on-success, and extended the capabilities of the language somewhat. The optimizers in the CDL2 compiler, and especially in the CDL2 Laboratory (an IDE for CDL2), were world-class, and not just for their time. One feature of the CDL2 Laboratory optimizer is almost unique: it can perform optimizations across compilation units, i.e., treat the entire program as a single compilation.

CDL3 is a more recent language. It gave up the open-ended feature of the previous CDL versions and provides primitives for basic arithmetic and storage access. The extremely puritan syntax of the earlier CDL versions (the numbers of keywords and symbols both run to single digits) has also been relaxed. Some basic concepts are now expressed in syntax rather than explicit semantics. In addition, data types have been introduced to the language.

Programs written in CDL include the commercial mbp Cobol (a Cobol compiler for the PC) as well as the MProlog system, an industrial-strength Prolog implementation that ran on numerous architectures (IBM mainframe, VAX, PDP-11, Intel 8086, etc.) and operating systems (DOS/OS/CMS/BS2000, VMS/Unix, DOS/Windows/OS2). The latter, in particular, is testimony to CDL2's portability. While most programs written in CDL have been compilers, there is at least one commercial GUI application that was developed and maintained in CDL.
This application was a dental image acquisition application now owned by DEXIS. A dental office management system was also once developed in CDL. The software for the Mephisto III chess computer was written with CDL2.[1]
https://en.wikipedia.org/wiki/Compiler_Description_Language
In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Rather than storing the structure of the code tree explicitly, canonical Huffman codes are ordered in such a way that it suffices to store only the lengths of the codewords, which reduces the overhead of the codebook.

Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it.

In order for a symbol code scheme such as the Huffman code to be decompressed, the same model that the encoding algorithm used to compress the source data must be provided to the decoding algorithm so that it can use it to decompress the encoded data. In standard Huffman coding this model takes the form of a tree of variable-length codes, with the most frequent symbols located at the top of the structure and represented by the fewest bits. However, this code tree introduces two critical inefficiencies into an implementation of the coding scheme. Firstly, each node of the tree must store either references to its child nodes or the symbol that it represents. This is expensive in memory usage, and if there is a high proportion of unique symbols in the source data, the size of the code tree can account for a significant amount of the overall encoded data. Secondly, traversing the tree is computationally costly, since it requires the algorithm to jump randomly through the structure in memory as each bit in the encoded data is read in.

Canonical Huffman codes address these two issues by generating the codes in a clear, standardized format: all the codes for a given length are assigned their values sequentially. This means that instead of storing the structure of the code tree, only the lengths of the codes are required for decompression, reducing the size of the encoded data. Additionally, because the codes are sequential, the decoding algorithm can be dramatically simplified so that it is computationally efficient.

The normal Huffman coding algorithm assigns a variable-length code to every symbol in the alphabet. More frequently used symbols will be assigned a shorter code. For example, suppose we have the following non-canonical codebook:

A = 11, B = 0, C = 101, D = 100

Here the letter A has been assigned 2 bits, B has 1 bit, and C and D both have 3 bits. To make the code a canonical Huffman code, the codes are renumbered. The bit lengths stay the same, with the code book being sorted first by codeword length and secondly by alphabetical value of the letter:

B (1 bit), A (2 bits), C (3 bits), D (3 bits)

Each of the existing codes is then replaced with a new one of the same length, using the following algorithm:

1. The first symbol in the list is assigned a codeword of the same length as its original codeword, but consisting of all zeros.
2. Each subsequent symbol is assigned the next binary number in sequence, ensuring that later codes are always higher in value.
3. When a longer codeword is reached, then after incrementing, zeros are appended until the length of the new codeword matches; this can be thought of as a left shift.

By following these three rules, the canonical version of the code book produced will be:

B = 0, A = 10, C = 110, D = 111

Another perspective on the canonical codewords is that they are the digits past the radix point (binary point) in a binary representation of a certain series. Specifically, suppose the lengths of the codewords are l_1, ..., l_n.
Then the canonical codeword for symbol i is the first l_i binary digits past the radix point in the binary representation of

\sum_{j=1}^{i-1} 2^{-l_j}.

This perspective is particularly useful in light of Kraft's inequality, which says that the sum above will always be less than or equal to 1 (since the lengths come from a prefix-free code). This shows that adding one in the algorithm above never overflows to create a codeword longer than intended.

The advantage of a canonical Huffman tree is that it can be encoded in fewer bits than an arbitrary tree. Let us take our original Huffman codebook. There are several ways we could encode this Huffman tree. For example, we could write each symbol followed by its number of bits and its code. Since we are listing the symbols in sequential alphabetical order, we can omit the symbols themselves, listing just the number of bits and the code.

With our canonical version we have the knowledge that the symbols are in sequential alphabetical order and that a later code will always be higher in value than an earlier one. The only parts left to transmit are the bit lengths (number of bits) for each symbol. Note that our canonical Huffman tree always has higher values for longer bit lengths, and that any symbols of the same bit length (C and D) have higher code values for higher symbols. Since two-thirds of the constraints are known, only the number of bits for each symbol need be transmitted: in our example, 2, 1, 3, 3 for A, B, C, D. With knowledge of the canonical Huffman algorithm, it is then possible to recreate the entire table (symbol and code values) from just the bit lengths. Unused symbols are normally transmitted as having zero bit length.

Another efficient way of representing the codebook is to list all symbols in increasing order by their bit lengths and to record the number of symbols for each bit length. For the example mentioned above, the encoding becomes (1, 1, 2), ('B', 'A', 'C', 'D'): the first symbol B is of length 1, then A is of length 2, and the remaining 2 symbols (C and D) are of length 3. Since the symbols are sorted by bit length, we can efficiently reconstruct the codebook. A pseudocode description of the reconstruction is given below.

This type of encoding is advantageous when only a few symbols in the alphabet are being compressed. For example, suppose the codebook contains only the 4 letters C, O, D and E, each of length 2. To represent the letter O using the previous method, we would need either to add a lot of zeros for the unused letters of the alphabet (Method 1) or to record which 4 letters we have used. Each way makes the description longer than Method 2, which here is simply (0, 4), ('C', 'O', 'D', 'E'): no codes of length 1 and four codes of length 2, for the symbols C, O, D and E. The JPEG File Interchange Format uses Method 2 of encoding, because at most only 162 symbols out of the 8-bit alphabet, which has size 256, will be in the codebook.

Given a list of symbols sorted by bit length, the following pseudocode will print a canonical Huffman code book:[1][2]
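The pseudocode itself was not preserved in this text; the following Python sketch implements the same standard procedure described above (assign the first, shortest code as all zeros, then increment, left-shifting by any increase in length):

    def print_canonical_code_book(symbols_with_lengths):
        """Print a canonical Huffman code book.

        symbols_with_lengths: (symbol, bit_length) pairs already sorted
        first by bit length and then alphabetically, as described above.
        """
        code = 0
        prev_length = symbols_with_lengths[0][1]
        for symbol, length in symbols_with_lengths:
            code <<= length - prev_length   # append zeros when the length grows
            print(symbol, "=", format(code, "0{}b".format(length)))
            code += 1                       # next code in sequence
            prev_length = length

    # The worked example from this article: B (1 bit), A (2), C (3), D (3)
    # prints B = 0, A = 10, C = 110, D = 111.
    print_canonical_code_book([("B", 1), ("A", 2), ("C", 3), ("D", 3)])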
https://en.wikipedia.org/wiki/Canonical_Huffman_code
Session replay is the ability to replay a visitor's journey on a web site or within a mobile application or web application. Replay can include the user's view (browser or screen output), user input (keyboard and mouse inputs), and logs of network events or console logs.

Session replay is intended to help improve customer experience[1] and to help identify obstacles in conversion processes on websites. However, it can also be used to study a website's usability, customer behavior, and the handling of customer service questions, since the customer journey, with all its interactions, can be replayed. Some organizations also use this capability to analyse fraudulent behavior on websites. Some solutions augment the session replay with advanced analytics that can identify segments of customers that are struggling to use the website.[2] This allows the replay capability to be used much more efficiently and reduces the need to replay other customer sessions unnecessarily.

There are generally two ways to capture and replay visitor sessions: client-side and tag-free server-side.

There are many tag-based, client-side solutions that offer video-like replay of a visitor's session. While replay is analogous to video, it is more accurately a reproduction of a specific user's experience, detailing mouse movements, clicks, taps, and scrolls. The underlying data for the session recordings is captured by tagging pages. Some advanced tools are able to access the Document Object Model (DOM) directly and can play back most interactions within the DOM, including all mutations, with a high degree of accuracy. Such tools have the advantage of being able to replay the entire client experience in a movie-like format, and they can also deal with modern single-page applications. The disadvantage is that the tracking script can easily be detected and blocked by any ad blocker, whose use has become common (in 2017, 615 million devices had active ad blocking).[3]

Tag-free server-side solutions capture all website traffic and replay every visitor interaction from every device, including all mobile users from any location. Sessions are replayed step by step, providing the ability to search, locate and analyze aspects of a visitor's session, including clicks and form entry. Server-side solutions require hardware and software to be installed on premises. An advantage of server-side recording is that it cannot be blocked. However, one will not be able to see a video-like replay of client-side activities such as scrolling and mouse movements, and this approach handles modern single-page applications poorly.

A hybrid approach combines the advantages without the weaknesses. It ensures that every session is recorded by server-side capture (important for compliance) and enriched with client-side tracking data of mouse movements, clicks, scrolling, keystrokes, and user behavior (driven by customer experience insights). This approach works very well with modern single-page applications, offering both a movie-like replay and fully compliant capture. It can be deployed either on premises or as Software as a service (SaaS). All of the tools listed below are available as Software as a service (SaaS) solutions.
https://en.wikipedia.org/wiki/Session_replay
The International Organization for Standardization (ISO /ˈaɪsoʊ/;[3] French: Organisation internationale de normalisation; Russian: Международная организация по стандартизации) is an independent, non-governmental, international standard development organization composed of representatives from the national standards organizations of member countries.[4][5] Membership requirements are given in Article 3 of the ISO Statutes.[6]

ISO was founded on 23 February 1947, and (as of July 2024) it has published over 25,000 international standards covering almost all aspects of technology and manufacturing. It has over 800 technical committees (TCs) and subcommittees (SCs) to take care of standards development.[7]

The organization develops and publishes international standards in technical and nontechnical fields, including everything from manufactured products and technology to food safety, transport, IT, agriculture, and healthcare.[7][8][9][10] More specialized topics like electrical and electronic engineering are instead handled by the International Electrotechnical Commission.[11] It is headquartered in Geneva, Switzerland.[7] The three official languages of ISO are English, French, and Russian.[2]

The International Organization for Standardization in French is Organisation internationale de normalisation and in Russian, Международная организация по стандартизации (Mezhdunarodnaya organizatsiya po standartizatsii). Although one might think ISO is an abbreviation for "International Standardization Organization" or a similar title in another language, the letters do not officially represent an acronym or initialism. The organization provides this explanation of the name: "Because 'International Organization for Standardization' would have different acronyms in different languages (IOS in English, OIN in French), our founders decided to give it the short form ISO. ISO is derived from the Greek word isos (ίσος, meaning 'equal'). Whatever the country, whatever the language, the short form of our name is always ISO."[7]

During the founding meetings of the new organization, however, the Greek word explanation was not invoked, so this meaning may be a false etymology.[12] Both the name ISO and the ISO logo are registered trademarks, and their use is restricted.[13]

The organization that is known today as ISO began in 1926 as the International Federation of the National Standardizing Associations (ISA), which primarily focused on mechanical engineering. The ISA was suspended in 1942 during World War II but, after the war, the ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body.[14] In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the International Organization for Standardization. The organization officially began operations on 23 February 1947.[15][16]

ISO Standards were originally known as ISO Recommendations (ISO/R); e.g., "ISO 1" was issued in 1951 as "ISO/R 1".[17]

ISO is a voluntary organization whose members are recognized authorities on standards, each one representing one country. Members meet annually at a General Assembly to discuss the strategic objectives of ISO.
The organization is coordinated by a central secretariat based in Geneva.[18] A council with a rotating membership of 20 member bodies provides guidance and governance, including setting the annual budget of the central secretariat.[18][19] The technical management board is responsible for more than 250 technical committees, which develop the ISO standards.[18][20][21][22]

ISO has a joint technical committee (JTC) with the International Electrotechnical Commission (IEC) to develop standards relating to information technology (IT). Known as JTC 1 and entitled "Information technology", it was created in 1987, and its mission is "to develop worldwide Information and Communication Technology (ICT) standards for business and consumer applications."[23][24] There was previously also a JTC 2, created in 2009 for a joint project to establish common terminology for "standardization in the field of energy efficiency and renewable energy sources".[25] It was later disbanded.

As of 2022, there are 167 national members representing ISO in their country, with each country having only one member.[7][26] ISO has three membership categories.[1] Participating members are called "P" members, as opposed to observing members, who are called "O" members. ISO is funded by a combination of sources, including membership subscriptions and the sale of standards.[27]

International standards are the main products of ISO. It also publishes technical reports, technical specifications, publicly available specifications, technical corrigenda (corrections), and guides.[28][29]

ISO documents have strict copyright restrictions, and ISO charges for most copies. As of 2020, the typical cost of a copy of an ISO standard is about US$120 or more (and electronic copies typically have a single-user license, so they cannot be shared among groups of people).[31] Some standards by ISO and its official U.S. representative (and, via the U.S. National Committee, the International Electrotechnical Commission) are made freely available.[32][33]

A standard published by ISO/IEC is the last stage of a long process that commonly starts with the proposal of new work within a committee. A number of abbreviations are used to mark a standard with its status, with further abbreviations for amendments and other document states.[34][35][36][37][38][39][40][41][42]

International Standards are developed by ISO technical committees (TC) and subcommittees (SC) in a process with six steps: proposal stage, preparatory stage, committee stage, enquiry stage, approval stage, and publication stage.[36][43] The TC/SC may set up working groups (WG) of experts for the preparation of working drafts. Subcommittees may have several working groups, which in turn may have several sub-groups (SG).[44]

It is possible to omit certain stages if there is a document with a certain degree of maturity at the start of a standardization project, for example a standard developed by another organization. ISO/IEC directives also allow the so-called "fast-track procedure". In this procedure, a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies, or as a final draft International Standard (FDIS) if the document was developed by an international standardizing body recognized by the ISO Council.[36]

The first step, a proposal of work (New Proposal), is approved at the relevant subcommittee or technical committee (e.g., SC 29 and JTC 1 respectively in the case of MPEG, the Moving Picture Experts Group).
A working group (WG) of experts is typically set up by the subcommittee for the preparation of a working draft (e.g., MPEG is a collection of seven working groups as of 2023). When the scope of a new work item is sufficiently clarified, some of the working groups may make an open request for proposals, known as a "call for proposals". The first document that is produced, for example, for audio and video coding standards, is called a verification model (VM) (previously also called a "simulation and test model"). When sufficient confidence in the stability of the standard under development is reached, a working draft (WD) is produced. This is in the form of a standard, but is kept internal to the working group for revision.

When a working draft is sufficiently mature and the subcommittee is satisfied that it has developed an appropriate technical document for the problem being addressed, it becomes a committee draft (CD) and is sent to the P-member national bodies of the SC for the collection of formal comments. Revisions may be made in response to the comments, and successive committee drafts may be produced and circulated until consensus is reached to proceed to the next stage, called the "enquiry stage". After a consensus to proceed is established, the subcommittee will produce a draft international standard (DIS), and the text is submitted to national bodies for voting and comment within a period of five months. A document in the DIS stage is available to the public for purchase and may be referred to by its ISO DIS reference number.[45]

Following consideration of any comments and revision of the document, the draft is then approved for submission as a Final Draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and if not more than one-quarter of the total number of votes cast are negative. ISO will then hold a ballot among the national bodies in which no technical changes are allowed (a yes/no final approval ballot), within a period of two months. It is approved as an International Standard (IS) if a two-thirds majority of the P-members of the TC/SC is in favour and not more than one-quarter of the total number of votes cast are negative. After approval, the document is published by the ISO central secretariat, with only minor editorial changes introduced before its publication as an International Standard.[34][36]

Except for a relatively small number of standards,[32] ISO standards are not available free of charge, but rather for a purchase fee,[46] which has been seen by some as unaffordable for small open-source projects.[47]

The process of developing standards within ISO was criticized around 2007 as being too difficult for timely completion of large and complex standards, and some members were failing to respond to ballots, causing problems in completing the necessary steps within the prescribed time limits. In some cases, alternative processes have been used to develop standards outside of ISO and then submit them for its approval.
A more rapid "fast-track" approval procedure was used in ISO/IEC JTC 1 for the standardization of Office Open XML (OOXML, ISO/IEC 29500, approved in April 2008), and another rapid alternative, the "publicly available specification" (PAS) process, had been used by OASIS to obtain approval of OpenDocument as an ISO/IEC standard (ISO/IEC 26300, approved in May 2006).[48]

As was suggested at the time by Martin Bryan, the outgoing convenor (chairman) of working group 1 (WG1) of ISO/IEC JTC 1/SC 34, the rules of ISO were eventually tightened so that participating members that fail to respond to votes are demoted to observer status.

The computer security entrepreneur and Ubuntu founder Mark Shuttleworth was quoted in a ZDNet blog article in 2008 about the process of standardization of OOXML as saying "I think it de-values the confidence people have in the standards setting process", and alleged that ISO did not carry out its responsibility. He also said that Microsoft had intensely lobbied many countries that traditionally had not participated in ISO and stacked technical committees with Microsoft employees, solution providers, and resellers sympathetic to Office Open XML:[49]

"When you have a process built on trust and when that trust is abused, ISO should halt the process... ISO is an engineering old boys club and these things are boring so you have to have a lot of passion ... then suddenly you have an investment of a lot of money and lobbying and you get artificial results. The process is not set up to deal with intensive corporate lobbying and so you end up with something being a standard that is not clear."

International Workshop Agreements (IWAs) are documents that establish a collaboration agreement allowing "key industry players to negotiate in an open workshop environment" outside of ISO, in a way that may eventually lead to the development of an ISO standard.[42]

On occasion, the fact that many of the ISO-created standards are ubiquitous has led to common use of "ISO" to describe the product that conforms to a standard; examples include the use of "ISO" for film speed ratings and for optical disc images ("ISO images"). ISO presents several awards to acknowledge valuable contributions made in the realm of international standardization.[50] ISO currently counts 834 technical committees, each covering a particular field.[7]
https://en.wikipedia.org/wiki/International_Organization_for_Standardization
Pseudonymous Bosch (/ˈsuːdənɪməs bɒʃ, bɔːʃ, bɔːs/) is the pen name of Raphael Simon (born October 25, 1967), the author of The Secret Series and The Bad Books series of fiction books, as well as The Unbelievable Oliver chapter-book mysteries and two stand-alone titles. He has written 12 books.[1]

Simon was born on October 25, 1967, in Los Angeles County, California,[4] to writers Dyanne Asimow and Roger L. Simon.[note 1][2] His brother, Jesse, is a visual artist.[2] He also has a significantly younger half-sister, Madeleine, from his father's third marriage.[5] Simon attended Yale,[6] where he came out as gay when he was 20 years old.[7] Later he earned an MA in Comparative Literature from UC Irvine.[citation needed] He went on to teach courses about detective fiction, composition, and fiction for young readers at various colleges and universities in California.[8] He currently lives in Pasadena, California, with his husband, Phillip de Leon.[9] They have twin children, who were born in 2007.[9]

Bosch had long been suspected to be the author Raphael Simon, although Bosch disputed this until he revealed himself as Simon in a May 8, 2016, editorial in The New York Times.[10] The pseudonym is a play on that of the artist Hieronymus Bosch.[11] It may also play off the fictional Los Angeles detective Hieronymus "Harry" Bosch, likewise named after the artist, created by the author Michael Connelly and appearing in several of his novels starting in 1992.[citation needed]

Prior to becoming a novelist, Simon worked as a screenwriter, including as a staff writer on the Nickelodeon series Rocket Power. He started writing his first novel, The Name of This Book Is Secret, as a series of letters to a fourth-grader. It was published in 2007 and was nominated for an Edgar Allan Poe Award for best juvenile mystery. A sequel followed in 2008: If You're Reading This, It's Too Late. Eventually there would be five titles in the Secret Series. The New York Times bestselling series has sold millions of copies and has been translated into many languages.[citation needed]

In 2013, Bosch published Write This Book!, a do-it-yourself book; he calls it "a book that readers will write for me". Bosch elaborated in an interview with Wired, stating that "it is a kind of half-written, guided mystery. Parts of it are going to be multiple choice, choose-your-own-adventure, parts of it will be more like Mad Libs, and some silly stuff".[12] The following year, Bosch returned readers to the world of the Secret Series in Bad Magic, the first novel in what became the Bad Books trilogy.

On May 14, 2019, Bosch published The Unbelievable Oliver and the Four Jokers, with illustrations by Shane Pangburn. The book is about an eight-year-old boy who longs to be a professional magician.[13] A follow-up, The Unbelievable Oliver and the Sawed-in-Half Dads, was released on May 12, 2020.[14] In 2021, Bosch published The Anti-Book, his first book under his real name, Raphael Simon.[15]
https://en.wikipedia.org/wiki/Pseudonymous_Bosch
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.

The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must run to completion to give any useful result (see, for example, the Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices, it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.[1]

An intuitive method for finding the largest (in absolute value) eigenvalue of a given m-by-m matrix A is the power iteration: starting with an arbitrary initial vector b, calculate Ab, A^2 b, A^3 b, ..., normalizing the result after every application of the matrix A. This sequence converges to the eigenvector corresponding to the eigenvalue with the largest absolute value, λ_1. However, much potentially useful computation is wasted by using only the final result, A^{n-1} b. This suggests that instead we form the so-called Krylov matrix

K_n = [ b, Ab, A^2 b, ..., A^{n-1} b ].

The columns of this matrix are not in general orthogonal, but we can extract an orthogonal basis via a method such as Gram–Schmidt orthogonalization. The resulting set of vectors is thus an orthogonal basis of the Krylov subspace K_n. We may expect the vectors of this basis to span good approximations of the eigenvectors corresponding to the n largest eigenvalues, for the same reason that A^{n-1} b approximates the dominant eigenvector.

The Arnoldi iteration uses the modified Gram–Schmidt process to produce a sequence of orthonormal vectors, q_1, q_2, q_3, ..., called the Arnoldi vectors, such that for every n, the vectors q_1, ..., q_n span the Krylov subspace K_n. Explicitly, the algorithm proceeds as in the code sketched at the end of this passage. The j-loop projects out the components in the directions of q_1, ..., q_{k-1}; this ensures the orthogonality of all the generated vectors.

The algorithm breaks down when q_k is the zero vector. This happens when the minimal polynomial of A is of degree k. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point.

Every step of the k-loop takes one matrix-vector product and approximately 4mk floating-point operations. An implementation in the programming language Python, with support of the NumPy library, is sketched after this passage.

Let Q_n denote the m-by-n matrix formed by the first n Arnoldi vectors q_1, q_2, ..., q_n, and let H_n be the (upper Hessenberg) matrix formed by the numbers h_{j,k} computed by the algorithm. The orthogonalization method has to be specifically chosen such that the lower Arnoldi/Krylov components are removed from higher Krylov vectors. As A q_i can be expressed in terms of q_1, ..., q_{i+1} by construction, A q_i is orthogonal to q_{i+2}, ..., q_n. We then have

A Q_n = Q_n H_n + h_{n+1,n} q_{n+1} e_n^*.

The matrix H_n can be viewed as A in the subspace K_n with the Arnoldi vectors as an orthogonal basis; A is orthogonally projected onto K_n.
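The Python/NumPy listing referenced above was not preserved in this text; the following is a minimal sketch of the iteration (function and variable names are illustrative):

    import numpy as np

    def arnoldi_iteration(A, b, n):
        """Compute a basis of the Krylov subspace K_n(A, b).

        Returns Q, an m-by-(n+1) array whose columns are the Arnoldi vectors,
        and h, the (n+1)-by-n upper Hessenberg array with A Q_n = Q_{n+1} h.
        """
        m = A.shape[0]
        Q = np.zeros((m, n + 1))
        h = np.zeros((n + 1, n))
        Q[:, 0] = b / np.linalg.norm(b)      # normalize the initial vector
        for k in range(n):
            v = A @ Q[:, k]                  # one matrix-vector product per step
            for j in range(k + 1):           # modified Gram-Schmidt (the j-loop):
                h[j, k] = Q[:, j] @ v        # project out earlier Arnoldi vectors
                v = v - h[j, k] * Q[:, j]
            h[k + 1, k] = np.linalg.norm(v)
            if h[k + 1, k] < 1e-12:          # breakdown: the subspace is A-invariant
                return Q, h
            Q[:, k + 1] = v / h[k + 1, k]
        return Q, h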
The matrix H_n can be characterized by the following optimality condition: the characteristic polynomial of H_n minimizes ||p(A)q_1||_2 among all monic polynomials p of degree n. This optimality problem has a unique solution if and only if the Arnoldi iteration does not break down.

The relation between the Q matrices in subsequent iterations is given by

A Q_n = Q_{n+1} H̃_n,

where H̃_n is an (n+1)-by-n matrix formed by adding an extra row to H_n.

The idea of the Arnoldi iteration as an eigenvalue algorithm is to compute eigenvalues in the Krylov subspace. The eigenvalues of H_n are called the Ritz eigenvalues. Since H_n is a Hessenberg matrix of modest size, its eigenvalues can be computed efficiently, for instance with the QR algorithm or the closely related Francis algorithm. The Francis algorithm itself can be considered to be related to power iterations, operating on a nested Krylov subspace. In fact, the most basic form of the Francis algorithm appears to be to choose b equal to Ae_1 and to extend n to the full dimension of A. Improved versions include one or more shifts, and higher powers of A may be applied in a single step.[2]

This is an example of the Rayleigh–Ritz method. It is often observed in practice that some of the Ritz eigenvalues converge to eigenvalues of A. Since H_n is n-by-n, it has at most n eigenvalues, and not all eigenvalues of A can be approximated. Typically, the Ritz eigenvalues converge to the largest eigenvalues of A. To get the smallest eigenvalues of A, the inverse (operation) of A should be used instead.

This can be related to the characterization of H_n as the matrix whose characteristic polynomial minimizes ||p(A)q_1|| in the following way. A good way to get p(A) small is to choose the polynomial p such that p(x) is small whenever x is an eigenvalue of A. Hence, the zeros of p (and thus the Ritz eigenvalues) will be close to the eigenvalues of A. However, the details are not fully understood yet. This is in contrast to the case where A is Hermitian; in that situation, the Arnoldi iteration becomes the Lanczos iteration, for which the theory is more complete.

Due to practical storage considerations, common implementations of Arnoldi methods typically restart after a fixed number of iterations. One approach is the Implicitly Restarted Arnoldi Method (IRAM)[3] by Lehoucq and Sorensen, which was popularized in the free and open-source software package ARPACK.[4] Another approach is the Krylov–Schur algorithm by G. W. Stewart, which is more stable and simpler to implement than IRAM.[5]

The generalized minimal residual method (GMRES) is a method for solving Ax = b based on the Arnoldi iteration.
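As a small illustration of the Ritz-value idea, reusing the hypothetical arnoldi_iteration sketch above: the eigenvalues of the leading n-by-n block of h are the Ritz values, and the extreme ones typically converge to the extreme eigenvalues of A first.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((500, 500))
    A = (M + M.T) / 2                       # symmetric test matrix, real spectrum

    n = 30
    Q, h = arnoldi_iteration(A, rng.standard_normal(500), n)
    ritz = np.linalg.eigvals(h[:n, :n])     # Ritz eigenvalues

    print(np.sort(ritz.real)[-3:])          # compare against the true extremes:
    print(np.sort(np.linalg.eigvalsh(A))[-3:])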
https://en.wikipedia.org/wiki/Arnoldi_iteration
Metaprogramming is a computer programming technique in which computer programs have the ability to treat other programs as their data. It means that a program can be designed to read, generate, analyse, or transform other programs, and even modify itself, while running.[1][2] In some cases, this allows programmers to minimize the number of lines of code needed to express a solution, in turn reducing development time.[3] It also allows programs more flexibility to efficiently handle new situations without recompiling.

Metaprogramming can be used to move computations from runtime to compile time, to generate code using compile-time computations, and to enable self-modifying code. The ability of a programming language to be its own metalanguage allows reflective programming, and is termed reflection.[4] Reflection is a valuable language feature that facilitates metaprogramming.

Metaprogramming was popular in the 1970s and 1980s in list-processing languages such as Lisp. Lisp machine hardware gained some notice in the 1980s and enabled applications that could process code. They were often used for artificial intelligence applications.

Metaprogramming enables developers to write programs and develop code that falls under the generic programming paradigm. Having the programming language itself as a first-class data type (as in Lisp, Prolog, SNOBOL, or Rebol) is also very useful; this is known as homoiconicity. Generic programming invokes a metaprogramming facility within a language by allowing one to write code without the concern of specifying data types, since they can be supplied as parameters when used.

Metaprogramming usually works in one of three ways: by exposing the internals of the run-time engine to the programming code through application programming interfaces (APIs); by dynamically executing expressions that contain programming commands, often composed from strings; or by stepping outside the language entirely with program transformation tools that accept and emit source code.[5]

Lisp is probably the quintessential language with metaprogramming facilities, both because of its historical precedence and because of the simplicity and power of its metaprogramming. In Lisp metaprogramming, the unquote operator (typically a comma) introduces code that is evaluated at program definition time rather than at run time. The metaprogramming language is thus identical to the host programming language, and existing Lisp routines can be directly reused for metaprogramming if desired. This approach has been implemented in other languages by incorporating an interpreter in the program, which works directly with the program's data. There are implementations of this kind for some common high-level languages, such as RemObjects' Pascal Script for Object Pascal.

A simple example of a metaprogram is a POSIX shell script for generative programming (a stand-in sketch is given at the end of this article). The script generates a new 993-line program that prints out the numbers 1–992. This is only an illustration of how to use code to write more code; it is not the most efficient way to print out a list of numbers. Nonetheless, a programmer can write and execute this metaprogram in less than a minute, and will have generated over 1000 lines of code in that amount of time.

A quine is a special kind of metaprogram that produces its own source code as its output. Quines are generally of recreational or theoretical interest only.

Not all metaprogramming involves generative programming. If programs are modifiable at runtime, or if incremental compiling is available (such as in C#, Forth, Frink, Groovy, JavaScript, Lisp, Elixir, Lua, Nim, Perl, PHP, Python, Rebol, Ruby, Rust, R, SAS, Smalltalk, and Tcl), then techniques can be used to perform metaprogramming without generating source code.

One style of generative approach is to employ domain-specific languages (DSLs).
A fairly common example of using DSLs involves generative metaprogramming: lex and yacc, two tools used to generate lexical analysers and parsers, let the user describe the language using regular expressions and context-free grammars, and they embed the complex algorithms required to efficiently parse the language.

One usage of metaprogramming is to instrument programs in order to do dynamic program analysis.

Some argue that there is a sharp learning curve to making complete use of metaprogramming features.[8] Since metaprogramming gives more flexibility and configurability at runtime, misuse or incorrect use of metaprogramming can result in unwarranted and unexpected errors that can be extremely difficult for an average developer to debug. It can introduce risks into the system and make it more vulnerable if not used with care. Common problems arising from the wrong use of metaprogramming include the inability of the compiler to identify missing configuration parameters, and invalid or incorrect data resulting in unknown exceptions or unexpected results.[9] Due to this, some believe[8] that only highly skilled developers should work on developing features which exercise metaprogramming in a language or platform, and that average developers must learn how to use these features as part of a convention.

The IBM/360 and derivatives had powerful macro assembler facilities that were often used to generate complete assembly language programs[citation needed] or sections of programs (for different operating systems, for instance). Macros provided with the CICS transaction processing system included assembler macros that generated COBOL statements as a pre-processing step. Other assemblers, such as MASM, also support macros.

Metaclasses are provided by a number of programming languages. The use of dependent types allows proving that generated code is never invalid.[15] However, this approach is leading-edge and is rarely found outside of research programming languages.

The list of notable metaprogramming systems is maintained at List of program transformation systems.
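The shell script referenced earlier in this article was not preserved in this text. A Python stand-in for the same generative idea, writing a new 993-line shell program (one shebang line plus 992 echo commands) that prints the numbers 1 to 992, might look like this (the output file name is illustrative):

    # Generative programming: this program writes another program.
    lines = ["#!/bin/sh"] + ["echo {}".format(i) for i in range(1, 993)]
    with open("program.sh", "w") as f:
        f.write("\n".join(lines) + "\n")

Running the generated program.sh then prints the numbers 1 through 992, one per line.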
https://en.wikipedia.org/wiki/Metaprogramming
Cognitive biases are systematic patterns of deviation from norm or rationality in judgment.[1][2] They are often studied in psychology, sociology and behavioral economics.[1]

Although the reality of most of these biases is confirmed by reproducible research,[3][4] there are often controversies about how to classify these biases or how to explain them.[5] Several theoretical causes are known for some cognitive biases, which provides a classification of biases by their common generative mechanism (such as noisy information-processing[6]). Gerd Gigerenzer has criticized the framing of cognitive biases as errors in judgment, and favors interpreting them as arising from rational deviations from logical thought.[7]

Explanations include information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive ("cold") bias, such as mental noise,[6] or motivational ("hot") bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.[8][9]

There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill: a way to establish a connection with the other person.[10]

Although this research overwhelmingly involves human subjects, some studies have found bias in non-human animals as well. For example, loss aversion has been shown in monkeys, and hyperbolic discounting has been observed in rats, pigeons, and monkeys.[11]

These biases affect belief formation, reasoning processes, business and economic decisions, and human behavior in general.

The anchoring bias, or focalism, is the tendency to rely too heavily ("anchor") on one trait or piece of information when making decisions (usually the first piece of information acquired on that subject); several more specific biases involve anchoring.[12][13]

Apophenia is the tendency to perceive meaningful connections between unrelated things, and it comes in several types.[18]

The availability heuristic (also known as the availability bias) is the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be.[22]

Cognitive dissonance is the perception of contradictory information and the mental toll of it.

Confirmation bias is the tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions; multiple other cognitive biases involve or are types of confirmation bias.[35]

Egocentric bias is the tendency to rely too heavily on one's own perspective and/or to have a different perception of oneself relative to others; it takes several forms.[38]

Extension neglect occurs where the quantity of the sample size is not sufficiently taken into consideration when assessing the outcome, relevance or judgement.

False priors are initial beliefs and knowledge which interfere with the unbiased evaluation of factual evidence and lead to incorrect conclusions.
Several biases are based on false priors. The framing effect is the tendency to draw different conclusions from the same information, depending on how that information is presented; it takes a number of forms, and several related biases derive from prospect theory. Further groupings include association fallacies, attribution biases, and biases involved in conformity. Ingroup bias is the tendency for people to give preferential treatment to others they perceive to be members of their own groups.

In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, among them the misattributions of memory.
https://en.wikipedia.org/wiki/List_of_cognitive_biases
The software industry includes businesses for development, maintenance and publication of software that use different business models, mainly either "license/maintenance based" (on-premises) or "cloud based" (such as SaaS, PaaS, IaaS, MBaaS, MSaaS, DCaaS, etc.). The industry also includes software services, such as training, documentation, consulting and data recovery. The software and computer services industry spends more than 11% of its net sales on research and development, which is, in comparison with other industries, the second-highest share after pharmaceuticals and biotechnology.[1]

The first company founded to provide software products and services was Computer Usage Company in 1955.[2] Before that time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as Sperry Rand and IBM.

The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers. Some were distributed freely between users of a particular machine for no charge. Others were sold on a commercial basis, and firms such as Computer Sciences Corporation (founded in 1959) started to grow. Other influential or typical software companies founded in the early 1960s included Advanced Computer Techniques, Automatic Data Processing, Applied Data Research, and Informatics General.[3][4] The computer hardware makers started bundling operating systems, systems software and programming environments with their machines.

When Digital Equipment Corporation (DEC) brought a relatively low-priced minicomputer to market, it brought computing within the reach of many more companies and universities worldwide, and it spawned great innovation in terms of new, powerful programming languages and methodologies. New software was built for minicomputers, so other manufacturers, including IBM, followed DEC's example quickly, resulting in the IBM AS/400 amongst others.

The industry expanded greatly with the rise of the personal computer ("PC") in the mid-1970s, which brought desktop computing to the office worker for the first time. In the following years, the PC also created a growing market for games, applications, and utilities. DOS, Microsoft's first operating system product, was the dominant operating system at the time.

In the early years of the 21st century, another successful business model arose for hosted software, called software as a service, or SaaS; this was at least the third time[citation needed] this model had been attempted. From the point of view of producers of some proprietary software, SaaS reduces the concerns about unauthorized copying, since it can only be accessed through the Web, and by definition no client software is loaded onto the end user's PC.

Market research firm Gartner estimates the global market for IT spending in 2024 at $3.73 trillion. If telecom services are included, this rises to $5.26 trillion.[5] Major companies include Microsoft, HP, Oracle, Dell and IBM.[6]

The software industry has been subject to a high degree of consolidation over the past couple of decades. Between 1995 and 2018, around 37,039 mergers and acquisitions were announced, with a total known value of US$1,166 billion.[7] The highest number and value of deals were set in 2000, during the high times of the dot-com bubble, with 2,674 transactions valued at US$105 billion.
In 2017, 2,547 deals were announced, valued at US$111 billion. Approaches for successfully acquiring and integrating software companies are available.[8]

Software industry business models include SaaS (subscription-based), PaaS (platform services), IaaS (infrastructure services), and freemium (free with premium features). Others are perpetual licenses (one-time fee), ad-supported (free with ads), open source (free with paid support), pay-per-use (usage-based), and consulting or customization services. Hybrid models combine multiple approaches. Business models of software companies have been widely discussed.[9][10] Network effects in software ecosystems, networks of companies, and their customers are an important element in the strategy of software companies.[11]
https://en.wikipedia.org/wiki/Software_industry
Web-based simulation (WBS) is the invocation of computer simulation services over the World Wide Web, specifically through a web browser.[1][2][3][4] Increasingly, the web is being looked upon as an environment for providing modeling and simulation applications, and as such it is an emerging area of investigation within the simulation community.[4][5][6] Web-based simulation is used in several contexts.

Web-based simulation can take place either on the server side or on the client side. In server-side simulation, the numerical calculations and visualization (generation of plots and other computer graphics) are carried out on the web server, while the interactive graphical user interface (GUI) is often partly provided by the client side, for example using server-side scripting such as PHP or CGI scripts, interactive services based on Ajax, or a conventional application remotely accessed through a VNC Java applet. In client-side simulation, the simulation program is downloaded from the server side but completely executed on the client side, for example using Java applets, Flash animations, JavaScript, or some mathematical software viewer plug-in. Server-side simulation is not scalable to many simultaneous users, but it places fewer demands on the user's computer performance and web-browser plug-ins than client-side simulation does.

The term on-line simulation sometimes refers to server-side web-based simulation, and sometimes to symbiotic simulation, i.e. a simulation that interacts in real time with a physical system.

The upcoming cloud-computing technologies can be used for new server-side simulation approaches. For instance, there are[example needed] multi-agent-simulation applications which are deployed on cloud-computing instances and act independently, which allows simulations to be highly scalable.[clarification needed]
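As a toy illustration of the server-side approach, the sketch below (standard library only; the decay model, parameter names, and port are all invented for illustration) runs the numerical work on the server and returns plain JSON, leaving plotting and interaction to the browser:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def simulate(rate, steps):
        # Toy model: exponential decay x_{t+1} = x_t * (1 - rate).
        x, out = 1.0, []
        for _ in range(steps):
            x *= 1.0 - rate
            out.append(x)
        return out

    class SimHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /?rate=0.1&steps=20
            q = parse_qs(urlparse(self.path).query)
            result = simulate(float(q.get("rate", ["0.1"])[0]),
                              int(q.get("steps", ["20"])[0]))
            body = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), SimHandler).serve_forever()

A client-side simulation would instead ship the simulate function itself to the browser and run it there, trading server load for demands on the user's machine.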
https://en.wikipedia.org/wiki/Web-based_simulation
Signals intelligence (SIGINT) is the act and field of intelligence-gathering by interception of signals, whether communications between people (communications intelligence, abbreviated COMINT) or electronic signals not directly used in communication (electronic intelligence, abbreviated ELINT).[1] As classified and sensitive information is usually encrypted, signals intelligence may necessarily involve cryptanalysis (to decipher the messages). Traffic analysis, the study of who is signaling to whom and in what quantity, is also used to integrate information, and it may complement cryptanalysis.[citation needed]

Electronic interceptions appeared as early as 1900, during the Boer War of 1899–1902. The British Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s, and the British Army used some limited wireless signalling. The Boers captured some wireless sets and used them to make vital transmissions.[2] Since the British were the only people transmitting at the time, no special interpretation of the signals that they were intercepting was needed.[3]

The birth of signals intelligence in a modern sense dates from the Russo-Japanese War of 1904–1905. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS Diana, stationed in the Suez Canal, intercepted Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history.[4][ambiguous]

Over the course of the First World War, the new method of signals intelligence reached maturity.[5] Russia's failure to properly protect its communications fatally compromised the Russian Army's advance early in World War I and led to their disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive.

The British in particular built up great expertise in the newly emerging field of signals intelligence and codebreaking (synonymous with cryptanalysis). On the declaration of war, Britain cut all German undersea cables.[6] This forced the Germans to communicate exclusively via either a telegraph line that connected through the British network, and thus could be tapped, or via radio, which the British could then intercept.[7] Rear Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service at the Admiralty: Room 40.[7] An interception service known as the 'Y' service, together with the post office and Marconi stations, grew rapidly to the point where the British could intercept almost all official German messages.[7]

The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, and to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place, and a warning could be given. Detailed information about submarine movements was also available.[7]

The use of radio-receiving equipment to pinpoint the location of any single transmitter was also developed during the war. Captain H.J.
Round, working for Marconi, began carrying out experiments with direction-finding radio equipment for the army in France in 1915. By May 1915, the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports.[7]

Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The Battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place.[8] It played a vital role in subsequent naval clashes, including the Battle of Jutland, when the British fleet was sent out to intercept the German fleet. The direction-finding capability allowed the tracking and location of German ships, submarines, and Zeppelins. The system was so successful that by the end of the war, over 80 million words, comprising the totality of German wireless transmission over the course of the war, had been intercepted and decrypted by the operators of the Y-stations.[9] However, its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.

With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created.[10] The Government Code and Cypher School (GC&CS) was the first peace-time codebreaking agency, with a public function "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also with a secret directive to "study the methods of cypher communications used by foreign powers".[11] GC&CS officially formed on 1 November 1919, having produced its first decrypt already on 19 October.[10][12] By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems.[13]

The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail."

The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra", managed from the Government Code and Cypher School at Bletchley Park. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks feasible.

Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo".
Ultra decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942.[14] Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western Front divisions.

Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" Supreme Allied Commander Dwight D. Eisenhower, at the end of the war, described Ultra as having been "decisive" to Allied victory.[15] Sir Harry Hinsley, official historian of British Intelligence in World War II, argued that Ultra shortened the war "by not less than two years and probably by four years", and that, in the absence of Ultra, it is uncertain how the war would have ended.[16]

At a lower level, German cryptanalysis, direction finding, and traffic analysis were vital to Rommel's early successes in the Western Desert Campaign, until British forces tightened their communications discipline and Australian raiders destroyed his principal SIGINT company.[17]

The United States Department of Defense has published a formal definition of the term "signals intelligence". Being a broad field, SIGINT has many sub-disciplines; the two main ones are communications intelligence (COMINT) and electronic intelligence (ELINT).

A collection system has to know to look for a particular signal. "System", in this context, has several nuances. Targeting is the process of developing collection requirements:

First, atmospheric conditions, sunspots, the target's transmission schedule and antenna characteristics, and other factors create uncertainty that a given signal intercept sensor will be able to "hear" the signal of interest, even with a geographically fixed target and an opponent making no attempt to evade interception. Basic countermeasures against interception include frequent changing of radio frequency, polarization, and other transmission characteristics. An intercept aircraft could not get off the ground if it had to carry antennas and receivers for every possible frequency and signal type to deal with such countermeasures.

Second, locating the transmitter's position is usually part of SIGINT. Triangulation and more sophisticated radio location techniques, such as time-of-arrival methods, require multiple receiving points at different locations. These receivers send location-relevant information to a central point, or perhaps to a distributed system in which all participate, such that the information can be correlated and a location computed. Modern SIGINT systems therefore have substantial communications among intercept platforms. Even if some platforms are clandestine, there is still a broadcast of information telling them where and how to look for signals.[19] A United States targeting system under development in the late 1990s, PSTS, constantly sends out information that helps the interceptors properly aim their antennas and tune their receivers.

Larger intercept aircraft, such as the EP-3 or RC-135, have the on-board capability to do some target analysis and planning, but others, such as the RC-12 GUARDRAIL, are completely under ground direction. GUARDRAIL aircraft are fairly small and usually work in units of three to cover a tactical SIGINT requirement, whereas the larger aircraft tend to be assigned strategic/national missions.

Before the detailed process of targeting begins, someone has to decide there is a value in collecting information about something.
While it would be possible to direct signals intelligence collection at a major sports event, the systems would capture a great deal of noise, news signals, and perhaps announcements in the stadium. If, however, an anti-terrorist organization believed that a small group would be trying to coordinate its efforts using short-range unlicensed radios at the event, SIGINT targeting of radios of that type would be reasonable. Targeting would not know where in the stadium the radios might be located or the exact frequency they are using; those are the functions of subsequent steps such as signal detection and direction finding.

Once the decision to target is made, the various interception points need to cooperate, since resources are limited. Knowing what interception equipment to use becomes easier when a target country buys its radars and radios from known manufacturers, or is given them as military aid. National intelligence services keep libraries of devices manufactured by their own country and others, and then use a variety of techniques to learn what equipment is acquired by a given country. Knowledge of physics and electronic engineering further narrows the problem of what types of equipment might be in use. An intelligence aircraft flying well outside the borders of another country will listen for long-range search radars, not short-range fire-control radars that would be used by a mobile air defense. Soldiers scouting the front lines of another army know that the other side will be using radios that must be portable and not have huge antennas.

Even if a signal is human communications (e.g., a radio), the intelligence collection specialists have to know it exists. If the targeting function described above learns that a country has a radar that operates in a certain frequency range, the first step is to use a sensitive receiver, with one or more antennas that listen in every direction, to find an area where such a radar is operating. Once the radar is known to be in the area, the next step is to find its location.

If operators know the probable frequencies of transmissions of interest, they may use a set of receivers preset to the frequencies of interest. A transmitter's spectrum plots frequency (horizontal axis) against the power (vertical axis) produced at the transmitter, before any filtering of signals that do not add to the information being transmitted. Received energy on a particular frequency may start a recorder and alert a human to listen to the signals if they are intelligible (i.e., COMINT). If the frequency is not known, the operators may look for power on primary or sideband frequencies using a spectrum analyzer; information from the spectrum analyzer is then used to tune receivers to the signals of interest. For example, in a simplified spectrum, the actual information might be found at 800 kHz and 1.2 MHz. Real-world transmitters and receivers are usually directional: one can imagine several displays, each connected to a spectrum analyzer fed by a directional antenna aimed in a known direction.

Spread-spectrum communications is an electronic counter-countermeasure (ECCM) technique to defeat searching for particular frequencies. Spectrum analysis can be used in a different ECCM way, to identify frequencies not being jammed or not in use.

The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest (see HF/DF).
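To make the spectrum-search step concrete, here is a small NumPy sketch (the sampling rate, capture length, tone frequencies, and detection threshold are all invented for illustration) that detects the two carriers of the simplified example above by looking for bins that stand well above the noise floor:

    import numpy as np

    fs = 10e6                          # 10 MHz sample rate
    t = np.arange(0, 2e-3, 1 / fs)     # 2 ms capture
    # Invented intercept: carriers at 800 kHz and 1.2 MHz plus noise.
    x = (np.sin(2 * np.pi * 800e3 * t)
         + 0.5 * np.sin(2 * np.pi * 1.2e6 * t)
         + 0.1 * np.random.default_rng(0).standard_normal(t.size))

    spectrum = np.abs(np.fft.rfft(x)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    threshold = 10 * np.median(spectrum)   # crude noise-floor estimate
    for f, p in zip(freqs, spectrum):
        if p > threshold:
            print("energy detected near {:.0f} kHz".format(f / 1e3))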
The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest. (See HF/DF.) Knowing the compass bearing, from a single point, to the transmitter does not locate it. Where the bearings from multiple points, using goniometry, are plotted on a map, the transmitter will be located at the point where the bearings intersect. This is the simplest case; a target may try to confuse listeners by having multiple transmitters, giving the same signal from different locations, switching on and off in a pattern known to their user but apparently random to the listener. Individual directional antennas have to be manually or automatically turned to find the signal direction, which may be too slow when the signal is of short duration. One alternative is the Wullenweber array technique. In this method, several concentric rings of antenna elements simultaneously receive the signal, so that the best bearing will ideally be clearly on a single antenna or a small set. Wullenweber arrays for high-frequency signals are enormous, referred to as "elephant cages" by their users. A more advanced approach is amplitude comparison. An alternative to tunable directional antennas or large omnidirectional arrays such as the Wullenweber is to measure the time of arrival of the signal at multiple points, using GPS or a similar method for precise time synchronization. Receivers can be on ground stations, ships, aircraft, or satellites, giving great flexibility. A still more accurate approach is interferometry. Modern anti-radiation missiles can home in on and attack transmitters; military antennas are rarely a safe distance from the user of the transmitter. When locations are known, usage patterns may emerge, from which inferences may be drawn.
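The bearing-intersection fix described above reduces to elementary geometry. The sketch below intersects two lines of bearing from hypothetical station positions on a flat east/north grid; real DF systems must also cope with measurement error, typically by statistically combining many noisy bearings rather than intersecting exactly two:

```python
import math

def bearings_to_fix(p1, brg1_deg, p2, brg2_deg):
    """Intersect two lines of bearing (degrees clockwise from north)
    observed from known points p1 and p2 on a flat (east, north) grid.
    Returns the estimated transmitter position, or None if near-parallel."""
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None  # bearings are parallel; no usable fix
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two DF stations 10 km apart, both hearing the same transmitter.
print(bearings_to_fix((0.0, 0.0), 45.0, (10_000.0, 0.0), 315.0))
# -> roughly (5000.0, 5000.0): the bearings cross 5 km north of the baseline.
```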
Traffic analysis is the discipline of drawing patterns from information flow among a set of senders and receivers, whether those senders and receivers are designated by location determined through direction finding, by addressee and sender identifications in the message, or even by MASINT techniques for "fingerprinting" transmitters or operators. Message content other than the sender and receiver is not necessary to do traffic analysis, although more information can be helpful. For example, if a certain type of radio is known to be used only by tank units, even if the position is not precisely determined by direction finding, it may be assumed that a tank unit is in the general area of the signal. The owner of the transmitter can assume someone is listening, so might set up tank radios in an area where he wants the other side to believe he has actual tanks. As part of Operation Quicksilver, part of the deception plan for the invasion of Europe at the Battle of Normandy, radio transmissions simulated the headquarters and subordinate units of the fictitious First United States Army Group (FUSAG), commanded by George S. Patton, to make the German defense think that the main invasion was to come at another location. In like manner, fake radio transmissions from Japanese aircraft carriers, before the Battle of Pearl Harbor, were made from Japanese local waters, while the attacking ships moved under strict radio silence. Traffic analysis need not focus on human communications. For example, a sequence of a radar signal, followed by an exchange of targeting data and a confirmation, followed by observation of artillery fire, may identify an automated counterbattery fire system. A radio signal that triggers navigational beacons could be a radio landing aid for an airstrip or helicopter pad that is intended to be low-profile. Patterns do emerge. A radio signal with certain characteristics, originating from a fixed headquarters, may strongly suggest that a particular unit will soon move out of its regular base. The contents of the message need not be known to infer the movement. There is an art as well as a science to traffic analysis. Expert analysts develop a sense for what is real and what is deceptive. Harry Kidder,[20] for example, was one of the star cryptanalysts of World War II, a star hidden behind the secret curtain of SIGINT.[21] Generating an electronic order of battle (EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizational order of battle. EOB covers both COMINT and ELINT.[22] The Defense Intelligence Agency maintains an EOB by location. The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases. For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area. Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability. Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, an intercepted exchange of reports between two stations can be revealing even without the content: such a sequence might show that there are two units in the battlefield, with unit 1 mobile while unit 2 is at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved between points that are about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction finding and radio frequency MASINT could help confirm that the traffic is not deception. The EOB build-up process is divided into several steps. Separation of the intercepted spectrum and the signals intercepted from each sensor must take place in an extremely short period of time, in order to assign the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time-division multiple access (TDMA)). By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and become much more accurate than the basic measurements of a standard direction finding sensor.[23] By calculating larger samples of the sensor's output data in near real-time, together with historical information on signals, better results are achieved. Data fusion correlates data samples from different frequencies from the same sensor, "same" being confirmed by direction finding or radio frequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining whether an emitter is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc.
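As a toy illustration of the separation and data-fusion steps, the following sketch greedily fuses raw intercepts into candidate emitters when frequency and bearing agree within a tolerance. The tolerances, the data, and the flat bearing comparison (no wraparound at 360 degrees) are simplifying assumptions, not a description of any operational fusion system:

```python
from dataclasses import dataclass, field

@dataclass
class Emitter:
    freq_mhz: float
    bearings: list = field(default_factory=list)

def cluster_intercepts(intercepts, freq_tol_mhz=0.05, brg_tol_deg=5.0):
    """Greedy fusion of raw (freq_mhz, bearing_deg) intercepts into candidate
    emitters: two intercepts fuse when frequency and bearing both agree."""
    emitters = []
    for freq, brg in intercepts:
        for e in emitters:
            if (abs(e.freq_mhz - freq) <= freq_tol_mhz
                    and all(abs(b - brg) <= brg_tol_deg for b in e.bearings)):
                e.bearings.append(brg)
                break
        else:
            emitters.append(Emitter(freq, [brg]))
    return emitters

hits = [(42.10, 87.0), (42.12, 85.5), (42.11, 210.0), (61.30, 86.0)]
for e in cluster_intercepts(hits):
    print(f"{e.freq_mhz:.2f} MHz, {len(e.bearings)} intercept(s)")
# Two hits fuse at ~42.1 MHz / ~86 deg; the 210 deg hit on the same frequency
# stays separate -- possibly a second transmitter simulating the first.
```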
Network build-up, or analysis of emitters (communication transmitters) in a target region over a sufficient period of time, enables creation of the communications flows of a battlefield.[24] COMINT (communications intelligence) is a sub-category of signals intelligence that deals with messages or voice information derived from the interception of foreign communications. COMINT is commonly referred to as SIGINT, which can cause confusion when talking about the broader intelligence disciplines. The US Joint Chiefs of Staff defines it as "Technical information and intelligence derived from foreign communications by other than the intended recipients".[18] COMINT, which is defined to be communications among people, can reveal who is transmitting, from where, to whom, on what schedule, and, if the encryption can be defeated, what is being said. A basic COMINT technique is to listen for voice communications, usually over radio but possibly "leaking" from telephones or from wiretaps. If the voice communications are encrypted, traffic analysis may still give information. In the Second World War, for security the United States used Native American volunteer communicators known as code talkers, who used languages such as Navajo, Comanche, and Choctaw, which would be understood by few people, even in the U.S. Even within these uncommon languages, the code talkers used specialized codes, so a "butterfly" might be a specific Japanese aircraft. British forces made limited use of Welsh speakers for the same reason. While modern electronic encryption does away with the need for armies to use obscure languages, it is likely that some groups might use rare dialects that few outside their ethnic group would understand. Morse code interception was once very important, but Morse code telegraphy is now obsolete in the western world, although possibly used by special operations forces. Such forces, however, now have portable cryptographic equipment. Specialists scan radio frequencies for character sequences (e.g., electronic mail) and fax. A given digital communications link can carry thousands or millions of voice communications, especially in developed countries. Without addressing the legality of such actions, the problem of identifying which channel contains which conversation becomes much simpler when the first thing intercepted is the signaling channel that carries information to set up telephone calls. In civilian and much military use, this channel will carry messages in Signaling System 7 protocols. Retrospective analysis of telephone calls can be made from call detail records (CDRs) used for billing the calls.
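Such retrospective analysis is straightforward to sketch: given call detail records, even a simple frequency count of the parties hints at the structure of a network, with no access to call content. The record format and the numbers below are invented for illustration:

```python
from collections import Counter
from itertools import chain

def busiest_parties(cdrs, top=3):
    """Rank phone numbers by how often they appear in call detail records.
    Each CDR is (caller, callee, duration_s); only metadata is used --
    no call content is needed for this kind of traffic analysis."""
    activity = Counter(chain.from_iterable((a, b) for a, b, _ in cdrs))
    return activity.most_common(top)

cdrs = [
    ("555-0100", "555-0199", 42),
    ("555-0101", "555-0199", 15),
    ("555-0102", "555-0199", 63),
    ("555-0100", "555-0101", 8),
]
print(busiest_parties(cdrs))
# "555-0199" is called by everyone: a plausible hub or command node.
```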
More a part of communications security than true intelligence collection, SIGINT units still may have the responsibility of monitoring one's own communications or other electronic emissions, to avoid providing intelligence to the enemy. For example, a security monitor may hear an individual transmitting inappropriate information over an unencrypted radio network, or simply one that is not authorized for the type of information being given. If immediately calling attention to the violation would not create an even greater security risk, the monitor will call out one of the BEADWINDOW codes[25] used by Australia, Canada, New Zealand, the United Kingdom, the United States, and other nations working under their procedures. Each standard BEADWINDOW code (e.g., "BEADWINDOW 2") designates a particular category of inadvertent disclosure. In WWII, for example, the Japanese Navy, by poor practice, identified a key person's movement over a low-security cryptosystem. This made possible Operation Vengeance, the interception and death of the Combined Fleet commander, Admiral Isoroku Yamamoto. Electronic signals intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as "Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources."[18] Signal identification is performed by analyzing the collected parameters of a specific signal, and either matching it to known criteria or recording it as a possible new emitter. ELINT data are usually highly classified, and are protected as such. The data gathered are typically pertinent to the electronics of an opponent's defense network, especially the electronic parts such as radars, surface-to-air missile systems, aircraft, etc. ELINT can be used to detect ships and aircraft by their radar and other electromagnetic radiation; commanders have to make choices between not using radar (EMCON), using it intermittently, or using it and expecting to avoid defenses. ELINT can be collected from ground stations near the opponent's territory, ships off their coast, aircraft near or in their airspace, or by satellite. Combining other sources of information with ELINT allows traffic analysis to be performed on electronic emissions which contain human-encoded messages. The method of analysis differs from COMINT in that any human-encoded message carried in the electronic transmission is not analyzed during ELINT; what is of interest is the type of electronic transmission and its location. For example, during the Battle of the Atlantic in World War II, Ultra COMINT was not always available because Bletchley Park was not always able to read the U-boat Enigma traffic. But high-frequency direction finding ("huff-duff") was still able to detect U-boats by analysis of radio transmissions, and to fix their positions through triangulation of bearings taken by two or more huff-duff systems. The Admiralty was able to use this information to plot courses which took convoys away from high concentrations of U-boats. Other ELINT disciplines include intercepting and analyzing enemy weapons control signals, or the identification, friend or foe responses from transponders in aircraft used to distinguish enemy craft from friendly ones. A very common area of ELINT is intercepting radars and learning their locations and operating procedures. Attacking forces may be able to avoid the coverage of certain radars, or, knowing their characteristics, electronic warfare units may jam radars or send them deceptive signals. Confusing a radar electronically is called a "soft kill", but military units will also send specialized missiles at radars, or bomb them, to get a "hard kill". Some modern air-to-air missiles also have radar homing guidance systems, particularly for use against large airborne radars. Knowing where each surface-to-air missile and anti-aircraft artillery system is and its type means that air raids can be plotted to avoid the most heavily defended areas and to fly on a flight profile which will give the aircraft the best chance of evading ground fire and fighter patrols. It also allows for the jamming or spoofing of the enemy's defense network (see electronic warfare). Good electronic intelligence can be very important to stealth operations; stealth aircraft are not totally undetectable and need to know which areas to avoid. Similarly, conventional aircraft need to know where fixed or semi-mobile air defense systems are so that they can shut them down or fly around them.
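The signal-identification step mentioned above, matching collected parameters to known criteria or recording a possible new emitter, can be illustrated with a toy parametric match. All emitter names, parameter values, and tolerances below are invented; a real emitter library is far richer and, as noted, highly classified:

```python
# Hypothetical emitter library: (name, RF in GHz, PRI in microseconds,
# pulse width in microseconds), with per-parameter tolerances.
LIBRARY = [
    ("Long-range search radar", 3.0, 1000.0, 20.0),
    ("Fire-control radar",      9.4,  250.0,  1.0),
    ("Maritime nav radar",      9.4, 1100.0,  0.8),
]
TOL = (0.2, 50.0, 0.3)  # acceptable deviation per parameter

def identify(rf_ghz, pri_us, pw_us):
    """Match a measured pulse train against known emitter types; anything
    that fits no library entry is logged as a possible new emitter."""
    measured = (rf_ghz, pri_us, pw_us)
    for name, *params in LIBRARY:
        if all(abs(m - p) <= t for m, p, t in zip(measured, params, TOL)):
            return name
    return "UNKNOWN -- candidate new emitter"

print(identify(9.38, 262.0, 1.1))  # -> Fire-control radar
print(identify(5.60, 400.0, 2.0))  # -> UNKNOWN -- candidate new emitter
```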
Electronic support measures (ESM), or electronic surveillance measures, are ELINT techniques using various electronic surveillance systems, but the term is used in the specific context of tactical warfare. ESM give the information needed for electronic attack (EA), such as jamming, or directional bearings (compass angle) to a target in signals intercept, such as in the huff-duff radio direction finding (RDF) systems so critically important during the World War II Battle of the Atlantic. After WWII, RDF, originally applied only in communications, was broadened into systems that also take in ELINT from radar bandwidths and lower-frequency communications systems, giving birth to a family of NATO ESM systems, such as the shipboard US AN/WLR-1[26] through AN/WLR-6 systems and comparable airborne units. EA is also called electronic countermeasures (ECM). ESM provides the information needed for electronic counter-countermeasures (ECCM), such as understanding a spoofing or jamming mode so one can change one's radar characteristics to avoid them. Meaconing[27] is the combined intelligence and electronic-warfare practice of learning the characteristics of enemy navigation aids, such as radio beacons, and retransmitting them with incorrect information. FISINT (foreign instrumentation signals intelligence) is a sub-category of SIGINT, monitoring primarily non-human communication. Foreign instrumentation signals include (but are not limited to) telemetry (TELINT), tracking systems, and video data links. TELINT is an important part of national means of technical verification for arms control. Still at the research level are techniques that can only be described as counter-ELINT, which would be part of a SEAD campaign. It may be informative to compare and contrast counter-ELINT with ECCM. Signals intelligence and measurement and signature intelligence (MASINT) are closely, and sometimes confusingly, related.[28] The signals intelligence disciplines of communications and electronic intelligence focus on the information in those signals themselves, as with COMINT detecting the speech in a voice communication or ELINT measuring the frequency, pulse repetition rate, and other characteristics of a radar. MASINT also works with collected signals, but is more of an analysis discipline. There are, however, unique MASINT sensors, typically working in different regions or domains of the electromagnetic spectrum, such as infrared or magnetic fields. While NSA and other agencies have MASINT groups, the Central MASINT Office is in the Defense Intelligence Agency (DIA). Where COMINT and ELINT focus on the intentionally transmitted part of the signal, MASINT focuses on unintentionally transmitted information. For example, a given radar antenna will have sidelobes emanating from directions other than that in which the main antenna is aimed. The RADINT (radar intelligence) discipline involves learning to recognize a radar both by its primary signal, captured by ELINT, and by its sidelobes, perhaps captured by the main ELINT sensor or, more likely, by a sensor aimed at the sides of the radar antenna. MASINT associated with COMINT might involve the detection of common background sounds expected with human voice communications.
For example, if a given radio signal comes from a radio used in a tank, and the interceptor does not hear the expected engine noise, or hears a higher voice frequency than the voice modulation usually uses, then even though the voice conversation is meaningful, MASINT might suggest the signal is a deception and does not come from a real tank. See HF/DF for a discussion of SIGINT-captured information with a MASINT flavor, such as determining the frequency to which a receiver is tuned by detecting the frequency of the beat frequency oscillator of the superheterodyne receiver. Since the invention of the radio, the international consensus has been that radio waves are no one's property, and thus the interception itself is not illegal.[29] There can, however, be national laws on who is allowed to collect, store, and process radio traffic, and for what purposes. Monitoring traffic in cables (i.e., telephone and Internet) is far more controversial, since it usually requires physical access to the cable, thereby violating ownership and the expectation of privacy.[citation needed]
https://en.wikipedia.org/wiki/Signals_intelligence
Crash-only software is a computer program that handles failures by simply restarting, without attempting any sophisticated recovery.[1] Correctly written components of crash-only software can microreboot to a known-good state without the help of a user. Since failure-handling and normal startup use the same methods, bugs in the failure-handling code are exercised routinely and are therefore more likely to be noticed, except when there are leftover artifacts, such as data corruption from a severe failure, that do not occur during normal startup.[citation needed] Crash-only software also has benefits for end users. All too often, applications do not save their data and settings while running, only at the end of their use. For example, word processors usually save settings when they are closed. A crash-only application is designed to save all changed user settings soon after they are changed, so that the persistent state matches that of the running machine. No matter how an application terminates (be it a clean close or the sudden failure of a laptop battery), the state will persist.
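A minimal sketch of this save-on-change discipline in Python, using an atomic rename so that a crash at any instant leaves either the old settings file or the new one, never a half-written mix; the file name and the JSON format are illustrative choices, not part of any particular crash-only system:

```python
import json, os, tempfile

SETTINGS_PATH = "settings.json"  # illustrative path

def save_settings(settings: dict) -> None:
    """Persist settings the moment they change: write to a temporary file,
    force it to stable storage, then rename it over the old file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(SETTINGS_PATH) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f)
            f.flush()
            os.fsync(f.fileno())        # force bytes to stable storage
        os.replace(tmp, SETTINGS_PATH)  # atomic on POSIX and modern Windows
    except BaseException:
        os.unlink(tmp)
        raise

def load_settings() -> dict:
    """Startup and crash recovery are the same code path: just read the file."""
    try:
        with open(SETTINGS_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first run: fall back to defaults

settings = load_settings()
settings["font_size"] = 14
save_settings(settings)  # safe to kill the process at any time after this
```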
https://en.wikipedia.org/wiki/Crash-only_software
A safety-critical system[2] or life-critical system is a system whose failure or malfunction may result in death or serious injury to people, loss or severe damage to equipment or property, or environmental harm.[3][4] A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved.[5] Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury, or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom.[6] Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10⁹) hours of operation.[7][8] Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.[9] Several reliability regimes for safety-critical systems exist. Software engineering for safety-critical systems is particularly difficult. Three aspects can be applied to aid the engineering of software for life-critical systems. First is process engineering and management. Second is selecting the appropriate tools and environment for the system; this allows the system developer to effectively test the system by emulation and observe its effectiveness. Third is addressing any legal and regulatory requirements, such as Federal Aviation Administration requirements for aviation. Setting a standard under which a system is required to be developed forces designers to adhere to its requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry in general (IEC 61508), and for the automotive (ISO 26262), medical (IEC 62304), and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify, and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements.[12] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors. The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients) and life support (which is for stabilizing patients).
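The arithmetic behind such reliability targets can be illustrated with a toy fault-tree calculation. The failure rate and check interval below are invented for illustration and are not drawn from any certified design; they show only how an AND gate over independent redundant channels changes the numbers:

```python
import math

# Illustrative figures only: a channel with a dangerous failure rate of
# 1e-5/h falls far short of a 1e-9/h target on its own; a fault-tree AND
# gate over two independent, frequently checked channels closes the gap.
lam = 1e-5          # per-channel failure rate, failures/hour (assumed)
check_interval = 1  # hours between integrity checks (assumed)

p_channel = 1 - math.exp(-lam * check_interval)  # P(fail between checks)
p_both = p_channel ** 2                          # AND gate: both must fail
rate_system = p_both / check_interval            # back to failures/hour

print(f"single channel: {lam:.1e}/h   redundant pair: {rate_system:.1e}/h")
# ~1e-10/h for the pair, inside a 1e-9/h budget -- but only if the two
# channels cannot fail from a common cause, which FMEA must examine.
```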
https://en.wikipedia.org/wiki/Safety-critical_system
PGP Virtual Disk is a disk encryption system that allows one to create a virtual encrypted disk within a file. Older versions for Windows NT were freeware (for example, bundled with PGP v6.0.2i, and with some of the CKT builds of PGP). These are still available for download, but are no longer maintained. Today, PGP Virtual Disk is available as part of the PGP Desktop product family, running on Windows 2000/XP/Vista and Mac OS X.
https://en.wikipedia.org/wiki/PGPDisk
Low Orbit Ion Cannon (LOIC) is an open-source network stress testing and denial-of-service attack application written in C#. LOIC was initially developed by Praetox Technologies; however, it was later released into the public domain[2] and is currently available on several open-source platforms.[3][4] LOIC performs a DoS attack (or, when used by multiple individuals, a DDoS attack) on a target site by flooding the server with TCP, UDP, or HTTP packets with the intention of disrupting the service of a particular host. People have used LOIC to join voluntary botnets.[5] The software inspired the creation of an independent JavaScript version called JS LOIC, as well as a LOIC-derived web version called Low Orbit Web Cannon. These enable a DoS attack from a web browser.[6][7][8] Security experts quoted by the BBC indicated that well-written firewall rules can filter out most traffic from DDoS attacks by LOIC, thus preventing the attacks from being fully effective.[9] In at least one instance, filtering out all UDP and ICMP traffic blocked a LOIC attack.[10] Firewall rules of this sort are more likely to be effective when implemented at a point upstream of an application server's Internet uplink, so that the uplink itself is not saturated.[10] LOIC attacks are easily identified in system logs, and an attack can be traced back to the IP addresses used.[11] LOIC was used by Anonymous (a group that spawned from the /b/ board of 4chan) during Project Chanology to attack websites belonging to the Church of Scientology, once more to (successfully) attack the Recording Industry Association of America's website in October 2010,[12] and again during Operation Payback in December 2010 to attack the websites of companies and organizations that opposed WikiLeaks.[13][14] In retaliation for the shutdown of the file-sharing service Megaupload and the arrest of four workers, members of Anonymous launched a DDoS attack on the websites of Universal Music Group (the company responsible for the lawsuit against Megaupload), the United States Department of Justice, the United States Copyright Office, the Federal Bureau of Investigation, the MPAA, Warner Music Group, and the RIAA, as well as HADOPI, all on the afternoon of January 19, 2012, through LOIC.[15] In general, the attacks were intended as retaliation against those whom Anonymous members believed had harmed their digital freedoms.[16] The LOIC application is named after the ion cannon, a fictional weapon from many sci-fi works and video games,[17] and in particular after its namesake from the Command & Conquer series.[18] The artwork used in the application was concept art for Command & Conquer 3: Tiberium Wars. The song "Low Orbit Ion Cannon" on Emperor X's 2017 album Oversleepers International directly references the software.
https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for generating evolving artificial neural networks (a neuroevolution technique) developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying"). On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques and reinforcement learning methods, as of 2006.[1][2] Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure. This yields a situation whereby a trial-and-error process may be necessary in order to determine an appropriate topology. NEAT is an example of a topology and weight evolving artificial neural network (TWEANN), which attempts to learn weight values and an appropriate topology for a neural network simultaneously. In order to encode the network for the GA, NEAT uses a direct encoding scheme, which means every connection and neuron is explicitly represented. This is in contrast to indirect encoding schemes, which define rules that allow the network to be constructed without explicitly representing every connection and neuron, allowing for a more compact representation. The NEAT approach begins with a perceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path or by creating a new connection between (formerly unconnected) neurons. The competing conventions problem arises when there is more than one way of representing information in a phenotype. For example, if a genome contains neurons A, B and C, represented by [A B C], and this genome is crossed with a functionally identical genome ordered [C B A], crossover will yield children that are missing information ([A B A] or [C B C]); in fact, one third of the information has been lost in this example. NEAT solves this problem by tracking the history of genes through a global innovation number, which increases as new genes are added. When a new gene is added, the global innovation number is incremented and assigned to that gene; thus, the higher the number, the more recently the gene was added. Within a particular generation, if an identical mutation occurs in more than one genome, the genes are given the same number; beyond that, however, the innovation number remains unchanged indefinitely. These innovation numbers allow NEAT to match up genes which can be crossed with each other.[1] The original implementation by Ken Stanley is published under the GPL. It integrates with Guile, a GNU Scheme interpreter. This implementation of NEAT is considered the conventional basic starting point for implementations of the NEAT algorithm.
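The role of innovation numbers in crossover can be illustrated with a short sketch. The gene representation below is a simplification invented for this example; real NEAT genomes also carry node genes, enabled/disabled flags, and speciation bookkeeping:

```python
import random

def crossover(parent_a, parent_b, a_is_fitter=True):
    """Align connection genes by innovation number. Matching genes are
    inherited from a random parent; disjoint/excess genes come from the
    fitter parent, so no topology information is silently lost.
    Each gene is (innovation_number, in_node, out_node, weight)."""
    genes_a = {g[0]: g for g in parent_a}
    genes_b = {g[0]: g for g in parent_b}
    fitter = genes_a if a_is_fitter else genes_b
    child = []
    for innov in sorted(set(genes_a) | set(genes_b)):
        if innov in genes_a and innov in genes_b:      # matching gene
            child.append(random.choice((genes_a[innov], genes_b[innov])))
        elif innov in fitter:                          # disjoint/excess gene
            child.append(fitter[innov])
    return child

a = [(1, "in1", "out", 0.5), (2, "in2", "out", -0.3), (4, "in1", "h1", 0.9)]
b = [(1, "in1", "out", 0.7), (3, "in2", "h1", 0.2)]
print(crossover(a, b))
# Gene 1 aligns despite different weights; genes 2 and 4 survive because
# parent A is fitter -- the competing-conventions information loss is avoided.
```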
In 2003, Stanley devised an extension to NEAT, called rtNEAT, that allows evolution to occur in real time rather than through the iteration of generations used by most genetic algorithms. The basic idea is to put the population under constant evaluation with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population; if so, it is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network, which is placed in the population to participate in the ongoing evaluations.
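This replacement rule can be sketched as follows. The dictionary-based individuals, the bottom-20% cutoff, and the noise standing in for mutation are all illustrative assumptions, not details of the published rtNEAT implementation:

```python
import random

def rtneat_tick(population, lifetime=100):
    """One simulation tick: age every individual; when a timer expires,
    discard it only if it ranks near the bottom of the population, and
    breed a replacement from two high-fitness parents."""
    for ind in population:
        ind["age"] += 1
    expired = [i for i in population if i["age"] >= lifetime]
    ranked = sorted(population, key=lambda i: i["fitness"])
    cutoff = len(population) // 5             # bottom 20% are candidates
    for ind in expired:
        if ind in ranked[:cutoff]:
            population.remove(ind)
            p1, p2 = random.sample(ranked[-cutoff:], 2)   # elite parents
            child = {"fitness": (p1["fitness"] + p2["fitness"]) / 2
                                + random.gauss(0.0, 0.05),  # stand-in for
                     "age": 0}                              # real variation
            population.append(child)
        else:
            ind["age"] = 0                    # survivors get a fresh timer

pop = [{"fitness": random.random(), "age": random.randrange(100)}
       for _ in range(20)]
for _ in range(1000):
    rtneat_tick(pop)
print(max(i["fitness"] for i in pop))
```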
The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by some other player, to see how well their training regimens prepared their robots for battle. An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed the concern that unbounded automated growth would generate unnecessary structure. HyperNEAT is specialized to evolve large-scale structures. It was originally based on the CPPN theory and is an active field of research. Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences. The first video game to implement cgNEAT is Galactic Arms Race, a space-shooter game in which unique particle-system weapons are evolved based on player usage statistics.[3] Each particle-system weapon in the game is controlled by an evolved CPPN, similarly to the evolution technique in the NEAT Particles interactive art program. odNEAT is an online and decentralized version of NEAT designed for multi-robot systems.[4] odNEAT is executed onboard the robots themselves during task execution to continuously optimize the parameters and the topology of the artificial neural network-based controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks. The online evolutionary process is implemented according to a physically distributed island model. Each robot optimizes an internal population of candidate solutions (intra-island variation), and two or more robots exchange candidate solutions when they meet (inter-island migration). In this way, each robot is potentially self-sufficient and the evolutionary process capitalizes on the exchange of controllers between multiple robots for faster synthesis of effective controllers.
https://en.wikipedia.org/wiki/NeuroEvolution_of_Augmenting_Topologies