A fat-tailed distribution is a probability distribution that exhibits a large skewness or kurtosis, relative to that of either a normal distribution or an exponential distribution. In common usage, the terms fat-tailed and heavy-tailed are sometimes synonymous; fat-tailed is sometimes also defined as a subset of heavy-tailed. Different research communities favor one or the other largely for historical reasons, and may have differences in the precise definition of either.
Fat-tailed distributions have been empirically encountered in a variety of areas: physics, earth sciences, economics and political science. The class of fat-tailed distributions includes those whose tails decay like a power law, which is a common point of reference in their use in the scientific literature. However, fat-tailed distributions also include other slowly-decaying distributions, such as the log-normal.[1]
The most extreme case of a fat tail is given by a distribution whose tail decays like a power law.
That is, if the complementary cumulative distribution of a random variable X can be expressed as

Pr[X > x] ~ x^(−α) as x → ∞, with α > 0,

then the distribution is said to have a fat tail if α < 2. For such values the variance and the skewness of the tail are mathematically undefined (a special property of the power-law distribution), and hence larger than any normal or exponential distribution. For values of α > 2, the claim of a fat tail is more ambiguous, because in this parameter range the variance, skewness, and kurtosis can be finite, depending on the precise value of α, and thus potentially smaller than a high-variance normal or exponential tail. This ambiguity often leads to disagreements about precisely what is, or is not, a fat-tailed distribution. For k > α − 1, the kth moment is infinite, so for every power-law distribution some moments are undefined.[2]
Compared to fat-tailed distributions, in the normal distribution events that deviate from the mean by five or more standard deviations ("5-sigma events") have lower probability, meaning that in the normal distribution extreme events are less likely than for fat-tailed distributions. Fat-tailed distributions such as the Cauchy distribution (and all other stable distributions with the exception of the normal distribution) have "undefined sigma" (more technically, the variance is undefined).
As a consequence, when data arise from an underlying fat-tailed distribution, shoehorning in the "normal distribution" model of risk – and estimating sigma based (necessarily) on a finite sample size – would understate the true degree of predictive difficulty (and of risk). Many – notably Benoît Mandelbrot as well as Nassim Taleb – have noted this shortcoming of the normal distribution model and have proposed that fat-tailed distributions such as the stable distributions govern asset returns frequently found in finance.[3][4][5]
The Black–Scholes model of option pricing is based on a normal distribution. If the distribution is actually a fat-tailed one, then the model will under-price options that are far out of the money, since a 5- or 7-sigma event is much more likely than the normal distribution would predict.[6]
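As a rough illustration of how much more likely extreme events become under a fat-tailed model, the following sketch (using SciPy; the Student-t distribution with 3 degrees of freedom is an arbitrary stand-in for a fat-tailed return distribution) compares the probability of exceeding five standard deviations under a normal model and under a fat-tailed model:

```python
# Compare the probability of a "5-sigma" event under a normal distribution
# and under a fat-tailed Student-t distribution (3 degrees of freedom).
# Illustrative sketch only; the t distribution is a stand-in for a fat-tailed model.
from scipy import stats

sigmas = 5.0

# Standard normal: P(X > 5 standard deviations)
p_normal = stats.norm.sf(sigmas)

# Student-t with 3 degrees of freedom; its variance is nu/(nu-2), so rescale
# the threshold to express it in that distribution's own standard deviations.
nu = 3
std_t = (nu / (nu - 2)) ** 0.5
p_t = stats.t.sf(sigmas * std_t, df=nu)

print(f"P(X > 5 sigma), normal   : {p_normal:.2e}")
print(f"P(X > 5 sigma), Student-t: {p_t:.2e}")
print(f"ratio (t / normal)       : {p_t / p_normal:.1e}")
```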
In finance, fat tails often occur but are considered undesirable because of the additional risk they imply. For example, an investment strategy may have an expected return, after one year, that is five times its standard deviation. Assuming a normal distribution, the likelihood of its failure (negative return) is less than one in a million; in practice, it may be higher. Normal distributions that emerge in finance generally do so because the factors influencing an asset's value or price are mathematically "well-behaved", and the central limit theorem provides for such a distribution. However, traumatic "real-world" events (such as an oil shock, a large corporate bankruptcy, or an abrupt change in a political situation) are usually not mathematically well-behaved.
Historical examples include the Wall Street crash of 1929, Black Monday (1987), the dot-com bubble, the 2008 financial crisis, the 2010 flash crash, the 2020 stock market crash and the unpegging of some currencies.[7]
Fat tails in market return distributions also have some behavioral origins (investor excessive optimism or pessimism leading to large market moves) and are therefore studied in behavioral finance.
In marketing, the familiar 80-20 rule (e.g. "20% of customers account for 80% of the revenue") is a manifestation of a fat-tail distribution underlying the data.[8]
The "fat tails" are also observed incommodity marketsor in therecord industry, especially inphonographic markets. The probability density function for logarithm of weekly record sales changes is highlyleptokurticand characterized by a narrower and larger maximum, and by a fatter tail than in the normal distribution case. On the other hand, this distribution has only one fat tail associated with an increase in sales due to promotion of the new records that enter the charts.[9]
|
https://en.wikipedia.org/wiki/Fat-tailed_distribution
|
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set – that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.
Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on peoples' incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution[1] before constructing a confidence interval. If desired, the confidence interval for the quantiles (such as the median) can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.[2][3]
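A minimal sketch of this procedure, using synthetic log-normally distributed "income" data and the usual two-standard-error rule on the log scale:

```python
# Sketch: confidence interval for the median of skewed (log-normal-like) data,
# built on the log scale and transformed back.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=1.0, size=200)   # skewed "income" data

logged = np.log(incomes)                                  # transform
m = logged.mean()
se = logged.std(ddof=1) / np.sqrt(len(logged))            # standard error on log scale

lo, hi = m - 2 * se, m + 2 * se                           # approx. 95% CI on log scale
print("CI for median income:", np.exp(lo), "to", np.exp(hi))  # back-transform
```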
Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousands of square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression.[4] For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity.[5] For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with the expected value of Y, resulting in a polynomial regression model, a special case of linear regression.
Another assumption of linear regression is homoscedasticity, that is, the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data are heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables,[5] and linear regression may therefore be applied to these.
Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However, confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal.[5][6]
Transformations can be applied to the response, the predictors, or both; the common forms are:

Y = a + bX (linear)
log(Y) = a + bX (log-linear)
Y = a + b log(X) (linear-log)
log(Y) = a + b log(X) (log-log)
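For instance, the log-log form can be fitted by ordinary least squares after transforming both variables; a small sketch with synthetic data:

```python
# Sketch: fitting the log-log form log(Y) = a + b*log(X) by ordinary least squares.
# Synthetic data; numpy only.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 100.0, size=50)
y = 3.0 * x ** 0.7 * rng.lognormal(0.0, 0.1, size=50)     # Y ~ c * X^b with noise

b, a = np.polyfit(np.log(x), np.log(y), deg=1)            # slope b, intercept a
print(f"estimated exponent b = {b:.2f}, estimated intercept a = {a:.2f}")
```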
Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value.[8][9]
The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box–Cox transformation.
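A minimal sketch of estimating λ by maximum likelihood, assuming SciPy's boxcox function is available:

```python
# Sketch: estimating the Box-Cox parameter lambda from positive, skewed data.
# scipy.stats.boxcox returns the transformed data and the maximum-likelihood lambda.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.lognormal(mean=0.0, sigma=0.8, size=500)       # positive, right-skewed

transformed, lam = stats.boxcox(data)
print(f"estimated lambda = {lam:.2f}")                     # near 0 suggests a log transform
print(f"skewness before = {stats.skew(data):.2f}, after = {stats.skew(transformed):.2f}")
```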
The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as the inverse hyperbolic sine can be meaningfully applied to data that include both positive and negative values[10] (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.[3]
A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior: incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes".
The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X/Y, then if X < Y, the ratio lies in the interval (0, 1), whereas if X > Y, the ratio lies in the half-line (1, ∞), with a ratio of 1 corresponding to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X/Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero relative to the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations).
If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞, ∞).
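In symbols, for 0 < p < 1,

logit(p) = log( p / (1 − p) ),

with inverse p = 1 / (1 + e^(−x)), so values near 0 and 1 are pushed out toward −∞ and +∞ respectively.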
1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality are desired, they can often be induced through one of the power transformations.
2. A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is the reason why researchers had to turn their backs on statistics to solve, for example, authorship attribution problems. Nevertheless, usage of Gaussian statistics is perfectly possible by applying data transformation.[11]
3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test, and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.[12][13]
If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform,[14] and creates data with a perfect fit to a uniform distribution. This approach has a population analogue.
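A small sketch of the rank transform (using SciPy's rankdata; note that rankdata assigns rank 1 to the smallest value, so it is reversed here to match the "kth largest" convention above):

```python
# Sketch: the rank transform -- replace each value by its rank among the observations.
# With no ties, the ranks are a permutation of 1..n (a discrete uniform distribution).
import numpy as np
from scipy import stats

x = np.array([3.2, -1.0, 7.5, 0.4, 2.2])
n = len(x)

ranks_smallest = stats.rankdata(x)          # rank 1 = smallest value
ranks_largest = n + 1 - ranks_smallest      # rank 1 = largest value, as in the text

print(ranks_smallest)                        # [4. 1. 5. 2. 3.]
print(ranks_largest)                         # [2. 5. 1. 4. 3.]
```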
Using the probability integral transform, if X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0, 1].
From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G^(-1)(U) has G as its cumulative distribution function.
Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function, then the random variable G^(-1)(F(X)) has G as its cumulative distribution function.
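A minimal sketch of the combined transformation G^(-1)(F(X)), turning normal data into exponential data:

```python
# Sketch of the two-step transformation G^{-1}(F(X)):
# normal data -> uniform (probability integral transform) -> exponential.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=10_000)           # X with known CDF F

u = stats.norm.cdf(x, loc=2.0, scale=1.5)                 # U = F(X) ~ Uniform(0, 1)
y = stats.expon.ppf(u, scale=1.0)                         # Y = G^{-1}(U) ~ Exponential(1)

print("mean of Y (should be near 1):", y.mean())
print("KS test against Exponential(1):", stats.kstest(y, "expon"))
```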
Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.
A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended because logistic regression or a logit transformation are more appropriate for binomial or non-binomial proportions, respectively, especially due to decreased type-II error.[15][3]
Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = A A'. Then the transformed vector Yi = A^(-1)Xi has the identity matrix as its covariance matrix.
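A minimal numeric sketch of this decorrelation step:

```python
# Sketch: decorrelating multivariate data with the Cholesky factor of its covariance.
import numpy as np

rng = np.random.default_rng(4)
cov = np.array([[2.0, 1.2],
                [1.2, 1.5]])                 # correlated 2-D data
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5_000)

sigma = np.cov(X, rowvar=False)              # estimate covariance from the data
A = np.linalg.cholesky(sigma)                # sigma = A A'
Y = X @ np.linalg.inv(A).T                   # each row y_i = A^{-1} x_i

print(np.cov(Y, rowvar=False).round(2))      # approximately the identity matrix
```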
|
https://en.wikipedia.org/wiki/Data_transformation_(statistics)
|
In cryptography, key wrap constructions are a class of symmetric encryption algorithms designed to encapsulate (encrypt) cryptographic key material.[1] The Key Wrap algorithms are intended for applications such as protecting keys while in untrusted storage or transmitting keys over untrusted communications networks. The constructions are typically built from standard primitives such as block ciphers and cryptographic hash functions.
Key Wrap may be considered as a form of key encapsulation algorithm, although it should not be confused with the more commonly known asymmetric (public-key) key encapsulation algorithms (e.g., PSEC-KEM). Key Wrap algorithms can be used in a similar application: to securely transport a session key by encrypting it under a long-term encryption key.
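As an illustration of this use case, a sketch using the AES Key Wrap construction as exposed by the pyca/cryptography package (assuming that library is available):

```python
# Sketch: wrapping a session key under a long-term key-encryption key (KEK)
# using the AES Key Wrap construction (RFC 3394), via the pyca/cryptography package.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(32)          # long-term 256-bit key-encryption key
session_key = os.urandom(16)  # 128-bit session key to protect

wrapped = aes_key_wrap(kek, session_key)     # ciphertext safe for untrusted storage
unwrapped = aes_key_unwrap(kek, wrapped)     # recover the original key
assert unwrapped == session_key
```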
In the late 1990s, the National Institute of Standards and Technology (NIST) posed the "Key Wrap" problem: to develop secure and efficient cipher-based key encryption algorithms. The resulting algorithms would be formally evaluated by NIST, and eventually approved for use in NIST-certified cryptographic modules. NIST did not precisely define the security goals of the resulting algorithm, and left further refinement to the algorithm developers. Based on the resulting algorithms, the design requirements appear to be (1) confidentiality, (2) integrity protection (authentication), (3) efficiency, (4) use of standard (approved) underlying primitives such as the Advanced Encryption Standard (AES) and the Secure Hash Algorithm (SHA-1), and (5) consideration of additional circumstances (e.g., resilience to operator error, low-quality random number generators). Goals (3) and (5) are particularly important, given that many widely deployed authenticated encryption algorithms (e.g., AES-CCM) are already sufficient to accomplish the remaining goals.
Several constructions have been proposed, including the AES Key Wrap Specification and the ANS X9.102 algorithms (AESKW, TDKW, AKW1 and AKW2) discussed below.
Each of the proposed algorithms can be considered as a form of authenticated encryption algorithm providing confidentiality for highly entropic messages such as cryptographic keys. The AES Key Wrap Specification, AESKW, TDKW, and AKW1 are intended to maintain confidentiality under adaptive chosen ciphertext attacks, while the AKW2 algorithm is designed to be secure only under known-plaintext (or weaker) attacks. (The stated goal of AKW2 is for use in legacy systems and computationally limited devices where use of the other algorithms would be impractical.) AESKW, TDKW and AKW2 also provide the ability to authenticate a cleartext "header", an associated block of data that is not encrypted.
Rogaway and Shrimpton evaluated the design of the ANS X9.102 algorithms with respect to the stated security goals. Among their general findings, they noted the lack of clearly stated design goals for the algorithms, and the absence of security proofs for all constructions.
In their paper, Rogaway and Shrimpton proposed a provable key-wrapping algorithm (SIV, the Synthetic Initialization Vector mode) that authenticates and encrypts an arbitrary string and authenticates, but does not encrypt, associated data which can be bound into the wrapped key. This has been standardized as a new AES mode in RFC 5297.
|
https://en.wikipedia.org/wiki/Key_Wrap
|
A phase-type distribution is a probability distribution constructed by a convolution or mixture of exponential distributions.[1] It results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of a Markov process with one absorbing state. Each of the states of the Markov process represents one of the phases.
It has a discrete-time equivalent – the discrete phase-type distribution.
The set of phase-type distributions is dense in the field of all positive-valued distributions, that is, it can be used to approximate any positive-valued distribution.
Consider a continuous-time Markov process with m + 1 states, where m ≥ 1, such that the states 1, ..., m are transient states and state 0 is an absorbing state. Further, let the process have an initial probability of starting in any of the m + 1 phases given by the probability vector (α0, α), where α0 is a scalar and α is a 1 × m vector.
The continuous phase-type distribution is the distribution of time from the above process's starting until absorption in the absorbing state.
This process can be written in the form of a transition rate matrix

Q = \begin{pmatrix} 0 & \mathbf{0} \\ \mathbf{S}^0 & S \end{pmatrix},

where S is an m × m matrix and S⁰ = −S 1. Here 1 represents an m × 1 column vector with every element being 1.
The distribution of the time X until the process reaches the absorbing state is said to be phase-type distributed and is denoted PH(α, S).
The distribution function of X is given by

F(x) = 1 − α exp(Sx) 1,

and the density function by

f(x) = α exp(Sx) S⁰,

for all x > 0, where exp( · ) is the matrix exponential. It is usually assumed that the probability of the process starting in the absorbing state is zero (i.e. α0 = 0). The moments of the distribution are given by

E[X^n] = (−1)^n n! α S^(−n) 1.

The Laplace transform of the phase-type distribution is given by

E[e^(−sX)] = α0 + α (sI − S)^(−1) S⁰,

where I is the identity matrix.
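A minimal numeric sketch of these formulas for a small two-phase example (the phase parameters here are arbitrary):

```python
# Sketch: evaluating the phase-type CDF and density
#   F(x) = 1 - alpha expm(S x) 1,   f(x) = alpha expm(S x) S0,
# for a 2-phase example, using the matrix exponential.
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])                    # start in phase 1
S = np.array([[-2.0, 2.0],
              [0.0, -3.0]])                     # sub-generator
S0 = -S @ np.ones(2)                            # exit rates to the absorbing state

def ph_cdf(x):
    return 1.0 - alpha @ expm(S * x) @ np.ones(2)

def ph_pdf(x):
    return alpha @ expm(S * x) @ S0

print("F(0.7) =", ph_cdf(0.7))
print("f(0.7) =", ph_pdf(0.7))

# First moment: E[X] = -alpha S^{-1} 1  (the n-th moment is (-1)^n n! alpha S^{-n} 1)
print("E[X]   =", -alpha @ np.linalg.inv(S) @ np.ones(2))
```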
The following probability distributions are all considered special cases of a continuous phase-type distribution, and are described in turn below: the exponential distribution, the hyperexponential distribution (mixture of exponentials), the Erlang distribution, the hypoexponential distribution, mixtures of Erlang distributions, and the Coxian and generalised Coxian distributions.
As the phase-type distribution is dense in the field of all positive-valued distributions, any positive-valued distribution can be approximated arbitrarily well. However, the phase-type is a light-tailed or platykurtic distribution. So the representation of a heavy-tailed or leptokurtic distribution by a phase type is an approximation, even if the precision of the approximation can be as good as we want.
In all the following examples it is assumed that there is no probability mass at zero, that is, α0 = 0.
The simplest non-trivial example of a phase-type distribution is the exponential distribution with parameter λ. The parameters of its phase-type representation are S = −λ and α = 1.
The mixture of exponential or hyperexponential distribution with rates λ1, λ2, ..., λn > 0 can be represented as a phase-type distribution with

α = (α1, α2, ..., αn),

with Σ_{i=1}^{n} αi = 1, and with S the diagonal matrix

S = diag(−λ1, −λ2, ..., −λn).

This mixture of densities of exponentially distributed random variables can be characterized through its density

f(x) = Σ_{i=1}^{n} αi λi e^(−λi x),

or its cumulative distribution function

F(x) = Σ_{i=1}^{n} αi (1 − e^(−λi x)),

with Xi ~ Exp(λi).
The Erlang distribution has two parameters, the shape, an integer k > 0, and the rate λ > 0. This is sometimes denoted E(k, λ). The Erlang distribution can be written in the form of a phase-type distribution by making S a k × k matrix with diagonal elements −λ and super-diagonal elements λ, with the probability of starting in state 1 equal to 1. For example, for E(5, λ),

α = (1, 0, 0, 0, 0)

and

S = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & 0 \\ 0 & -\lambda & \lambda & 0 & 0 \\ 0 & 0 & -\lambda & \lambda & 0 \\ 0 & 0 & 0 & -\lambda & \lambda \\ 0 & 0 & 0 & 0 & -\lambda \end{pmatrix}.
For a given number of phases, the Erlang distribution is the phase type distribution with smallest coefficient of variation.[2]
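A short numeric check of this representation, comparing the matrix-exponential density with the Erlang (gamma) density from SciPy (the values of k and λ are arbitrary):

```python
# Sketch: the Erlang E(5, lam) distribution written as a phase-type distribution,
# checked numerically against scipy.stats.gamma (Erlang = gamma with integer shape).
import numpy as np
from scipy.linalg import expm
from scipy import stats

k, lam = 5, 2.0
alpha = np.zeros(k); alpha[0] = 1.0                       # start in phase 1
S = np.diag([-lam] * k) + np.diag([lam] * (k - 1), k=1)   # -lam diagonal, lam super-diagonal
S0 = -S @ np.ones(k)

x = 1.3
ph_density = alpha @ expm(S * x) @ S0
erlang_density = stats.gamma.pdf(x, a=k, scale=1.0 / lam)
print(ph_density, erlang_density)                          # the two values agree
```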
The hypoexponential distribution is a generalisation of the Erlang distribution obtained by having a different rate for each transition (the non-homogeneous case).
The mixture of two Erlang distributions with parameters E(3, β1), E(3, β2) and (α1, α2) (such that α1 + α2 = 1 and, for each i, αi ≥ 0) can be represented as a phase-type distribution with

α = (α1, 0, 0, α2, 0, 0)

and

S = \begin{pmatrix} -\beta_1 & \beta_1 & 0 & 0 & 0 & 0 \\ 0 & -\beta_1 & \beta_1 & 0 & 0 & 0 \\ 0 & 0 & -\beta_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\beta_2 & \beta_2 & 0 \\ 0 & 0 & 0 & 0 & -\beta_2 & \beta_2 \\ 0 & 0 & 0 & 0 & 0 & -\beta_2 \end{pmatrix},

that is, a block-diagonal matrix with one 3 × 3 Erlang block for each rate.
The Coxian distribution is a generalisation of the Erlang distribution. Instead of only being able to enter the absorbing state from state k, it can be reached from any phase. The phase-type representation is given by

S = \begin{pmatrix} -\lambda_1 & p_1\lambda_1 & 0 & \cdots & 0 \\ 0 & -\lambda_2 & p_2\lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -\lambda_{k-1} & p_{k-1}\lambda_{k-1} \\ 0 & \cdots & 0 & 0 & -\lambda_k \end{pmatrix}

and

α = (1, 0, ..., 0),
where 0 <p1,...,pk-1≤ 1. In the case where allpi= 1 we have the Erlang distribution. The Coxian distribution is extremely important as any acyclic phase-type distribution has an equivalent Coxian representation.
Thegeneralised Coxian distributionrelaxes the condition that requires starting in the first phase.
Similarly to the exponential distribution, the class of PH distributions is closed under minima of independent random variables.
BuTools includes methods for generating samples from phase-type distributed random variables.[3]
Any distribution can be arbitrarily well approximated by a phase-type distribution.[4][5] In practice, however, approximations can be poor when the size of the approximating process is fixed. Approximating a deterministic distribution of time 1 with 10 phases, each of average length 0.1, will have variance 0.1 (because the Erlang distribution has the smallest variance for a given number of phases[2]).
Methods to fit a phase-type distribution to data can be classified as maximum likelihood methods or moment matching methods.[8] Fitting a phase-type distribution to heavy-tailed distributions has been shown to be practical in some situations.[9]
|
https://en.wikipedia.org/wiki/Phase-type_distribution
|
In the theory of algebras over a field, mutation is a construction of a new binary operation related to the multiplication of the algebra. In specific cases the resulting algebra may be referred to as a homotope or an isotope of the original.
Let A be an algebra over a field F with multiplication (not assumed to be associative) denoted by juxtaposition. For an element a of A, define the left a-homotope A(a) to be the algebra with multiplication

x ∗ y = (xa)y.

Similarly define the left (a, b) mutation A(a, b) to be the algebra with multiplication

x ∗ y = (xa)y − (yb)x.
Right homotope and mutation are defined analogously. Since the right (p, q) mutation of A is the left (−q, −p) mutation of the opposite algebra to A, it suffices to study left mutations.[1]
If A is a unital algebra and a is invertible, we refer to the isotope by a.
A Jordan algebra is a commutative algebra satisfying the Jordan identity (xy)(xx) = x(y(xx)). The Jordan triple product is defined by

{a, b, c} = (ab)c + (bc)a − (ac)b.
For y in A the mutation[3] or homotope[4] A_y is defined as the vector space A with multiplication

a ∘ b = {a, y, b},
and if y is invertible this is referred to as an isotope. A homotope of a Jordan algebra is again a Jordan algebra: isotopy defines an equivalence relation.[5] If y is nuclear then the isotope by y is isomorphic to the original.[6]
|
https://en.wikipedia.org/wiki/Mutation_(algebra)
|
The CXFS file system (Clustered XFS) is a proprietary shared disk file system designed by Silicon Graphics (SGI) specifically to be used in a storage area network (SAN) environment.
A significant difference between CXFS and other shared disk file systems is that data and metadata are managed separately from each other. CXFS provides direct access to data via the SAN for all hosts which will act as clients. This means that a client is able to access file data via the fiber connection to the SAN, rather than over a local area network such as Ethernet (as is the case in most other distributed file systems, like NFS). File metadata, however, is managed via a metadata broker. The metadata communication is performed via TCP/IP and Ethernet.
Another difference is that file locks are managed by the metadata broker, rather than the individual host clients. This results in the elimination of a number of problems which typically plague distributed file systems.
Though CXFS supports a heterogeneous environment (including Solaris, Linux, Mac OS X, AIX and Windows), either SGI's IRIX operating system or Linux is required to be installed on the host which acts as the metadata broker.
|
https://en.wikipedia.org/wiki/CXFS
|
In digital communications, a turbo equalizer is a type of receiver used to receive a message corrupted by a communication channel with intersymbol interference (ISI). It approaches the performance of a maximum a posteriori (MAP) receiver via iterative message passing between a soft-in soft-out (SISO) equalizer and a SISO decoder.[1] It is related to turbo codes in that a turbo equalizer may be considered a type of iterative decoder if the channel is viewed as a non-redundant convolutional code. The turbo equalizer is different from a classic turbo-like code, however, in that the 'channel code' adds no redundancy and therefore can only be used to remove non-Gaussian noise.
Turbo codes were invented by Claude Berrou in 1990–1991. In 1993, turbo codes were introduced publicly via a paper listing authors Berrou, Glavieux, and Thitimajshima.[2] In 1995 a novel extension of the turbo principle was applied to an equalizer by Douillard, Jézéquel, and Berrou.[3] In particular, they formulated the ISI receiver problem as a turbo code decoding problem, where the channel is thought of as a rate-1 convolutional code and the error correction coding is the second code. In 1997, Glavieux, Laot, and Labat demonstrated that a linear equalizer could be used in a turbo equalizer framework.[4] This discovery made turbo equalization computationally efficient enough to be applied to a wide range of applications.[5]
Before discussing turbo equalizers, it is necessary to understand the basic receiver in the context of a communication system. This is the topic of this section.
At the transmitter, information bits are encoded. Encoding adds redundancy by mapping the information bits a to a longer bit vector – the code bit vector b. The encoded bits b are then interleaved. Interleaving permutes the order of the code bits b, resulting in bits c. The main reason for doing this is to insulate the information bits from bursty noise. Next, the symbol mapper maps the bits c into complex symbols x. These digital symbols are then converted into analog symbols with a D/A converter. Typically the signal is then up-converted to passband frequencies by mixing it with a carrier signal. This is a necessary step for complex symbols. The signal is then ready to be transmitted through the channel.
At the receiver, the operations performed by the transmitter are reversed to recover â, an estimate of the information bits. The down-converter mixes the signal back down to baseband. The A/D converter then samples the analog signal, making it digital. At this point, y is recovered. The signal y is what would be received if x were transmitted through the digital baseband equivalent of the channel plus noise. The signal is then equalized. The equalizer attempts to unravel the ISI in the received signal to recover the transmitted symbols. It then outputs the bits ĉ associated with those symbols. The vector ĉ may represent hard decisions on the bits or soft decisions. If the equalizer makes soft decisions, it outputs information relating to the probability of the bit being a 0 or a 1. If the equalizer makes hard decisions on the bits, it quantizes the soft bit decisions and outputs either a 0 or a 1. Next, the signal is deinterleaved, which is a simple permutation transformation that undoes the transformation the interleaver executed. Finally, the bits are decoded by the decoder. The decoder estimates â from b̂.
A diagram of the communication system is shown below. In this diagram, the channel is the equivalent baseband channel, meaning that it encompasses the D/A, the up converter, the channel, the down converter, and the A/D.
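A toy sketch of this chain (a repetition code, a random interleaver, BPSK symbols, a two-tap ISI channel with additive noise, and a naive non-iterative hard-decision receiver; all parameters are illustrative):

```python
# Toy sketch of the transmit chain described above: encode -> interleave -> map,
# then a baseband ISI channel with noise, and a naive (non-turbo) hard-decision receiver.
import numpy as np

rng = np.random.default_rng(5)

a = rng.integers(0, 2, size=100)                 # information bits
b = np.repeat(a, 3)                              # rate-1/3 repetition "code"
perm = rng.permutation(len(b))                   # interleaver
c = b[perm]
x = 1.0 - 2.0 * c                                # BPSK symbols: 0 -> +1, 1 -> -1

h = np.array([1.0, 0.5])                         # two-tap ISI channel
y = np.convolve(x, h)[: len(x)] + 0.3 * rng.normal(size=len(x))

c_hat = (y < 0).astype(int)                      # crude symbol-by-symbol slicer
b_hat = np.empty_like(c_hat)
b_hat[perm] = c_hat                              # de-interleave
a_hat = (b_hat.reshape(-1, 3).sum(axis=1) >= 2).astype(int)   # majority-vote decoder

print("bit error rate:", np.mean(a_hat != a))
```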
The block diagram of a communication system employing a turbo equalizer is shown below. The turbo equalizer encompasses the equalizer, the decoder, and the blocks in between.
The difference between a turbo equalizer and a standard equalizer is the feedback loop from the decoder to the equalizer. Due to the structure of the code, the decoder not only estimates the information bits a, but it also discovers new information about the coded bits b. The decoder is therefore able to output extrinsic information, b̃, about the likelihood that a certain code bit stream was transmitted. Extrinsic information is new information that is not derived from information input to the block. This extrinsic information is then mapped back into information about the transmitted symbols x for use in the equalizer. These extrinsic symbol likelihoods, x̃, are fed into the equalizer as a priori symbol probabilities. The equalizer uses this a priori information as well as the input signal y to estimate extrinsic probability information about the transmitted symbols. The a priori information fed to the equalizer is initialized to 0, meaning that the initial estimate â made by the turbo equalizer is identical to the estimate made by the standard receiver. The information x̂ is then mapped back into information about b for use by the decoder. The turbo equalizer repeats this iterative process until a stopping criterion is reached.
In practical turbo equalization implementations, an additional issue needs to be considered. The channel state information (CSI) that the equalizer operates on comes from some channel estimation technique, and is hence unreliable. Firstly, in order to improve the reliability of the CSI, it is desirable to include the channel estimation block in the turbo equalization loop as well, and to perform soft or hard decision-directed channel estimation within each turbo equalization iteration.[6][7] Secondly, incorporating the presence of CSI uncertainty into the turbo equalizer design leads to a more robust approach with significant performance gains in practical scenarios.[8][9]
|
https://en.wikipedia.org/wiki/Turbo_equalizer
|
Random testing is a black-box software testing technique in which programs are tested by generating random, independent inputs. Outputs are compared against the software specification to verify whether each test passes or fails.[1] In the absence of a specification, the exceptions of the language are used instead: if an exception arises during test execution, there is a fault in the program. Random testing is also used as a way to avoid biased testing.
Random testing for hardware was first examined by Melvin Breuer in 1971, and an initial effort to evaluate its effectiveness was made by Pratima and Vishwani Agrawal in 1975.[2]
In software, Duran and Ntafos examined random testing in 1984.[3]
The use of hypothesis testing as a theoretical basis for random testing was described by Howden in Functional Testing and Analysis. The book also contained the development of a simple formula for estimating the number of tests n that are needed to have confidence at least 1 − 1/n in a failure rate of no larger than 1/n. The formula is the lower bound n log n, which indicates the large number of failure-free tests needed to have even modest confidence in a modest failure rate bound.[4]
Consider a C++ function that takes an integer argument and contains a bug that is triggered only by certain input values.
Now the random tests for this function could be {123, 36, -35, 48, 0}. Only the value '-35' triggers the bug. If there is no reference implementation to check the result, the bug could still go unnoticed. However, an assertion could be added to check the results.
A reference implementation is sometimes available, e.g. when implementing a simple algorithm in a much more complex way for better performance. For example, to test an implementation of the Schönhage–Strassen algorithm, the standard "*" operation on integers can be used as a test oracle, as in the sketch below.
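A minimal sketch of this idea in Python, with a hypothetical fast_multiply standing in for the optimized implementation under test and the built-in * operator acting as the reference oracle:

```python
# Sketch of random testing against a reference oracle: a hypothetical optimized
# multiplication routine `fast_multiply` is checked against Python's built-in `*`.
import random

def fast_multiply(a: int, b: int) -> int:
    # Stand-in for the implementation under test (e.g. a fast multiplication algorithm).
    return a * b

random.seed(0)
for _ in range(1_000):
    a = random.randint(-10**6, 10**6)            # random, independent inputs
    b = random.randint(-10**6, 10**6)
    assert fast_multiply(a, b) == a * b, f"bug found for inputs {a}, {b}"
```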
While this example is limited to simple types (for which a simple random generator can be used), tools targeting object-oriented languages typically explore the program to test and find generators (constructors or methods returning objects of that type) and call them using random inputs (either themselves generated the same way or generated using a pseudo-random generator if possible). Such approaches then maintain a pool of randomly generated objects and use a probability for either reusing a generated object or creating a new one.[5]
According to the seminal paper on random testing by D. Hamlet
[..] the technical, mathematical meaning of "random testing" refers to an explicit lack of "system" in the choice of test data, so that there is no correlation among different tests.[1]
Random testing is praised for the following strengths:
The following weaknesses have been described:
Some tools implementing random testing:
Random testing has only a specialized niche in practice, mostly because an effective oracle is seldom available, but also because of difficulties with the operational profile and with generation of pseudorandom input values.[1]
A test oracle is an instrument for verifying whether the outcomes match the program specification or not. An operational profile is knowledge about usage patterns of the program and thus which parts are more important.
For programming languages and platforms which have contracts (e.g. Eiffel, .NET, or various extensions of Java like JML, CoFoJa, ...), contracts act as natural oracles and the approach has been applied successfully.[5] In particular, random testing finds more bugs than manual inspections or user reports (albeit different ones).[9]
|
https://en.wikipedia.org/wiki/Random_testing
|
Sustainable agriculture is farming in sustainable ways, meeting society's present food and textile needs without compromising the ability of current or future generations to meet their needs.[1] It can be based on an understanding of ecosystem services. There are many methods to increase the sustainability of agriculture. When developing agriculture within sustainable food systems, it is important to develop flexible business processes and farming practices.[2] Agriculture has an enormous environmental footprint, playing a significant role in causing climate change (food systems are responsible for one third of anthropogenic greenhouse gas emissions),[3][4] water scarcity, water pollution, land degradation, deforestation and other processes;[5] it is simultaneously causing environmental changes and being impacted by these changes.[6] Sustainable agriculture consists of environment-friendly methods of farming that allow the production of crops or livestock without causing damage to human or natural systems. It involves preventing adverse effects on soil, water, biodiversity, and surrounding or downstream resources, as well as on those working or living on the farm or in neighboring areas. Elements of sustainable agriculture can include permaculture, agroforestry, mixed farming, multiple cropping, and crop rotation.[7]
Developing sustainable food systems contributes to the sustainability of the human population. For example, one of the best ways to mitigate climate change is to create sustainable food systems based on sustainable agriculture. Sustainable agriculture provides a potential solution to enable agricultural systems to feed a growing population within the changing environmental conditions.[6] Besides sustainable farming practices, dietary shifts to sustainable diets are an intertwined way to substantially reduce environmental impacts.[8][9][10][11] Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C).[12]
The term "sustainable agriculture" was defined in 1977 by theUSDAas an integrated system of plant and animal production practices having a site-specific application that will, over the long term:[13]
Yet the idea of having a sustainable relationship with the land has been prevalent in indigenous communities for centuries before the term was formally added to the lexicon.[14]
A common consensus is that sustainable farming is the most realistic way to feed growing populations. In order to successfully feed the population of the planet, farming practices must consider future costs – to both the environment and the communities they fuel.[15] The risk of not being able to provide enough resources for everyone led to the adoption of technology within the sustainability field to increase farm productivity. The ideal end result of this advancement is the ability to feed ever-growing populations across the world. The growing popularity of sustainable agriculture is connected to the wide-reaching fear that the planet's carrying capacity (or planetary boundaries), in terms of the ability to feed humanity, has been reached or even exceeded.[16]
There are several key principles associated with sustainability in agriculture:[17]
It "considers long-term as well as short-term economics because sustainability is readily defined as forever, that is, agricultural environments that are designed to promote endless regeneration".[18]It balances the need for resource conservation with the needs offarmerspursuing theirlivelihood.[19]
It is considered to bereconciliation ecology, accommodating biodiversity within human landscapes.[20]
Oftentimes, the execution of sustainable practices within farming comes through the adoption of technology and environmentally-focusedappropriate technology.
Sustainable agricultural systems are becoming an increasingly important field for AI research and development. By leveraging AI's strengths in areas such as resource optimization, crop health monitoring, and yield prediction, farmers might greatly advance toward more environmentally friendly agricultural practices. AI-controlled irrigation systems optimize water consumption by using sensors to monitor soil moisture levels and weather conditions to distribute water accordingly. This water management technology can lower water consumption by up to 30%.[21] Artificial intelligence (AI) mobile soil analysis enables farmers to enhance soil fertility while decreasing their ecological footprint. This technology permits on-site, real-time evaluations of soil nutrient levels.[22] Agrivoltaics enhances sustainable agriculture by optimizing land use, allowing crops to be grown alongside solar panels, which generate clean energy. This dual-use approach conserves land resources, improves microclimates, and can promote more resilient, eco-friendly farming practices.[23]
Practices that can cause long-term damage to soil include excessive tilling of the soil (leading to erosion) and irrigation without adequate drainage (leading to salinization).[24][25]
The most important factors for a farming site are climate, soil, nutrients and water resources. Of the four, water and soil conservation are the most amenable to human intervention. When farmers grow and harvest crops, they remove some nutrients from the soil. Without replenishment, the land suffers from nutrient depletion and becomes either unusable or suffers from reduced yields. Sustainable agriculture depends on replenishing the soil while minimizing the use or need of non-renewable resources, such as natural gas or mineral ores.
A farm that can "produce perpetually", yet has negative effects on environmental quality elsewhere is not sustainable agriculture. An example of a case in which a global view may be warranted is the application offertilizerormanure, which can improve the productivity of a farm but can pollute nearby rivers and coastal waters (eutrophication).[26]The other extreme can also be undesirable, as the problem of low crop yields due to exhaustion of nutrients in the soil has been related torainforestdestruction.[27]In Asia, the specific amount of land needed for sustainable farming is about 12.5 acres (5.1 ha) which include land for animal fodder, cereal production as a cash crop, and other food crops. In some cases, a small unit of aquaculture is included (AARI-1996).
Nitrates are used widely in farming as fertilizer. Unfortunately, a major environmental problem associated with agriculture is the leaching of nitrates into the environment.[28] Possible sources of nitrates that would, in principle, be available indefinitely, include:
The last option was proposed in the 1970s, but is only gradually becoming feasible.[32][33]Sustainable options for replacing other nutrient inputs such as phosphorus and potassium are more limited.
Other options include long-term crop rotations, returning to natural cycles that annually flood cultivated lands (returning lost nutrients), such as the flooding of the Nile, the long-term use of biochar, and use of crop and livestock landraces that are adapted to less-than-ideal conditions such as pests, drought, or lack of nutrients. Crops that require high levels of soil nutrients can be cultivated in a more sustainable manner with appropriate fertilizer management practices.
Phosphate is a primary component in fertilizer. It is the second most important nutrient for plants after nitrogen,[34] and is often a limiting factor.[35] It is important for sustainable agriculture as it can improve soil fertility and crop yields.[36] Phosphorus is involved in all major metabolic processes including photosynthesis, energy transfer, signal transduction, macromolecular biosynthesis, and respiration. It is needed for root ramification and strength and seed formation, and can increase disease resistance.[37]
Phosphorus is found in the soil in both inorganic and organic forms[34] and makes up approximately 0.05% of soil biomass.[37] Phosphorus fertilizers are the main input of inorganic phosphorus in agricultural soils, and approximately 70%–80% of phosphorus in cultivated soils is inorganic.[38] Long-term use of phosphate-containing chemical fertilizers causes eutrophication and depletes soil microbial life, so people have looked to other sources.[37]
Phosphorus fertilizers are manufactured from rock phosphate.[39] However, rock phosphate is a non-renewable resource and it is being depleted by mining for agricultural use:[36][38] peak phosphorus will occur within the next few hundred years,[40][41][42] or perhaps earlier.[43][44][45]
Potassium is a macronutrient very important for plant development and is commonly sought in fertilizers.[46] This nutrient is essential for agriculture because it improves water retention, nutrient value, yield, taste, color, texture and disease resistance of crops. It is often used in the cultivation of grains, fruits, vegetables, rice, wheat, millets, sugar, corn, soybeans, palm oil and coffee.[47]
Potassium chloride (KCl) is the most widely used source of K in agriculture,[48] accounting for 90% of all potassium produced for agricultural use.[49]
The use of KCl leads to high concentrations of chloride (Cl−) in soil, harming soil health due to the increase in soil salinity, the imbalance in nutrient availability, and this ion's biocidal effect on soil organisms.[7] As a consequence, the development of plants and soil organisms is affected, putting soil biodiversity and agricultural productivity at risk.[50][51][52][53] A sustainable option for replacing KCl is chloride-free fertilizers; their use should take into account plants' nutrition needs and the promotion of soil health.[54][55]
Land degradation is becoming a severe global problem. According to the Intergovernmental Panel on Climate Change: "About a quarter of the Earth's ice-free land area is subject to human-induced degradation (medium confidence). Soil erosion from agricultural fields is estimated to be currently 10 to 20 times (no tillage) to more than 100 times (conventional tillage) higher than the soil formation rate (medium confidence)."[56] Almost half of the land on earth is covered with dry land, which is susceptible to degradation.[57] Over a billion tonnes of southern Africa's soil are being lost to erosion annually, which if continued will result in a halving of crop yields within thirty to fifty years.[58] A comparative study of two adjacent wheat farms – one using sustainable practices and the other conventional methods – found that the sustainable farm had significantly better soil quality, including higher organic matter, microbial populations, and nutrient content, while also showing 22.4% higher net returns due to lower input costs, despite slightly lower yields.[59] Improper soil management is threatening the ability to grow sufficient food. Intensive agriculture reduces the carbon level in soil, impairing soil structure, crop growth and ecosystem functioning,[60] and accelerating climate change.[60] Modification of agricultural practices is a recognized method of carbon sequestration, as soil can act as an effective carbon sink.[61]
Soil management techniques include no-till farming, keyline design and windbreaks to reduce wind erosion, reincorporation of organic matter into the soil, reducing soil salinization, and preventing water run-off.[62][63]
As the global population increases and demand for food increases, there is pressure on land as a resource. In land-use planning and management, considering the impacts of land-use change on factors such as soil erosion can support long-term agricultural sustainability, as shown by a study of Wadi Ziqlab, a dry area in the Middle East where farmers graze livestock and grow olives, vegetables, and grains.[64]
Looking back over the 20th century shows that for people in poverty, following environmentally sound land practices has not always been a viable option due to many complex and challenging life circumstances.[65] Currently, increased land degradation in developing countries may be connected with rural poverty among smallholder farmers when forced into unsustainable agricultural practices out of necessity.[66]
Converting large parts of the land surface to agriculture has severe environmental and health consequences. For example, it leads to a rise in zoonotic disease (like the coronavirus disease 2019) due to the degradation of natural buffers between humans and animals, reducing biodiversity and creating larger groups of genetically similar animals.[67][68]
Land is a finite resource on Earth. Although expansion of agricultural land can decrease biodiversity and contribute to deforestation, the picture is complex; for instance, a study examining the introduction of sheep by Norse settlers (Vikings) to the Faroe Islands of the North Atlantic concluded that, over time, the fine partitioning of land plots contributed more to soil erosion and degradation than grazing itself.[69]
The Food and Agriculture Organization of the United Nations estimates that in coming decades, cropland will continue to be lost to industrial and urban development, along with reclamation of wetlands and conversion of forest to cultivation, resulting in the loss of biodiversity and increased soil erosion.[70]
In modern agriculture, energy is used in on-farm mechanisation, food processing, storage, and transportation processes.[71] It has therefore been found that energy prices are closely linked to food prices.[72] Oil is also used as an input in agricultural chemicals. The International Energy Agency projects higher prices of non-renewable energy resources as a result of fossil fuel resources being depleted. This may therefore decrease global food security unless action is taken to 'decouple' fossil fuel energy from food production, with a move towards 'energy-smart' agricultural systems including renewable energy.[72][73][74] The use of solar-powered irrigation in Pakistan is said to be a closed system for agricultural water irrigation.[75]
The environmental cost of transportation could be avoided if people use local products.[76]
In some areas sufficient rainfall is available for crop growth, but many other areas require irrigation. For irrigation systems to be sustainable, they require proper management (to avoid salinization) and must not use more water from their source than is naturally replenishable. Otherwise, the water source effectively becomes a non-renewable resource. Improvements in water well drilling technology and submersible pumps, combined with the development of drip irrigation and low-pressure pivots, have made it possible to regularly achieve high crop yields in areas where reliance on rainfall alone had previously made successful agriculture unpredictable. However, this progress has come at a price. In many areas, such as the Ogallala Aquifer, the water is being used faster than it can be replenished.
According to the UC Davis Agricultural Sustainability Institute, several steps must be taken to develop drought-resistant farming systems even in "normal" years with average rainfall. These measures include both policy and management actions:[77]
Indicators for sustainable water resource development include the average annual flow of rivers from rainfall, flows from outside a country, the percentage of water coming from outside a country, and gross water withdrawal.[78]It is estimated that agricultural practices consume 69% of the world's fresh water.[79]
Farmers discovered a way to save water using wool in Wyoming and other parts of the United States.[80]
Sustainable agriculture attempts to solve multiple problems with one broad solution. The goal of sustainable agricultural practices is to decrease environmental degradation due to farming while increasing crop – and thus food – output. There are many varying strategies attempting to use sustainable farming practices in order to increase rural economic development within small-scale farming communities. Two of the most popular and opposing strategies within the modern discourse are allowing unrestricted markets to determine food production and deeming food a human right. Neither of these approaches has been proven to work without fail. A promising proposal for rural poverty reduction within agricultural communities is sustainable economic growth; the most important aspect of this policy is to regularly include the poorest farmers in economy-wide development through the stabilization of small-scale agricultural economies.[81]
In 2007, the United Nations reported on "Organic Agriculture and Food Security in Africa", stating that using sustainable agriculture could be a tool in reaching global food security without expanding land usage and while reducing environmental impacts.[82] There has been evidence provided by developing nations from the early 2000s stating that when people in their communities are not factored into the agricultural process, serious harm is done. The social scientist Charles Kellogg has stated that, "In a final effort, exploited people pass their suffering to the land."[82] Sustainable agriculture means the ability to permanently and continuously "feed its constituent populations".[82]
There are many opportunities that can increase farmers' profits, improve communities, and continue sustainable practices. For example, in Uganda, genetically modified organisms were originally illegal. However, with the stress of the banana crisis in Uganda, where banana bacterial wilt had the potential to wipe out 90% of yield, the country decided to explore GMOs as a possible solution.[83] The government issued the National Biotechnology and Biosafety bill, which will allow scientists that are part of the National Banana Research Program to start experimenting with genetically modified organisms.[84] This effort has the potential to help local communities because a significant portion live off the food they grow themselves, and it will be profitable because the yield of their main produce will remain stable.
Not all regions are suitable for agriculture.[85][86] The technological advancement of the past few decades has allowed agriculture to develop in some of these regions. For example, Nepal has built greenhouses to deal with its high altitude and mountainous regions.[34] Greenhouses allow for greater crop production and also use less water, since they are closed systems.[87]
Desalination techniques can turn salt water into fresh water, which allows greater access to water for areas with a limited supply.[88] This allows the irrigation of crops without decreasing natural fresh water sources.[89] While desalination can be a tool to provide water to areas that need it to sustain agriculture, it requires money and resources. Regions of China have been considering large-scale desalination in order to increase access to water, but the current cost of the desalination process makes it impractical.[90]
Women working in sustainable agriculture come from numerous backgrounds, ranging from academia to labour.[91] From 1978 to 2007, the number of women farm operators in the United States tripled.[85] In 2007, women operated 14 percent of farms, compared to five percent in 1978. Much of the growth is due to women farming outside of the "male dominated field of conventional agriculture".[85]
The practice of growing food in the backyards of houses, schools, etc., by families or by communities became widespread in the US during World War I, the Great Depression and World War II, so that at one point 40% of the vegetables in the US were produced this way. The practice became popular again during the COVID-19 pandemic. This method allows food to be grown in a relatively sustainable way and at the same time can make it easier for poor people to obtain food.[92]
Costs, such as environmental problems, not covered in traditionalaccountingsystems (which take into account only the direct costs of production incurred by the farmer) are known asexternalities.[17]
Netting studied sustainability andintensive agricultureinsmallholdersystems through history.[93]
There are several studies incorporating externalities such as ecosystem services, biodiversity, land degradation, andsustainable land managementin economic analysis. These includeThe Economics of Ecosystems and Biodiversitystudy and theEconomics of Land Degradation Initiativewhich seek to establish an economic cost-benefit analysis on the practice of sustainable land management and sustainable agriculture.
Triple bottom line frameworks include social and environmental bottom lines alongside a financial one. A sustainable future can be feasible if growth in material consumption and population is slowed down and if there is a drastic increase in the efficiency of material and energy use. To make that transition, long- and short-term goals will need to be balanced while enhancing equity and quality of life.[94]
The barriers to sustainable agriculture can be broken down and understood through three different dimensions, seen as the core pillars of sustainability: the social, environmental, and economic pillars.[95] The social pillar addresses the conditions into which societies are born, in which they grow, and from which they learn.[95] It deals with shifting away from traditional agricultural practices and moving into new sustainable practices that will create better societies and conditions.[95] The environmental pillar addresses climate change and focuses on agricultural practices that protect the environment for future generations.[95] The economic pillar explores ways in which sustainable agriculture can be practiced while fostering economic growth and stability, with minimal disruptions to livelihoods.[95] All three pillars must be addressed to determine and overcome the barriers preventing sustainable agricultural practices.[95]
Social barriers to sustainable agriculture include cultural shifts, the need for collaboration, incentives, and new legislation.[95]The move from conventional to sustainable agriculture will require significant behavioural changes from both farmers and consumers.[96]Cooperation and collaboration between farmers is necessary to successfully transition to sustainable practices with minimal complications.[96]This can be seen as a challenge for farmers who care about competition and profitability.[97]There must also be an incentive for farmers to change their methods of agriculture.[98]The use of public policy, advertisements, and laws that make sustainable agriculture mandatory or desirable can be utilized to overcome these social barriers.[99]
Environmental barriers prevent the ability to protect and conserve the natural ecosystem.[95] Examples of these barriers include the use of pesticides and the effects of climate change.[95] Pesticides are widely used to combat pests that can devastate production, and they play a significant role in keeping food prices and production costs low.[100] To move toward sustainable agriculture, farmers are encouraged to utilize green pesticides, which cause less harm to both human health and habitats but entail a higher production cost.[101] Climate change is also a rapidly growing barrier, one that farmers have little control over, which can be seen through place-based barriers.[102] These place-based barriers include factors such as weather conditions, topography, and soil quality, which can cause losses in production and result in reluctance to switch from conventional practices.[102] Many environmental benefits are also not visible or immediately evident.[103] Significant changes such as lower rates of soil and nutrient loss, improved soil structure, and higher levels of beneficial microorganisms take time.[103] In conventional agriculture, the benefits are easily visible, with no weeds, pests, etc., but the long-term costs to the soil and surrounding ecosystems are hidden and "externalized".[103] Conventional agricultural practices since the evolution of technology have caused significant damage to the environment through biodiversity loss, disrupted ecosystems, and poor water quality, among other harms.[98]
The economic obstacles to implementing sustainable agricultural practices include low financial return/profitability, lack of financial incentives, and negligible capital investments.[104]Financial incentives and circumstances play a large role in whether sustainable practices will be adopted.[95][104]The human and material capital required to shift to sustainable methods of agriculture requires training of the workforce and making investments in new technology and products, which comes at a high cost.[95][104]In addition to this, farmers practicing conventional agriculture can mass produce their crops, and therefore maximize their profitability.[95]This would be difficult to do in sustainable agriculture which encourages low production capacity.[95]
The authorJames Howard Kunstlerclaims almost all modern technology is bad and that there cannot be sustainability unless agriculture is done in ancient traditional ways.[105]Efforts toward more sustainable agriculture are supported in the sustainability community, however, these are often viewed only as incremental steps and not as an end.[98]One promising method of encouraging sustainable agriculture is throughlocal farmingandcommunity gardens.[98]Incorporating local produce and agricultural education into schools, communities, and institutions can promote the consumption of freshly grown produce which will drive consumer demand.[98]
Some foresee a true sustainablesteady state economythat may be very different from today's: greatly reduced energy usage, minimalecological footprint, fewerconsumer packaged goods,local purchasingwithshort food supply chains, littleprocessed foods, more home andcommunity gardens, etc.[106]
There is a debate on the definition of sustainability regarding agriculture. The definition could be characterized by two different approaches:an ecocentric approach and a technocentric approach.[107]The ecocentric approach emphasizes no- or low-growth levels of human development, and focuses onorganicandbiodynamic farmingtechniques with the goal of changing consumption patterns, and resource allocation and usage. The technocentric approach argues thatsustainabilitycan be attained through a variety of strategies, from the view that state-led modification of the industrial system like conservation-oriented farming systems should be implemented, to the argument thatbiotechnologyis the best way to meet the increasing demand for food.[107]
One can look at the topic of sustainable agriculture through two different lenses:multifunctional agricultureandecosystem services.[108]Both of these approaches are similar, but look at the function of agriculture differently. Those that employ the multifunctional agriculture philosophy focus on farm-centered approaches, and define function as being the outputs of agricultural activity.[108]The central argument of multifunctionality is that agriculture is a multifunctional enterprise with other functions aside from the production of food and fiber. These functions include renewable resource management, landscape conservation and biodiversity.[109]The ecosystem service-centered approach posits that individuals and society as a whole receive benefits fromecosystems, which are called "ecosystem services".[108][110]In sustainable agriculture, the services that ecosystems provide includepollination,soil formation, andnutrient cycling, all of which are necessary functions for the production of food.[111]
It is also claimed sustainable agriculture is best considered as an ecosystem approach to agriculture, calledagroecology.[112]
Most agricultural professionals agree that there is a "moral obligation to pursue [the] goal [of] sustainability."[82]The major debate comes from what system will provide a path to that goal because if an unsustainable method is used on a large scale it will have a massive negative effect on the environment and human population.
Other practices includepolyculture, growing a diverse number of perennial crops in a single field, each of which would grow in separate seasons so as not to compete with each other for natural resources.[113]This system would result in increased resistance to diseases and decreased effects of erosion and loss of nutrients in the soil.Nitrogen fixationfrom legumes, for example, used in conjunction with plants that rely on nitrate from the soil for growth, helps to allow the land to be reused annually. Legumes will grow for a season and replenish the soil with ammonium and nitrate, and the next season other plants can be seeded and grown in the field in preparation for harvest.
Sustainable methods of weed management may help reduce the development of herbicide-resistant weeds.[114]Crop rotationmay also replenish nitrogen iflegumesare used in the rotations and may also use resources more efficiently.[115]
There are also many ways to practice sustainableanimal husbandry. Some of the tools tograzingmanagement include fencing off the grazing area into smaller areas calledpaddocks, lowering stock density, and moving the stock between paddocks frequently.[116]
Within the realm of sustainable agriculture methods, the integration of different farming practices is gaining recognition for its potential to enhance efficiency and reduce environmental impact. For example, research on integrated wheat-fish farming in Egypt has demonstrated increased overall productivity and potential for reduced reliance on external inputs.[117] This approach, aligning with principles seen in other integrated systems like rice-fish culture, underscores the value of diversifying and combining farming activities for greater sustainability.
Increased production is the goal of intensification. Sustainable intensification encompasses agricultural methods that increase production while improving environmental outcomes. The desired outcomes of the farm are achieved without the need for more land cultivation or destruction of natural habitat; system performance is upgraded with no net environmental cost. Sustainable intensification has become a priority for the United Nations. It differs from prior intensification methods by specifically placing importance on broader environmental outcomes. By 2018, it was estimated that 163 million farms across 100 nations used sustainable intensification, covering 453 million ha of agricultural land, equal to 29% of farms worldwide.[118] In light of concerns about food security, human population growth and dwindling land suitable for agriculture, sustainable intensive farming practices are needed to maintain high crop yields while maintaining soil health and ecosystem services. The capacity for ecosystem services to be strong enough to allow a reduction in the use of non-renewable inputs whilst maintaining or boosting yields has been the subject of much debate. Recent work in the irrigated rice production systems of East Asia has suggested that, in relation to pest management at least, promoting the ecosystem service of biological control using nectar plants can reduce the need for insecticides by 70% whilst delivering a 5% yield advantage compared with standard practice.[119]
Vertical farmingis a concept with the potential advantages of year-round production, isolation from pests and diseases, controllable resource recycling and reduced transportation costs.[120]
Water efficiencycan be improved by reducing the need forirrigationand using alternative methods. Such methods include: researching ondroughtresistant crops, monitoring planttranspirationand reducing soilevaporation.[121]
Drought resistant crops have been researched extensively as a means to overcome the issue of water shortage. They are genetically modified so that they can adapt to an environment with little water, which is beneficial as it reduces the need for irrigation and helps conserve water. Despite extensive research, significant results have not been achieved, as most of the successful species have had no overall impact on water conservation. However, some grains, such as rice, have been successfully genetically modified to be drought resistant.[122]
Soil amendmentsinclude using compost from recycling centers. Using compost from yard and kitchen waste uses available resources in the area.
Abstaining from soil tillage before planting and leaving the plant residue after harvesting reduces soil water evaporation; it also serves to prevent soil erosion.[123]
Crop residues left covering the surface of the soil may result in reduced evaporation of water, a lower surface soil temperature, and reduction of wind effects.[123]
A way to make rock phosphate more effective is to add microbial inoculants such as phosphate-solubilizing microorganisms (PSMs) to the soil.[35][86] These solubilize phosphorus already in the soil and use processes like organic acid production and ion exchange reactions to make that phosphorus available to plants.[86] Experimentally, these PSMs have been shown to increase crop growth in terms of shoot height, dry biomass and grain yield.[86]
Phosphorus uptake is even more efficient with the presence ofmycorrhizaein the soil.[124]Mycorrhiza is a type ofmutualistic symbioticassociation between plants and fungi,[124]which are well-equipped to absorb nutrients, including phosphorus, in soil.[125]These fungi can increase nutrient uptake in soil where phosphorus has been fixed by aluminum, calcium, and iron.[125]Mycorrhizae can also release organic acids that solubilize otherwise unavailable phosphorus.[125]
Soil steamingcan be used as an alternative to chemicals for soil sterilization. Different methods are available to induce steam into the soil to kill pests and increase soil health.
Soil solarization is based on the same principle; it is used to increase the temperature of the soil to kill pathogens and pests.[126]
Certain plants can be cropped for use asbiofumigants, "natural"fumigants, releasing pest suppressing compounds when crushed, ploughed into the soil, and covered in plastic for four weeks. Plants in theBrassicaceaefamily release large amounts of toxic compounds such asmethyl isothiocyanates.[127][128]
Relocating current croplands to environmentally moreoptimallocations, whilst allowing ecosystems in then-abandoned areas toregeneratecould substantially decrease the current carbon, biodiversity, and irrigation water footprint of global crop production, with relocation only within national borders also having substantial potential.[129][130]
Sustainability may also involvecrop rotation.[131]Crop rotation andcover cropspreventsoil erosion, by protectingtopsoilfrom wind and water.[34]Effective crop rotation can reduce pest pressure on crops, provides weed control, reduces disease build up, and improves the efficiency ofsoil nutrientsand nutrient cycling.[132]This reduces the need for fertilizers andpesticides.[131]Increasing the diversity of crops by introducing newgenetic resourcescan increase yields by 10 to 15 percent compared to when they are grown in monoculture.[132][133]Perennial cropsreduce the need fortillageand thus help mitigate soil erosion, and may sometimes tolerate drought better, increase water quality and help increase soil organic matter. There are research programs attempting to develop perennial substitutes for existing annual crops, such as replacing wheat with the wild grassThinopyrum intermedium, or possible experimental hybrids of it and wheat.[134]Being able to do all of this without the use of chemicals is one of the main goals of sustainability which is why crop rotation is a very central method of sustainable agriculture.[132]
Sustainable agriculture is not limited to practices within individual plots but can also be considered at the landscape scale. This broader approach is particularly relevant for reconciling biodiversity conservation while maintaining sufficient agricultural production. In this context, two landscape management strategies are traditionally opposed: land sparing and land sharing.[135][136][137]These two strategies have generated intense debate within the scientific community for over a decade, with no clear consensus emerging on their respective effectiveness in maximizing biodiversity conservation and agricultural production.[138]This is particularly pertinent given that effectiveness appears to vary according to the landscape context. Recently, a third alternative has been introduced to the debate: the so-called land blending strategy, an intermediate approach at the interface between the two traditional ones.[139]
Land sparing is a strategy that involves strictly separating land dedicated to agricultural production from land dedicated to conservation.[136][140]This strategy promotes increasing the yield of agricultural land, particularly throughintensive farming. This is done to preserve areas of major biodiversity interest from agricultural expansion. This has been the dominant strategy in developed countries for over 150 years.
Land sharing, also known as "wildlife-friendly agriculture", involves integrating biodiversity into agricultural production areas.[136][140]This approach focuses on the combination of agriculture and biodiversity by reducing the intensification of farming practices. This is illustrated notably by agroecological practices such as agroforestry and mixed crop-livestock systems.
Previously described as a “mixed strategy”, a term considered too ambiguous in ecology, land blending is an intermediate approach between land sparing and land sharing.[139]Unlike these two traditionally opposed strategies, land blending offers a flexible combination of the two approaches, adapted to the specific landscape features. This third strategy has only recently emerged as a credible and viable alternative to the traditional land sparing and land sharing.[141][142]Recent research[139]has highlighted its potential for effectively reconciling biodiversity conservation and agricultural production, while enhancing resilience in the face of change and uncertainty.
Organic agriculture can be defined as:
an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity whilst, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones.[143][144][145][146]
Some claimorganic agriculturemay produce the most sustainable products available for consumers in the US, where no other alternatives exist, although the focus of the organics industry is not sustainability.[131]
In 2018, sales of organic products in the US reached $52.5 billion.[147] According to a USDA survey, two-thirds of Americans consume organic products at least occasionally.[148]
Ecological farming is a concept focused on the environmental aspects of sustainable agriculture. It includes all methods, including organic, which regenerate ecosystem services such as prevention of soil erosion, water infiltration and retention, carbon sequestration in the form of humus, and increased biodiversity.[149] Many techniques are used, including no-till farming, multispecies cover crops, strip cropping, terrace cultivation, shelter belts, and pasture cropping, among others.
There are a plethora of methods and techniques employed in ecological farming, each with its own benefits and implementations that lead to more sustainable agriculture. Crop genetic diversity is one method used to reduce the risks associated with monoculture crops, which can be susceptible to a changing climate.[150] This form of biodiversity makes crops more resilient, increasing food security and enhancing the productivity of the field over the long term.[150] The use of biodigesters is another method, converting organic waste into a combustible gas that can provide several benefits to an ecological farm: it can be used as a fuel source and as fertilizer for crops and fish ponds, and it serves as a method for removing wastes that are rich in organic matter.[151] Because biodigester output can be used as fertilizer, it reduces the amount of industrial fertilizer needed to sustain the yields of the farm. Another technique is aquaculture integration, which combines fish farming with agricultural farming, diverting wastes from animals and crops towards the fish farms to be used up instead of being leached into the environment.[152] Mud from the fish ponds can also be used to fertilize crops.[152]
Organic fertilizers can also be employed in an ecological farm, such as animal and green manure.[153]This allows soil fertility to be improved and well-maintained, leads to reduced costs and increased yields, reduces the usage of non-renewable resources in industrial fertilizers (Nitrogen and Phosphorus), and reduces the environmental pressures that are posed by intensive agricultural systems.[153]Precision Agriculture can also be used, which focuses on efficient removal of pests using non-chemical techniques and minimizes the amount of tilling needed to sustain the farm. An example of a precision machine is the false seedbed tiller, which can remove a great majority of small weeds while only tilling one centimeter deep.[154]This minimized tilling reduces the amount of new weeds that germinate from soil disturbance.[154]Other methods that reduce soil erosion include contour farming, strip cropping, and terrace cultivation.[155]
The challenge for ecological farming science is to achieve a mainstream productive food system that is sustainable or even regenerative. Locating ecological farms close to consumers can reduce food miles and help minimise damage to the biosphere from the combustion engine emissions involved in current food transportation.
Design of the ecological farm is initially constrained by the same limitations as conventional farming: local climate, the soil's physical properties, the budget for beneficial soil supplements, and the available manpower and automation. However, long-term water management using ecological farming methods is likely to conserve and increase water availability for the location, and to require far fewer inputs to maintain fertility.
Certain principles unique to ecological farming need to be considered.
Often thought of as inherently destructive, slash-and-burn or slash-and-char shifting cultivation has been practiced in the Amazon for thousands of years.[164]
Some traditional systems combinepolyculturewith sustainability. In South-East Asia,rice-fish systemsonrice paddieshave raisedfreshwater fishas well as rice, producing an additional product and reducingeutrophicationof neighboring rivers.[165]A variant in Indonesia combines rice, fish, ducks and water fern; the ducks eat the weeds that would otherwise limit rice growth, saving labour and herbicides, while the duck and fish manure substitute for fertilizer.[166]
Raised field agriculture has been recently revived in certain areas of the world, such as theAltiplanoregion inBoliviaandPeru. This has resurged in the form of traditionalWaru Waruraised fields, which create nutrient-rich soil in regions where such soil is scarce. This method is extremely productive and has recently been utilized by indigenous groups in the area and the nearbyAmazon Basinto make use of lands that have been historically hard to cultivate.
Other forms of traditional agriculture include agroforestry, crop rotations, and water harvesting. Water harvesting is one of the largest and most common practices, particularly used in dry areas and seasons. In Ethiopia, over half of GDP and over 80 percent of exports are attributed to agriculture, yet the country is known for its intense droughts and dry periods.[167] Rainwater harvesting is considered to be a low-cost alternative. This type of harvesting collects and stores water from roof tops during high-rain periods for use during droughts.[168] Rainwater harvesting has been a widespread practice helping the country survive, focusing on runoff irrigation, roof water harvesting, and flood spreading.
Native Americans in the United States practiced sustainable agriculture through their subsistence farming techniques. Many tribes grew or harvested their own food from plants that thrived in their local ecosystems. Native American farming practices are specific to local environments and work with natural processes.[169]This is a practice calledPermaculture, and it involves a deep understanding of the local environment.[170]Native American farming techniques also incorporate local biodiversity into many of their practices, which helps the land remain healthy.[171]
Many indigenous tribes incorporated intercropping into their agriculture, a practice in which multiple crops are planted together in the same area. This strategy allows crops to help one another grow through exchanged nutrients, maintained soil moisture, and physical support. The crops paired in intercropping often do not heavily compete for resources, which helps each of them be successful. For example, many tribes utilized intercropping in ways such as the Three Sisters garden, a technique combining corn, beans, and squash. These crops grow in unity as the corn stalk supports the beans, the beans produce nitrogen, and the squash retains soil moisture.[172] Intercropping also provides a natural strategy for pest management and the prevention of weed growth. Intercropping is a natural agricultural practice that often improves the overall health of the soil and plants, increases crop yield, and is sustainable.[170]
One of the most significant aspects of indigenous sustainable agriculture is theirtraditional ecological knowledgeof harvesting. TheAnishinaabetribes follow an ideology known as "the Honorable Harvest". The Honorable Harvest is a set of practices that emphasize the idea that people should "take only what you need and use everything you take."[173]Resources are conserved through this practice because several rules are followed when harvesting a plant. These rules are to never take the first plant, never take more than half of the plants, and never take the last plant.[174]This encourages future growth of the plant and therefore leads to a sustainable use of the plants in the area.
Native Americans practicedagroforestryby managing the forest, animals, and crops together. They also helped promote tree growth through controlled burns andsilviculture. Often, the remaining ash from these burns would be used to fertilize their crops. By improving the conditions of the forest, the local wildlife populations also increased. Native Americans allowed their livestock to graze in the forest, which provided natural fertilizer for the trees as well.[170]
Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity,[175] improving the water cycle,[176] enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil. Practices include recycling as much farm waste as possible and adding composted material from sources outside the farm.[85][177][34][178]
Permacultureis an approach to land management and settlement design that adopts arrangements observed in flourishing naturalecosystems. It includes a set of design principles derived usingwhole-systems thinking. It applies these principles in fields such asregenerative agriculture, town planning,rewilding, andcommunity resilience. The term was coined in 1978 byBill MollisonandDavid Holmgren, who formulated the concept in opposition to modern industrialized methods, instead adopting a more traditional or "natural" approach to agriculture.[179][180][181]
Multiple thinkers in the early and mid-20th century exploredno-dig gardening,no-till farming, and the concept of "permanent agriculture", which were early inspirations for the field of permaculture.[182]Mollison and Holmgren's work from the 1970s and 1980s led to several books, starting withPermaculture Onein 1978, and to the development of the "Permaculture Design Course" which has been one of the main methods of diffusion of permacultural ideas.[183]Starting from a focus on land usage inSouthern Australia, permaculture has since spread in scope to include other regions and other topics, such asappropriate technologyandintentional communitydesign.[184]
Several concepts and practices unify the wide array of approaches labelled as permaculture. Mollison and Holmgren's three foundational ethics and Holmgren's twelve design principles are often cited and restated in permaculture literature.[183]Practices such ascompanion planting, extensive use ofperennialcrops, and designs such as theherb spiralhave been used extensively by permaculturists.
There is limited evidence that polyculture may contribute to sustainable agriculture. A meta-analysis of a number of polycrop studies found that, in certain two-crop systems combining a single cash crop with a cover crop, predator insect biodiversity was higher than in conventional systems at comparable yields.[187]
One approach to sustainability is to develop polyculture systems usingperennial cropvarieties. Such varieties are being developed for rice, wheat, sorghum, barley, and sunflowers. If these can be combined in polyculture with a leguminous cover crop such as alfalfa, fixation of nitrogen will be added to the system, reducing the need for fertilizer and pesticides.[134]
The use of available city space (e.g.,rooftop gardens,community gardens,garden sharing,organopónicos, and other forms ofurban agriculture) may be able to contribute to sustainability.[188]Some consider "guerrilla gardening" an example of sustainability in action[189]– in some cases seeds of edible plants have been sown in local rural areas.[190]
Hydroponics is an alternative to conventional agriculture that creates an ideal environment for optimal growth without using soil as a growing medium. This innovative farming technique can produce higher crop yields without compromising soil health. The most significant drawback of this sustainable farming technique is the cost associated with development.[191]
Certification systems are important to the agricultural community and to consumers, as these standards determine the sustainability of produce. Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C).[12] These standards specify rules that producers, manufacturers and traders need to follow so that the things they do, make, or grow do not hurt people and the environment.[192] They are also known as Voluntary Sustainability Standards (VSS): private standards that require products to meet specific economic, social or environmental sustainability metrics. The requirements can refer to product quality or attributes, but also to production and processing methods, as well as transportation. VSS are mostly designed and marketed by non-governmental organizations (NGOs) or private firms, and they are adopted by actors up and down the value chain, from farmers to retailers. Certifications and labels are used to signal the successful implementation of a VSS. According to the ITC Standards Map, agricultural products are the goods most widely covered by such standards.[193] Around 500 VSS today apply to key exports of many developing countries, such as coffee, tea, bananas, cocoa, palm oil, timber, cotton, and organic agri-foods.[194] VSS have been found to reduce eutrophication, water use, greenhouse gas emissions, and natural ecosystem conversion,[195] and are thus considered a potential tool for sustainable agriculture.
TheUSDAproduces an organic label that is supported by nationalized standards of farmers and facilities. The steps for certification consist of creating an organic system plan, which determines how produce will be tilled, grazed, harvested, stored, and transported. This plan also manages and monitors the substances used around the produce, the maintenance needed to protect the produce, and any nonorganic products that may come in contact with the produce. The organic system plan is then reviewed and inspected by the USDA certifying agent. Once the certification is granted, the produce receives an approval sticker from the USDA and the produce is distributed across the U.S. In order to hold farmers accountable and ensure that Americans are receiving organic produce, these inspections are done at least once a year.[196]This is just one example of sustainable certification systems through produce maintenance.
Sustainable agriculture is a topic in international policy concerning its potential to reduce environmental risks. In 2011, the Commission on Sustainable Agriculture and Climate Change, as part of its recommendations for policymakers on achieving food security in the face of climate change, urged that sustainable agriculture must be integrated into national and international policy.[197]The Commission stressed that increasing weather variability and climate shocks will negatively affect agricultural yields, necessitating early action to drive change in agricultural production systems towards increasing resilience.[197]It also called for dramatically increased investments in sustainable agriculture in the next decade, including in national research and development budgets,land rehabilitation, economic incentives, and infrastructure improvement.[197]
During the 2021 United Nations Climate Change Conference, 45 countries pledged more than 4 billion dollars for the transition to sustainable agriculture. The organization Slow Food expressed concern about the effectiveness of the spending, as it concentrates on technological solutions and reforestation in place of "a holistic agroecology that transforms food from a mass-produced commodity into part of a sustainable system that works within natural boundaries."[198]
Additionally, the summit included negotiations that led to commitments to heavily reduce CO2 emissions, become carbon neutral, end deforestation and reliance on coal, and limit methane emissions.[199][200]
In November, the Climate Action Tracker reported that global efforts are on track for a 2.7 °C temperature increase under current policies, finding that the current targets will not meet global needs, with coal and natural gas consumption primarily responsible for the gap in progress.[201][202] Since then, like-minded developing countries, such as those in Africa,[203] have asked for an addendum to the agreement that would remove the obligation for developing countries to meet the same requirements as wealthy nations.[204]
In May 2020 the European Union published a program named "From Farm to Fork" to make its agriculture more sustainable. The official page of the From Farm to Fork program cites Frans Timmermans, the Executive Vice-President of the European Commission, saying that:
The coronavirus crisis has shown how vulnerable we all are, and how important it is to restore the balance between human activity and nature. At the heart of the Green Deal the Biodiversity and Farm to Fork strategies point to a new and better balance of nature, food systems, and biodiversity; to protect our people's health and well-being, and at the same time to increase the EU's competitiveness and resilience. These strategies are a crucial part of the great transition we are embarking upon.[205]
The program includes the following targets:
Policies from 1930 to 2000
The New Deal implemented policies and programs that promoted sustainable agriculture. Under the Agricultural Adjustment Act of 1933, farmers were provided payments to create a supply management regime that capped production of important crops.[206][207][208] This allowed farmers to focus on growing food rather than competing in a market-based system. The New Deal also provided a monetary incentive for farmers who left some of their fields unsown or ungrazed in order to improve soil conditions.[206] The Cooperative Extension Service was also established, sharing funding responsibilities among the USDA, land-grant universities, and local communities.[207]
From the 1950s to the 1990s, the government switched its stance on agricultural policy in a way that halted sustainable agriculture. The Agricultural Act of 1954 supported farmers with flexible price supports, but only through commodity programs.[209] The Food and Agricultural Act of 1965 introduced new income support payments and continued supply controls but reduced price supports.[209] The Agriculture and Consumer Protection Act of 1973 removed price supports and instead introduced target prices and deficiency payments.[209] It continued to promote commodity crops by lowering interest rates. The Food Security Act of 1985 continued commodity loan programs.[208][209] These policies incentivized profit over sustainability because the US government was encouraging farms to maximize their production output instead of placing checks on it.[209] Farms were effectively turned into food factories as they grew in size and produced more commodity crops like corn, wheat, and cotton. From 1900 to 2002, the number of farms in the US decreased significantly while the average size of a farm rose after 1950.[209][208]
Current Policies
In the United States, the federal Natural Resources Conservation Service (USDA) provides technical and financial assistance for those interested in pursuing natural resource conservation along with production agriculture. Programs like SARE and the China-UK SAIN help promote research on sustainable agriculture practices and provide a framework for agriculture and climate change, respectively.
Future Policies
Currently, there are policies on the table that could move the US agriculture system in a more sustainable direction with the Green New Deal. This policy promotes decentralizing agrarian governance by breaking up the large commodity farms that were created from the 1950s to the 1980s.[206] Decentralized governance within the farming community would allow for more adaptive management at local levels, helping to focus on climate change mitigation, food security, and landscape-scale ecological stewardship.[206] The Green New Deal would invest in public infrastructure to support farmers' transition away from the industrial food regime and their acquisition of agroecological skills.[206] Just as in the New Deal, it would invest in cooperatives and commons to share and redistribute resources like land, food, equipment, research facilities, personnel, and training programs.[206] All of these policies and programs would break down barriers that have prevented sustainable farmers and agriculture from taking hold in the United States.[208]
In 2016, the Chinese government adopted a plan to reduce China's meat consumption by 50% in order to achieve a more sustainable and healthy food system.[210][211]
In 2019, the National Basic Research Program, or Program 973, funded research into Science and Technology Backyards (STBs). STBs are hubs, often created in rural areas with significant rates of small-scale farming, that combine knowledge of traditional practices with new innovations and technology implementation. The purpose of this program was to invest in sustainable farming throughout the country and increase food production with few negative environmental effects. The program was ultimately proven successful, and the study found that the merging of traditional practices and appropriate technology was instrumental in achieving higher crop yields.[212]
In collaboration with the Food and Land Use Coalition (FOLU), the CEEW (Council on Energy, Environment and Water) has given an overview of the current state of sustainable agriculture practices and systems (SAPSs) in India.[213] India is aiming to scale up SAPSs, which represent a vital alternative to conventional, input-intensive agriculture, through policymakers, administrators, philanthropists, and others. These efforts identify 16 SAPSs, including agroforestry, crop rotation, rainwater harvesting, organic farming and natural farming, using agroecology as an investigative lens. The overall conclusion is that sustainable agriculture remains far from mainstream in India. Proposals for several measures to promote SAPSs, including restructured government support and rigorous evidence generation on the benefits and implementation of sustainable farming, are in ongoing progress in Indian agriculture.
An example of Indian initiatives exploring sustainable farming is the Sowgood Foundation, a nonprofit founded by educator Pragati Chaswal.[214] It started by teaching primary school children about sustainable farming, helping them farm on small strips in suburban farmhouses and gardens. Today many government and private schools in Delhi, India, have adopted the Sowgood Foundation curriculum for sustainable farming for their students.
In 2012, the Israeli Ministry of Agriculture found itself at the height of the Israeli commitment to sustainable agriculture policy. A large factor of this policy was funding programs that made sustainable agriculture accessible to smaller Palestinian-Arab communities. The program was meant to create biodiversity, train farmers in sustainable agriculture methods, and hold regular meetings for agriculture stakeholders.[215]
In 1907, the American authorFranklin H. Kingdiscussed in his bookFarmers of Forty Centuriesthe advantages of sustainable agriculture and warned that such practices would be vital to farming in the future.[216]The phrase 'sustainable agriculture' was reportedly coined by the AustralianagronomistGordon McClymont.[217]The term became popular in the late 1980s.[172]There was an international symposium on sustainability in horticulture by the International Society of Horticultural Science at the International Horticultural Congress in Toronto in 2002.[218]At the following conference at Seoul in 2006, the principles were discussed further.[219]
This potential future inability to feed the world's population has been a concern since the English political economistThomas Malthusin the early 1800s, but has become increasingly important recently.[15]Starting at the very end of the twentieth and early twenty-first centuries, this issue became widely discussed in the U.S. because of growing anxieties of a rapidly increasing global population.Agriculturehas long been the biggest industry worldwide and requires significant land, water, and labor inputs. At the turn of the twenty-first century, experts questioned the industry's ability to keep up with population growth.[16]This debate led to concerns over globalfood insecurityand "solving hunger".[220]
This article incorporates text from afree contentwork. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken fromThe State of the World's Biodiversity for Food and Agriculture − In Brief, FAO, FAO.
|
https://en.wikipedia.org/wiki/Sustainable_agriculture
|
Data qualityrefers to the state ofqualitativeorquantitativepieces of information. There are many definitions of data quality, but data is generally considered high quality if it is "fit for [its] intended uses in operations,decision makingandplanning".[1][2][3]Data is deemed of high quality if it correctly represents the real-world construct to which it refers. Apart from these definitions, as the number of data sources increases, the question of internaldata consistencybecomes significant, regardless of fitness for use for any particular external purpose.
People's views on data quality can often be in disagreement, even when discussing the same set of data used for the same purpose. When this is the case,data governanceis used to form agreed upon definitions and standards for data quality. In such cases,data cleansing, includingstandardization, may be required in order to ensure data quality.[4]
Defining data quality is difficult due to the many contexts data are used in, as well as the varying perspectives among end users, producers, and custodians of data.[5]
From a consumer perspective, data quality is:[5]
From a business perspective, data quality is:
From a standards-based perspective, data quality is:
Arguably, in all these cases, "data quality" is a comparison of the actual state of a particular set of data to a desired state, with the desired state being typically referred to as "fit for use," "to specification," "meeting consumer expectations," "free of defect," or "meeting requirements." These expectations, specifications, and requirements are usually defined by one or more individuals or groups, standards organizations, laws and regulations, business policies, or software development policies.[5]
Drilling down further, those expectations, specifications, and requirements are stated in terms of characteristics or dimensions of the data, such as:[5][6][7][8][11]
A systematic scoping review of the literature suggests that data quality dimensions and methods with real world data are not consistent in the literature, and as a result quality assessments are challenging due to the complex and heterogeneous nature of these data.[11]
Before the rise of inexpensive computer data storage, massive mainframe computers were used to maintain name and address data for delivery services, so that mail could be properly routed to its destination. The mainframes used business rules to correct common misspellings and typographical errors in name and address data, as well as to track customers who had moved, died, gone to prison, married, divorced, or experienced other life-changing events. Government agencies began to make postal data available to a few service companies to cross-reference customer data with the National Change of Address registry (NCOA). This technology saved large companies millions of dollars in comparison to manual correction of customer data. Large companies saved on postage, as bills and direct marketing materials made their way to the intended customer more accurately. Initially sold as a service, data quality moved inside the walls of corporations as low-cost and powerful server technology became available.[citation needed]
Companies with an emphasis on marketing often focused their quality efforts on name and address information, but data quality is recognized[by whom?]as an important property of all types of data. Principles of data quality can be applied to supply chain data, transactional data, and nearly every other category of data found. For example, making supply chain data conform to a certain standard has value to an organization by: 1) avoiding overstocking of similar but slightly different stock; 2) avoiding false stock-out; 3) improving the understanding of vendor purchases to negotiate volume discounts; and 4) avoiding logistics costs in stocking and shipping parts across a large organization.[citation needed]
For companies with significant research efforts, data quality can include developingprotocolsfor research methods, reducingmeasurement error,bounds checkingof data,cross tabulation, modeling andoutlierdetection, verifyingdata integrity, etc.[citation needed]
There are a number of theoretical frameworks for understanding data quality. A systems-theoretical approach influenced by American pragmatism expands the definition of data quality to include information quality, and emphasizes the inclusiveness of the fundamental dimensions of accuracy and precision on the basis of the theory of science (Ivanov, 1972). One framework, dubbed "Zero Defect Data" (Hansen, 1991) adapts the principles of statistical process control to data quality. Another framework seeks to integrate the product perspective (conformance to specifications) and theserviceperspective (meeting consumers' expectations) (Kahn et al. 2002). Another framework is based insemioticsto evaluate the quality of the form, meaning and use of the data (Price and Shanks, 2004). One highly theoretical approach analyzes theontologicalnature ofinformation systemsto define data quality rigorously (Wand and Wang, 1996).
A considerable amount of data quality research involves investigating and describing various categories of desirable attributes (or dimensions) of data. Nearly 200 such terms have been identified and there is little agreement in their nature (are these concepts, goals or criteria?), their definitions or measures (Wang et al., 1993). Software engineers may recognize this as a similar problem to "ilities".
MIThas an Information Quality (MITIQ) Program, led by Professor Richard Wang, which produces a large number of publications and hosts a significant international conference in this field (International Conference on Information Quality, ICIQ). This program grew out of the work done by Hansen on the "Zero Defect Data" framework (Hansen, 1991).
In practice, data quality is a concern for professionals involved with a wide range of information systems, ranging fromdata warehousingandbusiness intelligencetocustomer relationship managementandsupply chain management. One industry study estimated the total cost to the U.S. economy of data quality problems at over U.S. $600 billion per annum (Eckerson, 2002). Incorrect data – which includes invalid and outdated information – can originate from different data sources – through data entry, ordata migrationand conversion projects.[12]
In 2002, the USPS and PricewaterhouseCoopers released a report stating that 23.6 percent of all U.S. mail sent is incorrectly addressed.[13]
One reason contact data becomes stale very quickly in the average database is that more than 45 million Americans change their address every year.[14]
In fact, the problem is such a concern that companies are beginning to set up adata governanceteam whose sole role in the corporation is to be responsible for data quality. In some[who?]organizations, this data governance function has been established as part of a larger Regulatory Compliance function - a recognition of the importance of Data/Information Quality to organizations.
Problems with data quality don't only arise fromincorrectdata;inconsistentdata is a problem as well. Eliminatingdata shadow systemsand centralizing data in a warehouse is one of the initiatives a company can take to ensure data consistency.
Enterprises, scientists, and researchers are starting to participate within data curation communities to improve the quality of their common data.[15]
The market is going some way to providing data quality assurance. A number of vendors make tools for analyzing and repairing poor quality datain situ, service providers can clean the data on a contract basis and consultants can advise on fixing processes or systems to avoid data quality problems in the first place. Most data quality tools offer a series of tools for improving data, which may include some or all of the following:
ISO 8000is an international standard for data quality.[16]
Data quality assurance is the process ofdata profilingto discover inconsistencies and other anomalies in the data, as well as performingdata cleansing[17][18]activities (e.g. removingoutliers, missing datainterpolation) to improve the data quality.
These activities can be undertaken as part ofdata warehousingor as part of thedatabase administrationof an existing piece ofapplication software.[19]
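As an illustration only, the following minimal Python sketch shows one way such a profiling-and-cleansing pass could look for a single hypothetical numeric attribute ("reading"); the attribute name, the z-score cut-off, and the mean-imputation strategy are assumptions for the example, not a prescribed method.

from statistics import mean, stdev

def profile_and_clean(records, field="reading", z_limit=3.0):
    # Profile: count how many records are missing the attribute.
    values = [r[field] for r in records if r.get(field) is not None]
    missing = len(records) - len(values)
    mu = mean(values)
    sigma = stdev(values) if len(values) > 1 else 0.0
    cleaned = []
    for r in records:
        v = r.get(field)
        if v is None:
            r = {**r, field: mu}  # naive imputation of the missing value
        elif sigma and abs(v - mu) / sigma > z_limit:
            continue  # drop records whose value is an outlier beyond the z-score limit
        cleaned.append(r)
    return {"rows": len(records), "missing": missing, "kept": len(cleaned)}, cleaned

In practice such logic would usually be delegated to dedicated profiling and cleansing tooling rather than hand-written code; the sketch only makes the two QA activities named above concrete.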
Data quality controlis the process of controlling the usage of data for an application or a process. This process is performed both before and after a DataQuality Assurance(QA) process, which consists of discovery of data inconsistency and correction.
Before:
After the QA process, the following statistics are gathered to guide the Quality Control (QC) process:
The Data QC process uses the information from the QA process to decide to use the data for analysis or in an application or business process. General example: if a Data QC process finds that the data contains too many errors or inconsistencies, then it prevents that data from being used for its intended process which could cause disruption. Specific example: providing invalid measurements from several sensors to the automatic pilot feature on an aircraft could cause it to crash. Thus, establishing a QC process provides data usage protection.[citation needed]
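A minimal sketch of such a QC gate, assuming the QA step has already counted erroneous records; the function name and the 2% threshold are illustrative only and not part of any standard.

def qc_gate(total_records, error_records, max_error_rate=0.02):
    # Reject the batch when the observed error rate exceeds the agreed threshold.
    error_rate = error_records / total_records if total_records else 1.0
    if error_rate > max_error_rate:
        raise ValueError(
            f"Batch rejected: error rate {error_rate:.2%} exceeds {max_error_rate:.2%}")
    return True  # the batch may be released to the downstream process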
Data quality (DQ) is a niche area required for the integrity of data management, covering gaps left by data issues. It is one of the key functions that aid data governance by monitoring data to find exceptions undiscovered by current data management operations. Data quality checks may be defined at the attribute level to give full control over remediation steps.[citation needed]
DQ checks and business rules may easily overlap if an organization is not attentive to its DQ scope. Business teams should understand the DQ scope thoroughly in order to avoid overlap. Data quality checks are redundant if business logic covers the same functionality and fulfills the same purpose as DQ. The DQ scope of an organization should be defined in the DQ strategy and well implemented. Some data quality checks may be translated into business rules after repeated instances of exceptions in the past.[citation needed]
Below are a few areas of data flows that may need perennial DQ checks:
Completeness and precision DQ checks on all data may be performed at the point of entry for each mandatory attribute from each source system. A few attribute values are created well after the initial creation of the transaction; in such cases, administering these checks becomes tricky and should be done immediately after the defined event of that attribute's source occurs and the transaction's other core attribute conditions are met.
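For illustration, a minimal sketch of a point-of-entry completeness check; the attribute names are hypothetical and the "missing or empty string" rule is an assumption about what counts as incomplete.

MANDATORY_ATTRIBUTES = ["customer_id", "transaction_date", "amount"]  # illustrative names

def completeness_exceptions(record):
    # Return the mandatory attributes that are missing or empty for this record.
    return [a for a in MANDATORY_ATTRIBUTES if record.get(a) in (None, "")]

# e.g. completeness_exceptions({"customer_id": "C1", "amount": None})
# -> ["transaction_date", "amount"]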
All data having attributes referring toReference Datain the organization may be validated against the set of well-defined valid values of Reference Data to discover new or discrepant values through thevalidityDQ check. Results may be used to updateReference Dataadministered underMaster Data Management (MDM).
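A minimal sketch of such a validity check; the country-code field and the permitted values are illustrative stand-ins for an organization's actual Reference Data.

VALID_COUNTRY_CODES = {"US", "DE", "IN", "BR"}  # stand-in for the reference data set

def validity_exceptions(records, field="country_code"):
    observed = {r[field] for r in records}
    # New or discrepant values can be routed back to the MDM process for review.
    return observed - VALID_COUNTRY_CODES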
All data sourced from athird partyto organization's internal teams may undergoaccuracy(DQ) check against the third party data. These DQ check results are valuable when administered on data that made multiple hops after the point of entry of that data but before that data becomes authorized or stored for enterprise intelligence.
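A minimal sketch of an accuracy check against third-party data, assuming both sides are keyed by a record identifier and carry a comparable numeric value; the tolerance is illustrative.

def accuracy_exceptions(internal, third_party, tolerance=0.01):
    # internal and third_party map a record identifier to a comparable numeric value.
    exceptions = []
    for key, value in internal.items():
        reference = third_party.get(key)
        if reference is None or abs(value - reference) > tolerance:
            exceptions.append(key)  # missing in the third-party feed or outside tolerance
    return exceptions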
All data columns that refer to Master Data may be validated with a consistency check. A DQ check administered on the data at the point of entry discovers new data for the MDM process, but a DQ check administered after the point of entry discovers failures (not exceptions) of consistency.
As data is transformed, multiple timestamps, and the positions of those timestamps, are captured and may be compared against each other and against an allowed leeway to validate the data's value, decay, and operational significance against a defined SLA (service level agreement). This timeliness DQ check can be utilized to decrease the rate of data value decay and to optimize the policies of the data movement timeline.
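A minimal sketch of a timeliness check, assuming each record carries a creation timestamp and a load timestamp; the four-hour SLA and the field names are illustrative.

from datetime import timedelta

def timeliness_exceptions(records, sla=timedelta(hours=4)):
    # Flag records whose delay between creation and availability breaches the SLA.
    return [r for r in records if r["loaded_at"] - r["created_at"] > sla]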
In an organization, complex logic is usually segregated into simpler logic across multiple processes. Reasonableness DQ checks on such complex logic, which yields a logical result within a specific range of values or static interrelationships (aggregated business rules), may be used to validate complicated but crucial business processes, to discover outliers in the data and its drift from BAU (business as usual) expectations, and to surface possible exceptions that eventually result in data issues. Such a check may be a simple generic aggregation rule applied to a large chunk of data, or it may be complicated logic on a group of attributes of a transaction pertaining to the organization's core business. This DQ check requires a high degree of business knowledge and acumen. Discovery of reasonableness issues may inform policy and strategy changes by the business, by data governance, or by both.
Conformity checks and integrity checks need not be covered in all business needs; they are strictly at the discretion of the database architecture.
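As a rough illustration of how a few of the checks above (completeness, validity against reference data, and timeliness against an SLA) might be expressed, here is a minimal Python sketch; the record layout, the reference value set, and the four-hour SLA are all assumed for the example.

```python
# Illustrative sketches of three DQ checks described above: completeness,
# validity against reference data, and timeliness against an SLA.
# Record layout, reference values and the 4-hour SLA are assumptions.
from datetime import datetime, timedelta

REFERENCE_COUNTRIES = {"US", "GB", "DE", "IN"}   # stand-in for MDM reference data
SLA = timedelta(hours=4)                         # assumed timeliness SLA

def completeness_check(record: dict, mandatory: list[str]) -> list[str]:
    """Return the mandatory attributes that are missing or empty."""
    return [f for f in mandatory if record.get(f) in (None, "")]

def validity_check(record: dict) -> bool:
    """Validate an attribute against the reference-data value set."""
    return record.get("country") in REFERENCE_COUNTRIES

def timeliness_check(record: dict, now: datetime) -> bool:
    """Check that the record arrived within the agreed SLA."""
    return now - record["created_at"] <= SLA

record = {"id": 1, "country": "FR", "amount": 100.0,
          "created_at": datetime(2024, 1, 1, 8, 0)}
print(completeness_check(record, ["id", "country", "amount", "customer"]))
print(validity_check(record))   # False: "FR" is not in the reference set
print(timeliness_check(record, datetime(2024, 1, 1, 11, 0)))  # True
```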
There are many places in the data movement where DQ checks may not be required. For instance, a DQ check for completeness and precision on non-null columns is redundant for data sourced from a database. Similarly, data should be validated for its accuracy with respect to time when it is stitched together across disparate sources; however, that is a business rule and should not be in the DQ scope.[citation needed]
Regretfully, from a software development perspective, DQ is often seen as a nonfunctional requirement, and as such, key data quality checks and processes are not factored into the final software solution. Within healthcare, wearable technologies or Body Area Networks generate large volumes of data.[20] The level of detail required to ensure data quality is extremely high and is often underestimated. This is also true for the vast majority of mHealth apps, EHRs and other health-related software solutions. However, some open source tools exist that examine data quality.[21] The primary reason for this stems from the extra cost involved in adding a higher degree of rigor within the software architecture.
The use of mobile devices in health, or mHealth, creates new challenges to health data security and privacy, in ways that directly affect data quality.[2] mHealth is an increasingly important strategy for delivery of health services in low- and middle-income countries.[22] Mobile phones and tablets are used for collection, reporting, and analysis of data in near real time. However, these mobile devices are commonly used for personal activities as well, leaving them more vulnerable to security risks that could lead to data breaches. Without proper security safeguards, this personal use could jeopardize the quality, security, and confidentiality of health data.[23]
Data quality has become a major focus of public health programs in recent years, especially as demand for accountability increases.[24] Work towards ambitious goals related to the fight against diseases such as AIDS, Tuberculosis, and Malaria must be predicated on strong Monitoring and Evaluation systems that produce quality data related to program implementation.[25] These programs, and program auditors, increasingly seek tools to standardize and streamline the process of determining the quality of data,[26] verify the quality of reported data, and assess the underlying data management and reporting systems for indicators.[27] An example is WHO and MEASURE Evaluation's Data Quality Review Tool.[28] WHO, the Global Fund, GAVI, and MEASURE Evaluation have collaborated to produce a harmonized approach to data quality assurance across different diseases and programs.[29]
There are a number of scientific works devoted to the analysis of data quality in open data sources, such as Wikipedia, Wikidata, DBpedia and others. In the case of Wikipedia, quality analysis may relate to the whole article.[30] Modeling of quality there is carried out by means of various methods. Some of them use machine learning algorithms, including Random Forest,[31] Support Vector Machine,[32] and others. Methods for assessing data quality in Wikidata, DBpedia and other LOD sources differ.[33]
The Electronic Commerce Code Management Association (ECCMA) is a member-based, international not-for-profit association committed to improving data quality through the implementation of international standards. ECCMA is the current project leader for the development of ISO 8000 and ISO 22745, which are the international standards for data quality and the exchange of material and service master data, respectively. ECCMA provides a platform for collaboration amongst subject experts on data quality and data governance around the world to build and maintain global, open standard dictionaries that are used to unambiguously label information. The existence of these dictionaries of labels allows information to be passed from one computer system to another without losing meaning.[34]
|
https://en.wikipedia.org/wiki/Data_quality
|
Alogic gateis a device that performs aBoolean function, alogical operationperformed on one or morebinaryinputs that produces a single binary output. Depending on the context, the term may refer to anideal logic gate, one that has, for instance, zerorise timeand unlimitedfan-out, or it may refer to a non-ideal physical device[1](seeideal and real op-ampsfor comparison).
The primary way of building logic gates usesdiodesortransistorsacting aselectronic switches. Today, most logic gates are made fromMOSFETs(metal–oxide–semiconductorfield-effect transistors).[2]They can also be constructed usingvacuum tubes, electromagneticrelayswithrelay logic,fluidic logic,pneumatic logic,optics,molecules, acoustics,[3]or evenmechanicalor thermal[4]elements.
Logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all ofBoolean logic, and therefore, all of the algorithms andmathematicsthat can be described with Boolean logic.Logic circuitsinclude such devices asmultiplexers,registers,arithmetic logic units(ALUs), andcomputer memory, all the way up through completemicroprocessors,[5]which may contain more than 100 million logic gates.
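As an illustration of this composability, the following Python sketch cascades elementary gate functions into a 1-bit full adder; the gate functions and their composition are purely illustrative.

```python
# Sketch: composing elementary gates into a 1-bit full adder,
# mirroring how physical gates are cascaded.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum, carry_out) for one-bit inputs."""
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", full_adder(a, b, c))
```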
Compound logic gatesAND-OR-Invert(AOI) andOR-AND-Invert(OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[6]
Thebinary number systemwas refined byGottfried Wilhelm Leibniz(published in 1705), influenced by the ancientI Ching's binary system.[7][8]Leibniz established that using the binary system combined the principles ofarithmeticandlogic.
Theanalytical enginedevised byCharles Babbagein 1837 used mechanical logic gates based on gears.[9]
In an 1886 letter,Charles Sanders Peircedescribed how logical operations could be carried out by electrical switching circuits.[10]EarlyElectromechanical computerswere constructed fromswitchesandrelay logicrather than the later innovations ofvacuum tubes(thermionic valves) ortransistors(from which later electronic computers were constructed).Ludwig Wittgensteinintroduced a version of the 16-rowtruth tableas proposition 5.101 ofTractatus Logico-Philosophicus(1921).Walther Bothe, inventor of thecoincidence circuit,[11]got part of the 1954Nobel Prizein physics, for the first modern electronic AND gate in 1924.Konrad Zusedesigned and built electromechanical logic gates for his computerZ1(from 1935 to 1938).
From 1934 to 1936,NECengineerAkira Nakashima,Claude ShannonandVictor Shestakovintroducedswitching circuit theoryin a series of papers showing thattwo-valuedBoolean algebra, which they discovered independently, can describe the operation of switching circuits.[12][13][14][15]Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digitalcomputers. Switching circuit theory became the foundation ofdigital circuitdesign, as it became widely known in the electrical engineering community during and afterWorld War II, with theoretical rigor superseding thead hocmethods that had prevailed previously.[15]
In 1948,BardeenandBrattainpatented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS technology today.[16]In 1957 Frosch and Derick were able to manufacturePMOSandNMOSplanar gates.[17]Later a team at Bell Labs demonstrated a working MOS with PMOS and NMOS gates.[18]Both types were later combined and adapted intocomplementary MOS(CMOS) logic byChih-Tang SahandFrank WanlassatFairchild Semiconductorin 1963.[19]
There are two sets of symbols for elementary logic gates in common use, both defined inANSI/IEEEStd 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings and derives fromUnited States Military StandardMIL-STD-806 of the 1950s and 1960s.[20]It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on ANSI Y32.14 and other early industry standards as later refined by IEEE and IEC, has rectangular outlines for all types of gate and allows representation of a much wider range of devices than is possible with the traditional symbols.[21]The IEC standard,IEC60617-12, has been adopted by other standards, such asEN60617-12:1999 in Europe,BSEN 60617-12:1999 in the United Kingdom, andDINEN 60617-12:1998 in Germany.
The mutual goal of IEEE Std 91-1984 and IEC 617-12 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates. They could be medium-scale circuits such as a 4-bit counter to a large-scale circuit such as a microprocessor.
IEC 617-12 and its renumbered successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[21]These are, however, shown in ANSI/IEEE Std 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." IEC 60617-12 correspondingly contains the note (Section 2.1) "Although non-preferred, the use of other symbols recognized by official national standards, that is distinctive shapes in place of symbols [list of basic gates], shall not be considered to be in contradiction with this standard. Usage of these other symbols in combination to form complex symbols (for example, use as embedded symbols) is discouraged." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another.
In the 1980s, schematics were the predominant method to design bothcircuit boardsand custom ICs known asgate arrays. Today custom ICs and thefield-programmable gate arrayare typically designed withHardware Description Languages(HDL) such asVerilogorVHDL.
By use ofDe Morgan's laws, anANDfunction is identical to anORfunction with negated inputs and outputs. Likewise, anORfunction is identical to anANDfunction with negated inputs and outputs. A NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs.
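These equivalences can be checked exhaustively over the four input combinations, as in this small Python sketch (illustrative only):

```python
# Sketch: exhaustively verifying the De Morgan equivalences stated above
# over all binary input combinations.

def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)
def NOT(a):     return 1 - a

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == (NOT(a) | NOT(b))   # NAND = OR with negated inputs
        assert NOR(a, b) == (NOT(a) & NOT(b))    # NOR  = AND with negated inputs
print("De Morgan equivalences hold for all input combinations")
```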
This leads to an alternative set of symbols for basic gates that use the opposite core symbol (ANDorOR) but with the inputs and outputs negated. Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active high output to an active low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show more clearly a gate's primary logical purpose and the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs are brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
Output comparison of various logic gates:
Charles Sanders Peirce(during 1880–1881) showed thatNOR gates alone(or alternativelyNAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933.[24]The first published proof was byHenry M. Shefferin 1913, so the NAND logical operation is sometimes calledSheffer stroke; thelogical NORis sometimes calledPeirce's arrow.[25]Consequently, these gates are sometimes calleduniversal logic gates.[26]
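The following Python sketch illustrates the universality claim by rebuilding NOT, AND, OR and NOR from a NAND function alone; the construction shown is one standard choice, not the only one.

```python
# Sketch: building NOT, AND, OR and NOR from NAND alone, illustrating
# why NAND is called a universal logic gate.

def NAND(a, b): return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def NOR(a, b): return NOT(OR(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
        assert NOR(a, b) == 1 - (a | b)
print("All gates reproduced using only NAND")
```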
Logic gates can also be used to hold a state, allowing data storage. A storage element can be constructed by connecting several gates in a "latch" circuit. Latching circuitry is used instatic random-access memory. More complicated designs that useclock signalsand that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". Formally, a flip-flop is called abistable circuit, because it has two stable states which it can maintain indefinitely. The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called asequential logicsystem since its output can be influenced by its previous state(s), i.e. by thesequenceof input states. In contrast, the output fromcombinational logicis purely a combination of its present inputs, unaffected by the previous input and output states.
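A minimal Python sketch of a set-reset (SR) latch built from two cross-coupled NOR gate functions shows how cascaded gates can hold a bit; the settling loop stands in for the feedback that a physical circuit resolves continuously.

```python
# Sketch of an SR latch built from two cross-coupled NOR gates,
# iterated until the outputs settle, showing how gates can hold state.

def NOR(a, b): return 1 - (a | b)

def sr_latch(s: int, r: int, q: int, q_bar: int) -> tuple[int, int]:
    """Apply inputs S and R to a NOR latch and return the settled (Q, Q_bar)."""
    for _ in range(4):                 # a few passes are enough to settle
        q_new = NOR(r, q_bar)
        q_bar_new = NOR(s, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = 0, 1                        # initial state: reset
q, q_bar = sr_latch(1, 0, q, q_bar)    # set   -> Q = 1
print("after set:  ", q, q_bar)
q, q_bar = sr_latch(0, 0, q, q_bar)    # hold  -> state retained
print("after hold: ", q, q_bar)
q, q_bar = sr_latch(0, 1, q, q_bar)    # reset -> Q = 0
print("after reset:", q, q_bar)
```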
These logic circuits are used in computermemory. They vary in performance, based on factors ofspeed, complexity, and reliability of storage, and many different types of designs are used based on the application.
Afunctionally completelogic system may be composed ofrelays,valves(vacuum tubes), ortransistors.
Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gainvoltageamplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.
For small-scale logic, designers now use prefabricated logic gates from families of devices such as theTTL7400 seriesbyTexas Instruments, theCMOS4000 seriesbyRCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced byprogrammable logic devices, which allow designers to pack many mixed logic gates into a single integrated circuit. The field-programmable nature ofprogrammable logic devicessuch asFPGAshas reduced the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed.
An important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered.
The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fan-outlimit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speedsynchronous circuits. Additional delay can be caused when many inputs are connected to an output, due to the distributedcapacitanceof all the inputs and wiring and the finite amount of current that each output can provide.
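For a rough sense of how these delays accumulate, the sketch below simply sums assumed per-gate delays along one path; the figures are illustrative and not taken from any datasheet.

```python
# Sketch: the total propagation delay of a cascaded path is approximately
# the sum of the individual gate delays. Delay figures are illustrative.

path_delays_ns = [1.2, 1.2, 0.9, 1.5]     # assumed per-gate delays along one path
total_delay = sum(path_delays_ns)
print(f"approximate path delay: {total_delay:.1f} ns")

max_clock_mhz = 1e3 / total_delay          # rough ceiling for a synchronous design
print(f"rough clock ceiling: {max_clock_mhz:.0f} MHz")
```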
There are severallogic familieswith different characteristics (power consumption, speed, cost, size) such as:RDL(resistor–diode logic),RTL(resistor-transistor logic),DTL(diode–transistor logic),TTL(transistor–transistor logic) and CMOS. There are also sub-variants, e.g. standard CMOS logic vs. advanced types using still CMOS technology, but with some optimizations for avoiding loss of speed due to slower PMOS transistors.
The simplest family of logic gates usesbipolar transistors, and is calledresistor–transistor logic(RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in earlyintegrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes resulting indiode–transistor logic(DTL).Transistor–transistor logic(TTL) then supplanted DTL.
As integrated circuits became more complex, bipolar transistors were replaced with smallerfield-effect transistors(MOSFETs); seePMOSandNMOS. To reduce power consumption still further, most contemporary chip implementations of digital systems now useCMOSlogic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power dissipation.
Other types of logic gates include, but are not limited to:[27]
A three-state logic gate is a type of logic gate that can have three different outputs: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which is strictly binary. These devices are used onbusesof theCPUto allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to amultiplexer, which may be physically distributed over separate devices or plug-in cards.
In electronics, a high output would mean the output is sourcing current from the positive power terminal (positive voltage). A low output would mean the output is sinking current to the negative power terminal (zero voltage). High impedance would mean that the output is effectively disconnected from the circuit.
Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as theHarvard Mark I, were built fromrelay logicgates, using electro-mechanicalrelays. Logic gates can be made usingpneumaticdevices, such as the Sorteberg relay or mechanical logic gates, including on a molecular scale.[29]Various types of fundamental logic gates have been constructed using molecules (molecular logic gates), which are based on chemical inputs and spectroscopic outputs.[30]Logic gates have been made out ofDNA(seeDNA nanotechnology)[31]and used to create a computer called MAYA (seeMAYA-II). Logic gates can be made fromquantum mechanicaleffects, seequantum logic gate.Photonic logicgates usenonlinear opticaleffects.
In principle any method that leads to a gate that isfunctionally complete(for example, either a NOR or a NAND gate) can be used to make any kind of digital logic circuit. Note that the use of 3-state logic for bus systems is not needed, and can be replaced by digital multiplexers, which can be built using only simple logic gates (such as NAND gates, NOR gates, or AND and OR gates).
|
https://en.wikipedia.org/wiki/Logic_gate#Symbols
|
Inlinguisticsand related fields,pragmaticsis the study of howcontextcontributes to meaning. The field of study evaluates how human language is utilized in social interactions, as well as the relationship between the interpreter and the interpreted.[1]Linguists who specialize in pragmatics are calledpragmaticians. The field has been represented since 1986 by theInternational Pragmatics Association(IPrA).
Pragmatics encompasses phenomena includingimplicature,speech acts,relevanceandconversation,[2]as well asnonverbal communication. Theories of pragmatics go hand-in-hand with theories ofsemantics, which studies aspects of meaning, andsyntax, which examines sentence structures, principles, and relationships. The ability to understand another speaker's intended meaning is calledpragmatic competence.[3][4][5]In 1938, Charles Morris first distinguished pragmatics as an independent subfield within semiotics, alongside syntax and semantics.[6]Pragmatics emerged as its own subfield in the 1950s after the pioneering work ofJ. L. AustinandPaul Grice.[7][8]
Pragmatics was a reaction tostructuralistlinguistics as outlined byFerdinand de Saussure. In many cases, it expanded upon his idea that language has an analyzable structure, composed of parts that can be defined in relation to others. Pragmatics first engaged only insynchronicstudy, as opposed to examining the historical development of language. However, it rejected the notion that all meaning comes fromsignsexisting purely in the abstract space oflangue. Meanwhile,historical pragmaticshas also come into being. The field did not gain linguists' attention until the 1970s, when two different schools emerged: the Anglo-American pragmatic thought and the European continental pragmatic thought (also called the perspective view).[9]
Ambiguity refers to when it is difficult to infer meaning without knowing the context, the identity of the speaker or the speaker's intent. For example, the sentence "You have a green light" is ambiguous, as without knowing the context, one could reasonably interpret it as meaning:
Another example of an ambiguous sentence is, "I went to the bank." This is an example of lexical ambiguity, as the word bank can either be in reference to a place where money is kept, or the edge of a river. To understand what the speaker is truly saying, it is a matter of context, which is why it is pragmatically ambiguous as well.[15]
Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed the man by using binoculars, or it could mean that Sherlock observed a man who was holding binoculars (syntactic ambiguity).[16]The meaning of the sentence depends on an understanding of the context and the speaker's intent. As defined in linguistics, a sentence is an abstract entity: a string of words divorced from non-linguistic context, as opposed to anutterance, which is a concrete example of aspeech actin a specific context. The more closely conscious subjects stick to common words, idioms, phrasings, and topics, the more easily others can surmise their meaning; the further they stray from common expressions and topics, the wider the variations in interpretations. That suggests that sentences do not have intrinsic meaning, that there is no meaning associated with a sentence or word, and that either can represent an idea only symbolically.The cat sat on the matis a sentence in English. If someone were to say to someone else, "The cat sat on the mat", the act is itself an utterance. That implies that a sentence, term, expression or word cannot symbolically represent a single true meaning; such meaning is underspecified (which cat sat on which mat?) and potentially ambiguous. By contrast, the meaning of an utterance can be inferred through knowledge of both its linguistic and non-linguistic contexts (which may or may not be sufficient to resolve ambiguity). In mathematics, withBerry's paradox, there arises a similar systematic ambiguity with the word "definable".
The referential uses of language are how signs are used to refer to certain items. A sign is the link or relationship between a signified and a signifier, as defined by de Saussure. The signified is some entity or concept in the world. The signifier represents the signified. An example would be:
The relationship between the two gives the sign meaning. The relationship can be explained further by considering what is meant by "meaning." In pragmatics, there are two different types of meaning to consider:semantic-referential meaningandindexical meaning.[17]Semantic-referential meaning refers to the aspect of meaning, which describes events in the world that are independent of the circumstance they are uttered in. An example would be propositions such as:
In this case, the proposition is describing that Santa Claus eats cookies. The meaning of the proposition does not rely on whether or not Santa Claus is eating cookies at the time of its utterance. Santa Claus could be eating cookies at any time and the meaning of the proposition would remain the same. The meaning is simply describing something that is the case in the world. In contrast, the proposition, "Santa Claus is eating a cookie right now", describes events that are happening at the time the proposition is uttered.
Semantic-referential meaning is also present in meta-semantical statements such as:
If someone were to say that a tiger is a carnivorous animal in one context and a mammal in another, the definition of tiger would still be the same. The meaning of the sign tiger is describing some animal in the world, which does not change in either circumstance.
Indexicalmeaning, on the other hand, is dependent on the context of the utterance and has rules of use. By rules of use, it is meant that indexicals can tell when they are used, but not what they actually mean.
Whom "I" refers to, depends on the context and the person uttering it.
As mentioned, these meanings are brought about through the relationship between the signified and the signifier. One way to define the relationship is by placing signs in two categories:referential indexical signs,also called "shifters", andpure indexical signs.
Referential indexical signs are signs where the meaning shifts depending on the context hence the nickname "shifters." 'I' would be considered a referential indexical sign. The referential aspect of its meaning would be '1st person singular' while the indexical aspect would be the person who is speaking (refer above for definitions of semantic-referential and indexical meaning). Another example would be:
A pure indexical sign does not contribute to the meaning of the propositions at all. It is an example of a "non-referential use of language."
A second way to define the signified and signifier relationship isC.S. Peirce'sPeircean Trichotomy. The components of the trichotomy are the following:
These relationships allow signs to be used to convey intended meaning. If two people were in a room and one of them wanted to refer to a characteristic of a chair in the room he would say "this chair has four legs" instead of "a chair has four legs." The former relies on context (indexical and referential meaning) by referring to a chair specifically in the room at that moment while the latter is independent of the context (semantico-referential meaning), meaning the concept chair.[18]
Referring to things and people is a common feature of conversation, and conversants do so collaboratively. Individuals engaging in discourse utilize pragmatics.[19] In addition, individuals within the scope of discourse cannot help but avoid intuitive use of certain utterances or word choices in an effort to create communicative success.[19] The study of referential language is heavily focused upon definite descriptions and referent accessibility. Theories have been presented for why direct referent descriptions occur in discourse.[20] (In layman's terms: why certain names, places, or individuals involved in, or serving as the topic of, the conversation at hand are repeated more often than one would think necessary.) Four factors are widely accepted for the use of referent language: (i) competition with a possible referent, (ii) salience of the referent in the context of discussion, (iii) an effort for unity of the parties involved, and finally, (iv) a blatant presence of distance from the last referent.[19]
Referential expressions are a form of anaphora.[20] They are also a means of connecting past and present thoughts together to create context for the information at hand. Analyzing the context of a sentence and determining whether or not the use of a referent expression is necessary is highly reliant upon the author's or speaker's discretion, and is correlated strongly with the use of pragmatic competency.[20][19]
Michael Silversteinhas argued that "nonreferential" or "pure" indices do not contribute to an utterance's referential meaning but instead "signal some particular value of one or more contextual variables."[21]Although nonreferential indexes are devoid of semantico-referential meaning, they do encode "pragmatic" meaning.
The sorts of contexts that such indexes can mark are varied. Examples include:
In all of these cases, the semantico-referential meaning of the utterances is unchanged from that of the other possible (but often impermissible) forms, but the pragmatic meaning is vastly different.
J. L. Austinintroduced the concept of theperformative, contrasted in his writing with "constative" (i.e. descriptive) utterances. According to Austin's original formulation, a performative is a type of utterance characterized by two distinctive features:
Examples:
To be performative, an utterance must conform to various conditions involving what Austin callsfelicity. These deal with things like appropriate context and the speaker's authority. For instance, when a couple has been arguing and the husband says to his wife that he accepts her apology even though she has offered nothing approaching an apology, his assertion is infelicitous: because she has made neither expression of regret nor request for forgiveness, there exists none to accept, and thus no act of accepting can possibly happen.
Roman Jakobson, expanding on the work ofKarl Bühler, described six "constitutive factors" of aspeech event, each of which represents the privileging of a corresponding function, and only one of which is the referential (which corresponds to thecontextof the speech event). The six constitutive factors and their corresponding functions are diagrammed below.
The six constitutive factors of a speech event
Addresser --------------------- Addressee
The six functions of language
Emotive ----------------------- Conative
There is considerable overlap between pragmatics and sociolinguistics, since both share an interest in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to be more interested in variations in language within such communities. Influences of philosophy and politics are also present in the field of pragmatics, as the dynamics of societies and oppression are expressed through language.[24]
Pragmatics helps anthropologists relate elements of language to broader social phenomena; it thus pervades the field oflinguistic anthropology. Because pragmatics describes generally the forces in play for a given utterance, it includes the study of power, gender, race, identity, and their interactions with individual speech acts. For example, the study ofcode switchingdirectly relates to pragmatics, since a switch in code effects a shift in pragmatic force.[23]
According toCharles W. Morris, pragmatics tries to understand the relationship between signs and their users, whilesemanticstends to focus on the actual objects or ideas to which a word refers, andsyntax(or "syntactics") examines relationships among signs or symbols. Semantics is the literal meaning of an idea whereas pragmatics is the implied meaning of the given idea.
Speech Act Theory, pioneered byJ. L. Austinand further developed byJohn Searle, centers around the idea of theperformative, a type of utterance that performs the very action it describes. Speech Act Theory's examination ofIllocutionary Actshas many of the same goals as pragmatics, as outlinedabove.
Computational Pragmatics, as defined byVictoria Fromkin, concerns how humans can communicate their intentions to computers with as little ambiguity as possible.[25]That process, integral to the science ofnatural language processing(seen as a sub-discipline ofartificial intelligence), involves providing a computer system with some database of knowledge related to a topic and a series of algorithms, which control how the system responds to incoming data, using contextual knowledge to more accurately approximate natural human language and information processing abilities. Reference resolution, how a computer determines when two objects are different or not, is one of the most important tasks of computational pragmatics.
There has been a great amount of discussion on the boundary between semantics and pragmatics[26]and there are many different formalizations of aspects of pragmatics linked to context dependence. Particularly interesting cases are the discussions on the semantics of indexicals and the problem of referential descriptions, a topic developed after the theories ofKeith Donnellan.[27]A proper logical theory of formal pragmatics has been developed byCarlo Dalla Pozza, according to which it is possible to connect classical semantics (treating propositional contents as true or false) and intuitionistic semantics (dealing with illocutionary forces). The presentation of a formal treatment of pragmatics appears to be a development of the Fregean idea of assertion sign as formal sign of the act of assertion.
Over the past decade, many probabilistic and Bayesian methods have become very popular in the modelling of pragmatics, of which the most successful framework has been the Rational Speech Act[28] framework developed by Noah Goodman and Michael C. Frank, which has already seen much use in the analysis of metaphor,[29] hyperbole[30] and politeness.[31] In the Rational Speech Act framework, listeners and speakers both reason about the other's reasoning concerning the literal meaning of the utterances; as such, the resulting interpretation depends on, but is not necessarily determined by, the literal truth-conditional meaning of an utterance, and so the framework uses recursive reasoning to pursue a broadly Gricean co-operative ideal.
In the most basic form of the Rational Speech Act, there are three levels of inference. Beginning from the highest level, the pragmatic listener L1 reasons about the pragmatic speaker S1, and infers the likely world state s taking into account that S1 has deliberately chosen to produce utterance u; in turn, S1 chooses to produce utterance u by reasoning about how the literal listener L0 will understand the literal meaning of u, and so attempts to maximise the chances that L0 will correctly infer the world state s. As such, a simple schema of the Rational Speech Act reasoning hierarchy can be formulated for use in a reference game such that:[32]
$$
\begin{aligned}
L_1 &:\; P_{L_1}(s \mid u) \propto P_{S_1}(u \mid s) \cdot P(s) \\
S_1 &:\; P_{S_1}(u \mid s) \propto \exp\!\left(\alpha \, U_{S_1}(u; s)\right) \\
L_0 &:\; P_{L_0}(s \mid u) \propto [\![u]\!](s) \cdot P(s)
\end{aligned}
$$
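A minimal Python sketch of this three-level hierarchy for a toy reference game is given below; the lexicon (which utterances are literally true of which states), the uniform prior, and the choice of α = 1 with speaker utility equal to the log-probability assigned by the literal listener are simplifying assumptions, not requirements of the framework.

```python
# Minimal sketch of the Rational Speech Act hierarchy for a toy reference
# game. The lexicon, uniform prior and alpha = 1.0 are illustrative.
import math

states = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]
lexicon = {  # [[u]](s): literal truth of utterance u in state s
    "blue":   {"blue_square": 1, "blue_circle": 1, "green_square": 0},
    "green":  {"blue_square": 0, "blue_circle": 0, "green_square": 1},
    "square": {"blue_square": 1, "blue_circle": 0, "green_square": 1},
    "circle": {"blue_square": 0, "blue_circle": 1, "green_square": 0},
}
prior = {s: 1 / len(states) for s in states}
alpha = 1.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def L0(u):   # literal listener: P(s | u) proportional to [[u]](s) * P(s)
    return normalize({s: lexicon[u][s] * prior[s] for s in states})

def S1(s):   # pragmatic speaker: P(u | s) proportional to exp(alpha * log L0(s | u))
    scores = {}
    for u in utterances:
        p = L0(u).get(s, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def L1(u):   # pragmatic listener: P(s | u) proportional to S1(u | s) * P(s)
    return normalize({s: S1(s)[u] * prior[s] for s in states})

# Hearing "square", the pragmatic listener favours blue_square: if the green
# square had been intended, the speaker could have said "green" unambiguously.
print(L1("square"))
```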
Pragmatics (more specifically, Speech Act Theory's notion of theperformative) underpinsJudith Butler's theory ofgender performativity. InGender Trouble, they claim that gender and sex are not natural categories, but socially constructed roles produced by "reiterative acting."
In Excitable Speech they extend their theory of performativity to hate speech and censorship, arguing that censorship necessarily strengthens any discourse it tries to suppress and therefore, since the state has sole power to define hate speech legally, it is the state that makes hate speech performative.
Jacques Derridaremarked that some work done under Pragmatics aligned well with the program he outlined in his bookOf Grammatology.
Émile Benvenisteargued that thepronouns"I" and "you" are fundamentally distinct from other pronouns because of their role in creating thesubject.
Gilles DeleuzeandFélix Guattaridiscuss linguistic pragmatics in the fourth chapter ofA Thousand Plateaus("November 20, 1923--Postulates of Linguistics"). They draw three conclusions from Austin: (1) Aperformative utterancedoes not communicate information about an act second-hand, but it is the act; (2) Every aspect of language ("semantics, syntactics, or even phonematics") functionally interacts with pragmatics; (3) There is no distinction between language and speech. This last conclusion attempts to refuteSaussure'sdivision betweenlangueandparoleandChomsky'sdistinction betweendeep structure and surface structuresimultaneously.[33]
|
https://en.wikipedia.org/wiki/Pragmatics
|
Kevin Warwick(born 9 February 1954) is an English engineer and Deputy Vice-Chancellor (Research) atCoventry University.[8]He is known for his studies ondirect interfacesbetween computer systems and the humannervous system, and has also done research concerningrobotics.[9][10]
Kevin Warwick was born in 1954 inKeresley, Coventry, England,[11]and was raised in the nearby village ofRyton-on-Dunsmore,Warwickshire. His family attended a Methodist church but soon he began doubting the existence of God.[12]He attendedLawrence Sheriff SchoolinRugby, Warwickshire, where he was a contemporary of actorArthur Bostrom. He left school at the age of 16 to start anapprenticeshipwithBritish Telecom. In 1976, he was granted hisfirst degreeatAston University, followed by aPhDdegree and a research job atImperial College London.
He took up positions atSomerville Collegein Oxford,Newcastle University, theUniversity of Warwick, and theUniversity of Reading, before relocating toCoventry Universityin 2014.
Warwick is aChartered Engineer(CEng), aFellow of the Institution of Engineering and Technology(FIET) and a Fellow of theCity and Guilds of London Institute(FCGI). He is Visiting Professor at theCzech Technical University in Prague, theUniversity of Strathclyde,Bournemouth University, and the University of Reading, and in 2004 he was SeniorBeckman Fellowat theUniversity of Illinoisin the United States. He is also on the Advisory Boards of the Instinctive Computing Laboratory atCarnegie Mellon University,[13]and the Centre for Intermedia at theUniversity of Exeter.[14]
By the age of 40, Warwick had been awarded aDScdegree by both Imperial College London and theCzech Academy of Sciencesin Prague, for his research output in two entirely unrelated areas. He has received theIETAchievement Medal, theIET Mountbatten Medal, and in 2011 theEllison-Cliffe Medalfrom theRoyal Society of Medicine.[15]In 2000, Warwick presented theRoyal Institution Christmas Lectures, entitledThe Rise of Robots.[16]
Warwick performs research inartificial intelligence,biomedical engineering,control systemsandrobotics. Much of Warwick's early research was in the area ofdiscrete timeadaptive control. He introduced the firststate spacebasedself-tuningcontroller[17]and unified discrete time state space representations ofARMAmodels.[18]He has also contributed to mathematics,[19]power engineering[20]andmanufacturing production machinery.[21]
Warwick directed a research project funded by theEngineering and Physical Sciences Research Council(EPSRC), which investigated the use ofmachine learningand artificial intelligence (AI) techniques to suitably stimulate and translate patterns of electrical activity from livingcultured neural networksto use the networks for the control of mobile robots.[22]Hence the behaviour process for each robot was effectively provided by a biological brain.
Previously, Warwick helped to develop agenetic algorithmnamed Gershwyn, which was able to exhibit creativity in producing popular songs, learning what makes a hit record by listening to examples of previous successful songs.[23]Gershwyn appeared on BBC'sTomorrow's World, having been successfully used to mix music for Manus, a group consisting of the four younger brothers ofElvis Costello.
Another of Warwick's projects involving AI was the robot head, Morgui. The head, which contained five "senses" (vision,sound,infrared,ultrasoundandradar), was used to investigate sensor data fusion. It was X-rated by the University of Reading Research and Ethics Committee due to its image storage capabilities—anyone under the age of 18 who wished to interact with the robot had to obtain parental approval.[24]
Warwick has very outspoken opinions about the future, particularly with respect to AI and its effect on the human species. He argues that humanity will need to use technology to enhance itself to avoid being overtaken by machines.[25]He states that many human limitations, such assensorimotorabilities, can be outperformed by machines, and he has said on record that he wants to gain these abilities: "There is no way I want to stay a mere human."[26]
Warwick directed the University of Reading team in a number of European Community projects such as: FIDIS (Future of Identity in the Information Society), researching the future of identity; and ETHICBOTS and RoboLaw, both of which considered theethical aspectsof robots andcyborgs.[27]
Warwick's topics of interest have many ethical implications, some due to hishuman enhancementexperiments.[28]The ethical dilemmas of his research are used by theInstitute of Physicsas a case study[29]for schoolchildren and science teachers as a part of their formal Advanced level and GCSE studies. His work has also been discussed by the USAPresident's Council on Bioethicsand the USA President's Panel on Forward Engagements.[30]He is a member of theNuffield Council on BioethicsWorking Party onNovel Neurotechnologies.[31]
Along withTipu Azizand his team atJohn Radcliffe Hospital, Oxford, andJohn Steinof the University of Oxford, Warwick is helping to design the next generation ofdeep brain stimulationforParkinson's disease.[32]Instead of stimulating the brain all the time, the goal is for the device to predict when stimulation is needed and to apply the signals prior to any tremors occurring, thereby stopping tremors before they start.[33]Recent results have also shown that it is possible to identify different types of Parkinson's Disease.[34]
Warwick has directed a number of projects intended to interest schoolchildren in the technology with which he is involved. In 2000, he received theEPSRCMillennium Award for his Schools Robot League. In 2007, 16 school teams were involved in a project to design a humanoid robot to dance and then complete an assault course, with the final competition staged at theScience Museum, London. The project, entitled 'Androids Advance' was funded by EPSRC and was presented as a news item by Chinese television.[35]
Warwick contributes significantly to thepublic understanding of scienceby giving regular public lectures, participating with radio programmes, and through popular writing. He has appeared in numerous television documentary programmes onAI, robotics and the role of science fiction in science, such asHow William Shatner Changed the World,Future FantasticandExplorations.[36][37]He also appeared in theRay Kurzweil-inspired movieTranscendent Manalong withWilliam Shatner,Colin Powell, andStevie Wonder. He has guested on several television talk shows, includingLate Night with Conan O'Brien,Først & sist,Sunday BrunchandRichard & Judy.[37]He has appeared on the cover of a number of magazines, for example the February 2000 edition ofWired.[38]
In 2005, Warwick was the subject of anearly day motiontabled by members of theUK Parliament, in which he was congratulated for his work in attracting students to science and for teaching "in a way that makes the subject interesting and relevant so that more students will want to develop a career in science."[39]
In 2009, Warwick was interviewed about his work in cybernetics for two documentary features on the DVD release of the 1985Doctor WhostoryAttack of the Cybermen.[40]He was also an interview subject for the televised lectureThe Science of Doctor Whoin 2013.
In 2013, Warwick appeared as a guest on BBC Radio 4'sThe Museum of CuriositywithRobert LlewellynandCleo Rocos.[41]In 2014, he appeared on BBC Radio 4'sMidweekwithLibby Purves,Roger BannisterandRachael Stirling.[42]
Warwick's claims that robots can program themselves to avoid each other while operating in a group raise the issue of self-organisation. In particular, the works of Francisco Varela and Humberto Maturana, once purely speculative, have now become immediately relevant with respect to synthetic intelligence.
Cyborg-type systems, if they are to survive, need to be not onlyhomeostatic(meaning that they are able to preserve stable internal conditions in various environments) but also adaptive. Testing the claims of Varela and Maturana using synthetic devices is the more serious concern in the discussion about Warwick and those involved in similar research. "Pulling the plug" on independent devices cannot be as simple as it appears, because if the device displays sufficient intelligence, and assumes a diagnostic and prognostic stature, we may ultimately one day be forced to decide between what it could be telling us as counterintuitive (but correct) and our impulse to disconnect because of our limited and "intuitive" perceptions.
Warwick's robots seemed to exhibit behaviour not anticipated by the research, one such robot "committing suicide" because it could not cope with its environment.[43]In a more complex setting, it may be asked whether a "natural selection" might be possible, neural networks being the major operative.
The 1999 edition of theGuinness Book of Recordsrecorded that Warwick performed the first robot learning experiment using the Internet.[44]One robot, with anartificial neural networkbrain at the University of Reading in the UK, learned how to move around without bumping into things. It then taught, via the Internet, another robot atSUNY BuffaloinNew York Stateto behave in the same way.[45]The robot in the US was therefore not taught or programmed by a human, but rather by another robot based on what it had itself learnt.[46]
Hissing Sid was a robot cat that Warwick took on aBritish Councillecture tour of Russia, where he presented it in lectures at such places asMoscow State University. The robot was put together as a student project; its name came from the noise made by thepneumatic actuatorsused to drive its legs when walking. Hissing Sid also appeared on BBC TV'sBlue Peter, but became better known when it was refused a ticket byBritish Airwayson the grounds that they did not allow animals in the cabin.[47]
Warwick was also responsible for a robotic "magic chair" (based on theSCARA-form UMI RTX arm)[48]used on BBC TV'sJim'll Fix It. The chair provided the show's hostJimmy Savilewith tea and stored Jim'll Fix It badges for him to hand out to guests.[49]Warwick appeared on the programme himself for a Fix-it involving robots.[37]
Warwick was also involved in the development of the "Seven Dwarves" robots, a version of which was sold in kit form as "Cybot" on the cover ofReal Robotsmagazine in 2001. The magazine series guided its readers through the stages of building and programming Cybot, an artificially intelligent robot capable of making its own decisions and thinking for itself.[50]
Probably the most famous research undertaken by Warwick—and the origin of the nickname "Captain Cyborg"[4][5][6]given to him byThe Register—is the set of experiments known as Project Cyborg, in which an array was implanted into his arm, with the goal of him "becoming acyborg".[51]
The first stage of Project Cyborg, which began on 24 August 1998, involved a simpleRFIDtransmitter being implanted beneath Warwick's skin, which was used to control doors, lights, heaters, and other computer-controlled devices based on his proximity.[52]He explained that the main purpose of this experiment was to test the limits of what the body would accept, and how easy it would be to receive a meaningful signal from the microprocessor.[53]
The second stage of the research involved a more complex neural interface, designed and built especially for the experiment by Dr.Mark Gassonand his team at the University of Reading. This device consisted of aBrainGatesensor, a silicon square about 3mm wide, connected to an external "gauntlet" that housed supporting electronics. It was implanted under local anaesthetic on 14 March 2002 at theRadcliffe InfirmaryinOxford, where it was interfaced directly into Warwick's nervous system via themedian nervein his left wrist. Themicroelectrode arraythat was inserted contained 100electrodes, each the width of a human hair, of which 25 could be accessed at any one time, whereas the nerve that was being monitored carries many times that number of signals. The experiment proved successful, and the output signals were detailed enough to enable arobot arm, developed by Warwick's colleague Dr.Peter Kyberd, to mimic the actions of Warwick's own arm.[51][54]
By means of the implant, Warwick's nervous system was connected to the Internet atColumbia University, New York. From there he was able to control the robot arm at the University of Reading and obtain feedback from sensors in the finger tips. He also successfully connectedultrasonic sensorson a baseball cap and experienced a form of extrasensory input.[55]
In a highly publicised extension to the experiment, a simpler array was implanted into the arm of Warwick's wife, with the ultimate aim of one day creating a form oftelepathyorempathyusing the Internet to communicate the signal over huge distances. This experiment resulted in the first direct and purely electronic communication between the nervous systems of two humans.[56]Finally, the effect of the implant on Warwick's hand function was measured using theUniversity of Southampton's Hand Assessment Procedure (SHAP).[57]There was a fear that directly interfacing with the nervous system might cause some form of damage or interference, but no measurable side effect (nor any sign of rejection) was encountered.
Warwick and his colleagues claim that the Project Cyborg research could result in new medical tools for treating patients with damage to the nervous system, as well as assisting the more ambitious enhancements Warwick advocates. Sometranshumanistseven speculate that similar technologies could be used for technology-facilitated telepathy.[58]
A controversy began in August 2002, shortly after theSoham murders, when Warwick reportedly offered to implant atracking deviceinto an 11-year-old girl as an anti-abduction measure. The plan produced a mixed reaction, with endorsement from many worried parents but ethical concerns from children's societies.[59]As a result, the idea did not go ahead.
Anti-theft RFID chips are common in jewellery or clothing in some Latin American countries due to a high abduction rate,[60]and the companyVeriChipannounced plans in 2001 to expand its line of available medical information implants,[61]to beGPStrackable when combined with a separate GPS device.[62][63]
Warwick participated as aTuring Interrogatoron two occasions, judging machines in the 2001 and 2006Loebner Prizecompetitions, platforms for an "imitation game" as devised byAlan Turing. The 2001 Prize, held at theLondon Science Museum, featured Turing's "jury service" or one-to-oneTuring testsand was won byA.L.I.C.E.The 2006 contest staged "parallel-paired" Turing tests atUniversity College Londonand the winner wasRollo Carpenter. Warwick co-organised the 2008 Loebner Prize at the University of Reading, which also featured parallel-paired Turing tests.[64]
In 2012, he co-organised with Huma Shah a series of Turing tests held atBletchley Park. According to Warwick, the tests strictly adhered to the statements made by Alan Turing in his papers. Warwick himself participated in the tests as a hidden human.[65]Results of the tests were discussed in a number of academic papers.[66][67]One paper, entitled "Human Misidentification in Turing Tests", became one of the top three most-downloaded papers in theJournal of Experimental and Theoretical Artificial Intelligence.
In June 2014, Warwick helped Shah stage a series of Turing tests to mark the 60th anniversary of Alan Turing's death. The event was performed at theRoyal Society, London. Warwick regarded the winning chatbot, "Eugene Goostman", as having "passed the Turing test for the first time" by fooling a third of the event's judges into making an incorrect identification, and termed this a "milestone".[68]A paper containing all of the transcripts involving Eugene Goostman entitled "Can Machines Think? A Report on Turing Test Experiments at the Royal Society", has also become one of the top three most-downloaded papers in theJournal of Experimental and Theoretical Artificial Intelligence.[69]
Warwick was criticised in the context of the 2014 Royal Society event, where he claimed that software program Eugene Goostman had passed the Turing test on the basis of its performance. The software successfully convinced over 30% of the judges who could not identify it as being a machine, on the basis of a five-minute text chat. Critics stated that the software's claim of being a young non-native English speaker weakened the spirit of the test, as any grammatical and semantic inconsistencies could be excused as a consequence of limited proficiency in the English language.[70][71][72][73]Some critics also claimed that the software's performance had been exceeded by other programs in the past.[70][71]However, the 2014 tests were entirely unrestricted in terms of discussion topics, whereas the previous tests referenced by the critics had been limited to very specific subject areas. Additionally, Warwick was criticised by editor and entrepreneurMike Masnickfor exaggerating the significance of the Eugene Goostman program to the press.[71]
Warwick was a member of the 2001Higher Education Funding Council for England(unit 29)Research Assessment Exercisepanel onElectrical and Electronic Engineeringand was Deputy chairman for the same panel (unit 24) in 2008.[74]In March 2009, he was cited as being the inspiration of National Young Scientist of the Year, Peter Hatfield.[75]
Warwick presented theRoyal Institution Christmas Lecturesin December 2000, entitledRise of the Robots. Although the lectures were well received by some,[76]British computer scientistSimon Coltoncomplained about the choice of Warwick prior to his appearance. He claimed that Warwick "is not a spokesman for our subject" (Artificial Intelligence) and "allowing him influence through the Christmas lectures is a danger to the public perception of science".[77]In response to Warwick's claims that computers could be creative, Colton, who is a Professor of Computational Creativity, also said: "the AI community has done real science to reclaim words such as creativity and emotion which they claim computers will never have".[78]Subsequent letters were generally positive; Ralph Rayner wrote: "With my youngest son, I attended all of the lectures and found them balanced and thought-provoking. They were not sensationalist. I applaud Warwick for his lectures".[79]
Warwick received the Future Health Technology Award in 2000,[80] and was presented with the Institution of Engineering and Technology (IET) Achievement Medal in 2004.[81] In 2008, he was awarded the Mountbatten Medal.[82] In 2009 he received the Marcellin Champagnat award from Universidad Marista Guadalajara and the Golden Eurydice Award.[83] In 2011 he received the Ellison-Cliffe Medal from the Royal Society of Medicine.[84] In 2014, he was elected to the membership of the European Academy of Sciences and Arts.[85] In 2018 Warwick was inducted into the International Academy for Systems and Cybernetic Sciences[86] and in 2020 he was awarded an Honorary Fellowship of the Cybernetics Society.[87]
He is the recipient of ten honorary doctorates, these being from Aston University,[88] Coventry University,[8][89] Robert Gordon University,[93] Bradford University,[94][95] University of Bedfordshire,[89] Portsmouth University,[96] Kingston University,[97] Ss. Cyril and Methodius University of Skopje,[98] Edinburgh Napier University,[102] and Galgotias University.[103][104]
Warwick has both critics and endorsers, some of whom describe him as a "maverick".[105] Others see his work as "not very scientific" and more like "entertainment", whereas some regard him as "an extraordinarily creative experimenter", his presentations as "awesome" and his work as "profound".[106][107]
Warwick has written several books, articles and papers. A selection of his books:
Lectures (inaugural and keynote lectures):
Warwick is a regular presenter at the annual Careers Scotland Space School, University of Strathclyde.
He appeared at the 2009 World Science Festival[118] with Mary McDonnell, Nick Bostrom, Faith Salie and Hod Lipson.
|
https://en.wikipedia.org/wiki/Kevin_Warwick
|
Inmathematics, abinary relationassociates some elements of onesetcalled thedomainwith some elements of another set called thecodomain.[1]Precisely, a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}is a set ofordered pairs(x,y){\displaystyle (x,y)}, wherex{\displaystyle x}is an element ofX{\displaystyle X}andy{\displaystyle y}is an element ofY{\displaystyle Y}.[2]It encodes the common concept of relation: an elementx{\displaystyle x}isrelatedto an elementy{\displaystyle y},if and only ifthe pair(x,y){\displaystyle (x,y)}belongs to the set of ordered pairs that defines the binary relation.
An example of a binary relation is the "divides" relation over the set ofprime numbersP{\displaystyle \mathbb {P} }and the set ofintegersZ{\displaystyle \mathbb {Z} }, in which each primep{\displaystyle p}is related to each integerz{\displaystyle z}that is amultipleofp{\displaystyle p}, but not to an integer that is not amultipleofp{\displaystyle p}. In this relation, for instance, the prime number2{\displaystyle 2}is related to numbers such as−4{\displaystyle -4},0{\displaystyle 0},6{\displaystyle 6},10{\displaystyle 10}, but not to1{\displaystyle 1}or9{\displaystyle 9}, just as the prime number3{\displaystyle 3}is related to0{\displaystyle 0},6{\displaystyle 6}, and9{\displaystyle 9}, but not to4{\displaystyle 4}or13{\displaystyle 13}.
Binary relations, and especiallyhomogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. These include, among others:
Afunctionmay be defined as a binary relation that meets additional constraints, for example, it maps only one element in its domain to an element of its codomain (while there exist binary relations as one-to-many mappings).[3]Binary relations are also heavily used incomputer science.
A binary relation over setsX{\displaystyle X}andY{\displaystyle Y}is an element of thepower setofX×Y.{\displaystyle X\times Y.}Since the latter set is ordered byinclusion(⊆{\displaystyle \subseteq }), each relation has a place in thelatticeof subsets ofX×Y.{\displaystyle X\times Y.}A binary relation is called ahomogeneous relationwhenX=Y{\displaystyle X=Y}. A binary relation is also called aheterogeneous relationwhen it is not necessary thatX=Y{\displaystyle X=Y}.
Since relations are sets, they can be manipulated using set operations, includingunion,intersection, andcomplementation, and satisfying the laws of analgebra of sets. Beyond that, operations like theconverseof a relation and thecomposition of relationsare available, satisfying the laws of acalculus of relations, for which there are textbooks byErnst Schröder,[4]Clarence Lewis,[5]andGunther Schmidt.[6]A deeper analysis of relations involves decomposing them into subsets calledconcepts, and placing them in acomplete lattice.
In some systems ofaxiomatic set theory, relations are extended toclasses, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such asRussell's paradox.
A binary relation is the most studied special casen=2{\displaystyle n=2}of ann{\displaystyle n}-ary relationover setsX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}, which is a subset of theCartesian productX1×⋯×Xn.{\displaystyle X_{1}\times \cdots \times X_{n}.}[2]
Given setsX{\displaystyle X}andY{\displaystyle Y}, the Cartesian productX×Y{\displaystyle X\times Y}is defined as{(x,y)∣x∈Xandy∈Y},{\displaystyle \{(x,y)\mid x\in X{\text{ and }}y\in Y\},}and its elements are calledordered pairs.
Abinary relationR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}is a subset ofX×Y.{\displaystyle X\times Y.}[2][7]The setX{\displaystyle X}is called thedomain[2]orset of departureofR{\displaystyle R}, and the setY{\displaystyle Y}thecodomainorset of destinationofR{\displaystyle R}. In order to specify the choices of the setsX{\displaystyle X}andY{\displaystyle Y}, some authors define abinary relationorcorrespondenceas an ordered triple(X,Y,G){\displaystyle (X,Y,G)}, whereG{\displaystyle G}is a subset ofX×Y{\displaystyle X\times Y}called thegraphof the binary relation. The statement(x,y)∈R{\displaystyle (x,y)\in R}reads "x{\displaystyle x}isR{\displaystyle R}-related toy{\displaystyle y}" and is denoted byxRy{\displaystyle xRy}.[4][5][6][note 1]Thedomain of definitionoractive domain[2]ofR{\displaystyle R}is the set of allx{\displaystyle x}such thatxRy{\displaystyle xRy}for at least oney{\displaystyle y}. Thecodomain of definition,active codomain,[2]imageorrangeofR{\displaystyle R}is the set of ally{\displaystyle y}such thatxRy{\displaystyle xRy}for at least onex{\displaystyle x}. ThefieldofR{\displaystyle R}is the union of its domain of definition and its codomain of definition.[9][10][11]
WhenX=Y,{\displaystyle X=Y,}a binary relation is called ahomogeneous relation(orendorelation). To emphasize the fact thatX{\displaystyle X}andY{\displaystyle Y}are allowed to be different, a binary relation is also called aheterogeneous relation.[12][13][14]The prefixheterois from the Greek ἕτερος (heteros, "other, another, different").
A heterogeneous relation has been called arectangular relation,[14]suggesting that it does not have the square-like symmetry of ahomogeneous relation on a setwhereA=B.{\displaystyle A=B.}Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning asheterogeneousorrectangular, i.e. as relations where the normal case is that they are relations between different sets."[15]
The termscorrespondence,[16]dyadic relationandtwo-place relationare synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian productX×Y{\displaystyle X\times Y}without reference toX{\displaystyle X}andY{\displaystyle Y}, and reserve the term "correspondence" for a binary relation with reference toX{\displaystyle X}andY{\displaystyle Y}.[citation needed]
In a binary relation, the order of the elements is important; ifx≠y{\displaystyle x\neq y}thenyRx{\displaystyle yRx}can be true or false independently ofxRy{\displaystyle xRy}. For example,3{\displaystyle 3}divides9{\displaystyle 9}, but9{\displaystyle 9}does not divide3{\displaystyle 3}.
IfR{\displaystyle R}andS{\displaystyle S}are binary relations over setsX{\displaystyle X}andY{\displaystyle Y}thenR∪S={(x,y)∣xRyorxSy}{\displaystyle R\cup S=\{(x,y)\mid xRy{\text{ or }}xSy\}}is theunion relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}.
The identity element is the empty relation. For example,≤{\displaystyle \leq }is the union of < and =, and≥{\displaystyle \geq }is the union of > and =.
IfR{\displaystyle R}andS{\displaystyle S}are binary relations over setsX{\displaystyle X}andY{\displaystyle Y}thenR∩S={(x,y)∣xRyandxSy}{\displaystyle R\cap S=\{(x,y)\mid xRy{\text{ and }}xSy\}}is theintersection relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}.
The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2".
IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}, andS{\displaystyle S}is a binary relation over setsY{\displaystyle Y}andZ{\displaystyle Z}thenS∘R={(x,z)∣there existsy∈Ysuch thatxRyandySz}{\displaystyle S\circ R=\{(x,z)\mid {\text{ there exists }}y\in Y{\text{ such that }}xRy{\text{ and }}ySz\}}(also denoted byR;S{\displaystyle R;S}) is thecomposition relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andZ{\displaystyle Z}.
The identity element is the identity relation. The order ofR{\displaystyle R}andS{\displaystyle S}in the notationS∘R,{\displaystyle S\circ R,}used here agrees with the standard notational order forcomposition of functions. For example, the composition (is parent of)∘{\displaystyle \circ }(is mother of) yields (is maternal grandparent of), while the composition (is mother of)∘{\displaystyle \circ }(is parent of) yields (is grandmother of). For the former case, ifx{\displaystyle x}is the parent ofy{\displaystyle y}andy{\displaystyle y}is the mother ofz{\displaystyle z}, thenx{\displaystyle x}is the maternal grandparent ofz{\displaystyle z}.
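As a rough illustration of these operations (not part of the original article), the following Python sketch represents relations as finite sets of ordered pairs; the sets and element names are invented purely for the example.

```python
# Relations as Python sets of ordered pairs (illustrative values only).
R = {(1, "a"), (1, "b"), (2, "a")}   # a relation over X = {1, 2, 3} and Y = {"a", "b"}
S = {(2, "a"), (3, "b")}             # another relation over the same X and Y

union = R | S          # {(x, y) : xRy or xSy}
intersection = R & S   # {(x, y) : xRy and xSy}

# Composition: S2 is a relation over Y and Z, so "S2 after R" is a relation over X and Z.
S2 = {("a", "cat"), ("b", "dog")}
composition = {(x, z) for (x, y1) in R for (y2, z) in S2 if y1 == y2}

print(union)         # {(1, 'a'), (1, 'b'), (2, 'a'), (3, 'b')}
print(intersection)  # {(2, 'a')}
print(composition)   # {(1, 'cat'), (1, 'dog'), (2, 'cat')}
```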
IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}thenRT={(y,x)∣xRy}{\displaystyle R^{\textsf {T}}=\{(y,x)\mid xRy\}}is theconverse relation,[17]also calledinverse relation,[18]ofR{\displaystyle R}overY{\displaystyle Y}andX{\displaystyle X}.
For example,={\displaystyle =}is the converse of itself, as is≠{\displaystyle \neq }, and<{\displaystyle <}and>{\displaystyle >}are each other's converse, as are≤{\displaystyle \leq }and≥.{\displaystyle \geq .}A binary relation is equal to its converse if and only if it issymmetric.
IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}thenR¯={(x,y)∣¬xRy}{\displaystyle {\bar {R}}=\{(x,y)\mid \neg xRy\}}(also denoted by¬R{\displaystyle \neg R}) is thecomplementary relationofR{\displaystyle R}overX{\displaystyle X}andY{\displaystyle Y}.
For example,={\displaystyle =}and≠{\displaystyle \neq }are each other's complement, as are⊆{\displaystyle \subseteq }and⊈{\displaystyle \not \subseteq },⊇{\displaystyle \supseteq }and⊉{\displaystyle \not \supseteq },∈{\displaystyle \in }and∉{\displaystyle \not \in }, and fortotal ordersalso<{\displaystyle <}and≥{\displaystyle \geq }, and>{\displaystyle >}and≤{\displaystyle \leq }.
The complement of theconverse relationRT{\displaystyle R^{\textsf {T}}}is the converse of the complement:RT¯=R¯T.{\displaystyle {\overline {R^{\mathsf {T}}}}={\bar {R}}^{\mathsf {T}}.}
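A short Python sketch (again with made-up sets, not from the article) of the converse and complement, including a numerical check of the identity just stated:

```python
from itertools import product

X, Y = {1, 2, 3}, {"a", "b"}
R = {(1, "a"), (2, "b")}                      # illustrative relation over X and Y

converse = {(y, x) for (x, y) in R}           # the converse R^T, a relation over Y and X
complement = set(product(X, Y)) - R           # pairs (x, y) with not xRy

# The complement of the converse equals the converse of the complement.
complement_of_converse = set(product(Y, X)) - converse
converse_of_complement = {(y, x) for (x, y) in complement}
assert complement_of_converse == converse_of_complement
```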
IfX=Y,{\displaystyle X=Y,}the complement has the following properties:
IfR{\displaystyle R}is a binaryhomogeneous relationover a setX{\displaystyle X}andS{\displaystyle S}is a subset ofX{\displaystyle X}thenR|S={(x,y)∣xRyandx∈Sandy∈S}{\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S{\text{ and }}y\in S\}}is therestriction relationofR{\displaystyle R}toS{\displaystyle S}overX{\displaystyle X}.
IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}and ifS{\displaystyle S}is a subset ofX{\displaystyle X}thenR|S={(x,y)∣xRyandx∈S}{\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S\}}is theleft-restriction relationofR{\displaystyle R}toS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}.[clarification needed]
If a relation isreflexive, irreflexive,symmetric,antisymmetric,asymmetric,transitive,total,trichotomous, apartial order,total order,strict weak order,total preorder(weak order), or anequivalence relation, then so too are its restrictions.
However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "x{\displaystyle x}is parent ofy{\displaystyle y}" to females yields the relation "x{\displaystyle x}is mother of the womany{\displaystyle y}"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother.
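The following Python sketch illustrates this point on a tiny, hypothetical family relation; the names and the set of females are invented for the example.

```python
def transitive_closure(R):
    """Smallest transitive relation containing R (naive fixed-point iteration)."""
    closure = set(R)
    while True:
        new = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        if new <= closure:
            return closure
        closure |= new

# Hypothetical "is parent of" relation; Carl is the only non-female link.
parent = {("Ann", "Beth"), ("Beth", "Carl"), ("Carl", "Dana")}
female = {"Ann", "Beth", "Dana"}

restrict_then_close = transitive_closure(
    {(x, y) for (x, y) in parent if x in female and y in female})
close_then_restrict = {(x, y) for (x, y) in transitive_closure(parent)
                       if x in female and y in female}

# The closure of the restriction is contained in the restriction of the closure,
# and here the containment is strict: Ann is an ancestor of Dana only via Carl.
assert restrict_then_close <= close_then_restrict
assert ("Ann", "Dana") in close_then_restrict - restrict_then_close
```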
Also, the various concepts ofcompleteness(not to be confused with being "total") do not carry over to restrictions. For example, over thereal numbersa property of the relation≤{\displaystyle \leq }is that everynon-emptysubsetS⊆R{\displaystyle S\subseteq \mathbb {R} }with anupper boundinR{\displaystyle \mathbb {R} }has aleast upper bound(also called supremum) inR.{\displaystyle \mathbb {R} .}However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation≤{\displaystyle \leq }to the rational numbers.
A binary relationR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}is said to becontained ina relationS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}, writtenR⊆S,{\displaystyle R\subseteq S,}ifR{\displaystyle R}is a subset ofS{\displaystyle S}, that is, for allx∈X{\displaystyle x\in X}andy∈Y,{\displaystyle y\in Y,}ifxRy{\displaystyle xRy}, thenxSy{\displaystyle xSy}. IfR{\displaystyle R}is contained inS{\displaystyle S}andS{\displaystyle S}is contained inR{\displaystyle R}, thenR{\displaystyle R}andS{\displaystyle S}are calledequalwrittenR=S{\displaystyle R=S}. IfR{\displaystyle R}is contained inS{\displaystyle S}butS{\displaystyle S}is not contained inR{\displaystyle R}, thenR{\displaystyle R}is said to besmallerthanS{\displaystyle S}, writtenR⊊S.{\displaystyle R\subsetneq S.}For example, on therational numbers, the relation>{\displaystyle >}is smaller than≥{\displaystyle \geq }, and equal to the composition>∘>{\displaystyle >\circ >}.
Binary relations over setsX{\displaystyle X}andY{\displaystyle Y}can be represented algebraically bylogical matricesindexed byX{\displaystyle X}andY{\displaystyle Y}with entries in theBoolean semiring(addition corresponds to OR and multiplication to AND) wherematrix additioncorresponds to union of relations,matrix multiplicationcorresponds to composition of relations (of a relation overX{\displaystyle X}andY{\displaystyle Y}and a relation overY{\displaystyle Y}andZ{\displaystyle Z}),[19]theHadamard productcorresponds to intersection of relations, thezero matrixcorresponds to the empty relation, and thematrix of onescorresponds to the universal relation. Homogeneous relations (whenX=Y{\displaystyle X=Y}) form amatrix semiring(indeed, amatrix semialgebraover the Boolean semiring) where theidentity matrixcorresponds to the identity relation.[20]
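A small Python sketch of the matrix view (the element orderings and matrices are made up for illustration): Boolean matrix multiplication realizes composition, and the entrywise product realizes intersection.

```python
# 0/1 matrices over the Boolean semiring; rows indexed by X, columns by Y (or Z).
X, Y, Z = [0, 1], ["a", "b", "c"], ["p", "q"]
R  = [[1, 0, 1],
      [0, 1, 0]]        # a relation over X and Y
R2 = [[1, 1, 0],
      [0, 1, 0]]        # another relation over X and Y
S  = [[1, 0],
      [0, 1],
      [1, 1]]           # a relation over Y and Z

# Boolean matrix product of R and S gives the composition (a relation over X and Z).
comp = [[int(any(R[i][k] and S[k][j] for k in range(len(Y))))
         for j in range(len(Z))]
        for i in range(len(X))]

# Hadamard (entrywise) product gives the intersection of R and R2.
meet = [[R[i][j] & R2[i][j] for j in range(len(Y))] for i in range(len(X))]

print(comp)  # [[1, 1], [0, 1]]
print(meet)  # [[1, 0, 0], [0, 1, 0]]
```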
Some important types of binary relationsR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}are listed below.
Uniqueness properties:
Totality properties (only definable if the domainX{\displaystyle X}and codomainY{\displaystyle Y}are specified):
Uniqueness and totality properties (only definable if the domainX{\displaystyle X}and codomainY{\displaystyle Y}are specified):
If relations over proper classes are allowed:
Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems ofaxiomatic set theory. For example, to model the general concept of "equality" as a binary relation={\displaystyle =}, take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory.
In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" setA{\displaystyle A}, that contains all the objects of interest, and work with the restriction=A{\displaystyle =_{A}}instead of={\displaystyle =}. Similarly, the "subset of" relation⊆{\displaystyle \subseteq }needs to be restricted to have domain and codomainP(A){\displaystyle P(A)}(the power set of a specific setA{\displaystyle A}): the resulting set relation can be denoted by⊆A.{\displaystyle \subseteq _{A}.}Also, the "member of" relation needs to be restricted to have domainA{\displaystyle A}and codomainP(A){\displaystyle P(A)}to obtain a binary relation∈A{\displaystyle \in _{A}}that is a set.Bertrand Russellhas shown that assuming∈{\displaystyle \in }to be defined over all sets leads to a contradiction innaive set theory, seeRussell's paradox.
Another solution to this problem is to use a set theory with proper classes, such asNBGorMorse–Kelley set theory, and allow the domain and codomain (and so the graph) to beproper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple(X,Y,G){\displaystyle (X,Y,G)}, as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.)[31]With this definition one can for instance define a binary relation over every set and its power set.
Ahomogeneous relationover a setX{\displaystyle X}is a binary relation overX{\displaystyle X}and itself, i.e. it is a subset of the Cartesian productX×X.{\displaystyle X\times X.}[14][32][33]It is also simply called a (binary) relation overX{\displaystyle X}.
A homogeneous relationR{\displaystyle R}over a setX{\displaystyle X}may be identified with adirected simple graph permitting loops, whereX{\displaystyle X}is the vertex set andR{\displaystyle R}is the edge set (there is an edge from a vertexx{\displaystyle x}to a vertexy{\displaystyle y}if and only ifxRy{\displaystyle xRy}).
The set of all homogeneous relationsB(X){\displaystyle {\mathcal {B}}(X)}over a setX{\displaystyle X}is thepower set2X×X{\displaystyle 2^{X\times X}}which is aBoolean algebraaugmented with theinvolutionof mapping of a relation to itsconverse relation. Consideringcomposition of relationsas abinary operationonB(X){\displaystyle {\mathcal {B}}(X)}, it forms asemigroup with involution.
Some important properties that a homogeneous relationR{\displaystyle R}over a setX{\displaystyle X}may have are:
Apartial orderis a relation that is reflexive, antisymmetric, and transitive. Astrict partial orderis a relation that is irreflexive, asymmetric, and transitive. Atotal orderis a relation that is reflexive, antisymmetric, transitive and connected.[37]Astrict total orderis a relation that is irreflexive, asymmetric, transitive and connected.
Anequivalence relationis a relation that is reflexive, symmetric, and transitive.
For example, "x{\displaystyle x}dividesy{\displaystyle y}" is a partial, but not a total order onnatural numbersN,{\displaystyle \mathbb {N} ,}"x<y{\displaystyle x<y}" is a strict total order onN,{\displaystyle \mathbb {N} ,}and "x{\displaystyle x}is parallel toy{\displaystyle y}" is an equivalence relation on the set of all lines in theEuclidean plane.
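As a sketch (with invented example sets), these defining properties can be checked directly in Python for small finite relations:

```python
from itertools import product

def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

X = {1, 2, 3, 4, 6, 12}

# "a divides b" is reflexive, antisymmetric and transitive on X, hence a partial order.
divides = {(a, b) for a, b in product(X, repeat=2) if b % a == 0}
assert is_reflexive(divides, X) and is_antisymmetric(divides) and is_transitive(divides)

# "a is congruent to b mod 3" is reflexive, symmetric and transitive, hence an equivalence relation.
mod3 = {(a, b) for a, b in product(X, repeat=2) if a % 3 == b % 3}
assert is_reflexive(mod3, X) and is_symmetric(mod3) and is_transitive(mod3)
```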
All operations defined in section§ Operationsalso apply to homogeneous relations.
Beyond that, a homogeneous relation over a setX{\displaystyle X}may be subjected to closure operations like:
Developments inalgebraic logichave facilitated usage of binary relations. Thecalculus of relationsincludes thealgebra of sets, extended bycomposition of relationsand the use ofconverse relations. The inclusionR⊆S,{\displaystyle R\subseteq S,}meaning thataRb{\displaystyle aRb}impliesaSb{\displaystyle aSb}, sets the scene in alatticeof relations. But sinceP⊆Q≡(P∩Q¯=∅)≡(P∩Q=P),{\displaystyle P\subseteq Q\equiv (P\cap {\bar {Q}}=\varnothing )\equiv (P\cap Q=P),}the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according toSchröder rules, provides a calculus to work in thepower setofA×B.{\displaystyle A\times B.}
In contrast to homogeneous relations, thecomposition of relationsoperation is only apartial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter ofcategory theoryas in thecategory of sets, except that themorphismsof this category are relations. Theobjectsof the categoryRelare sets, and the relation-morphisms compose as required in acategory.[citation needed]
Binary relations have been described through their inducedconcept lattices:
AconceptC⊂R{\displaystyle C\subset R}satisfies two properties:
For a given relationR⊆X×Y,{\displaystyle R\subseteq X\times Y,}the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion⊑{\displaystyle \sqsubseteq }forming apreorder.
TheMacNeille completion theorem(1937) (that any partial order may be embedded in acomplete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices".[38]The decomposition is
Particular cases are considered below:E{\displaystyle E}total order corresponds to Ferrers type, andE{\displaystyle E}identity corresponds to difunctional, a generalization ofequivalence relationon a set.
Relations may be ranked by theSchein rankwhich counts the number of concepts necessary to cover a relation.[39]Structural analysis of relations with concepts provides an approach fordata mining.[40]
The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of anequivalence relation. One way this can be done is with an intervening setZ={x,y,z,…}{\displaystyle Z=\{x,y,z,\ldots \}}ofindicators. The partitioning relationR=FGT{\displaystyle R=FG^{\textsf {T}}}is acomposition of relationsusingfunctionalrelationsF⊆A×ZandG⊆B×Z.{\displaystyle F\subseteq A\times Z{\text{ and }}G\subseteq B\times Z.}Jacques Riguetnamed these relationsdifunctionalsince the compositionFGT{\displaystyle FG^{\mathsf {T}}}involves functional relations, commonly calledpartial functions.
In 1950 Riguet showed that such relations satisfy the inclusionRRTR⊆R.{\displaystyle RR^{\textsf {T}}R\subseteq R.}[41]
Inautomata theory, the termrectangular relationhas also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as alogical matrix, the columns and rows of a difunctional relation can be arranged as ablock matrixwith rectangular blocks of ones on the (asymmetric) main diagonal.[42]More formally, a relationR{\displaystyle R}onX×Y{\displaystyle X\times Y}is difunctional if and only if it can be written as the union of Cartesian productsAi×Bi{\displaystyle A_{i}\times B_{i}}, where theAi{\displaystyle A_{i}}are a partition of a subset ofX{\displaystyle X}and theBi{\displaystyle B_{i}}likewise a partition of a subset ofY{\displaystyle Y}.[43]
Using the notation{y∣xRy}=xR{\displaystyle \{y\mid xRy\}=xR}, a difunctional relation can also be characterized as a relationR{\displaystyle R}such that whereverx1R{\displaystyle x_{1}R}andx2R{\displaystyle x_{2}R}have a non-empty intersection, then these two sets coincide; formallyx1R∩x2R≠∅{\displaystyle x_{1}R\cap x_{2}R\neq \varnothing }impliesx1R=x2R.{\displaystyle x_{1}R=x_{2}R.}[44]
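A direct Python check of this characterization on small, made-up relations (a sketch, not taken from the article):

```python
def row(R, x):
    """The set xR = {y : xRy}."""
    return {y for (a, y) in R if a == x}

def is_difunctional(R):
    xs = {x for (x, _) in R}
    return all(row(R, x1) == row(R, x2)
               for x1 in xs for x2 in xs
               if row(R, x1) & row(R, x2))

# A union of disjoint "rectangles" {1, 2} x {"a", "b"} and {3} x {"c"} is difunctional...
R = {(x, y) for x in (1, 2) for y in ("a", "b")} | {(3, "c")}
assert is_difunctional(R)

# ...but two overlapping rows that are not identical violate the condition.
assert not is_difunctional({(1, "a"), (1, "b"), (2, "a")})
```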
In 1997 researchers found "utility of binary decomposition based on difunctional dependencies indatabasemanagement."[45]Furthermore, difunctional relations are fundamental in the study ofbisimulations.[46]
In the context of homogeneous relations, apartial equivalence relationis difunctional.
Astrict orderon a set is a homogeneous relation arising inorder theory.
In 1951Jacques Riguetadopted the ordering of aninteger partition, called aFerrers diagram, to extend ordering to binary relations in general.[47]
The corresponding logical matrix of a relation of Ferrers type has rows which each finish with a run of ones. Thus the dots of a Ferrers diagram are changed to ones and aligned on the right in the matrix.
An algebraic statement required for a Ferrers type relation R isRR¯TR⊆R.{\displaystyle R{\bar {R}}^{\textsf {T}}R\subseteq R.}
If any one of the relationsR,R¯,RT{\displaystyle R,{\bar {R}},R^{\textsf {T}}}is of Ferrers type, then all of them are.[48]
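The algebraic condition can be tested mechanically for small finite relations; the following Python sketch (with an invented example) verifies that the usual order on {1, 2, 3} is of Ferrers type.

```python
from itertools import product

def compose(R, S):
    """Relational product: {(x, z) : there is y with xRy and ySz}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def is_ferrers(R, X, Y):
    complement = set(product(X, Y)) - R
    comp_bar_T = {(y, x) for (x, y) in complement}   # transpose of the complement
    # Test that R composed with the transposed complement and then R again is contained in R.
    return compose(compose(R, comp_bar_T), R) <= R

X = Y = {1, 2, 3}
leq = {(a, b) for a, b in product(X, Y) if a <= b}   # a "staircase" logical matrix
assert is_ferrers(leq, X, Y)
```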
SupposeB{\displaystyle B}is thepower setofA{\displaystyle A}, the set of allsubsetsofA{\displaystyle A}. Then a relationg{\displaystyle g}is acontact relationif it satisfies three properties:
Theset membershiprelation,ϵ={\displaystyle \epsilon =}"is an element of", satisfies these properties soϵ{\displaystyle \epsilon }is a contact relation. The notion of a general contact relation was introduced byGeorg Aumannin 1970.[49][50]
In terms of the calculus of relations, sufficient conditions for a contact relation includeCTC¯⊆∋C¯≡C∋C¯¯⊆C,{\displaystyle C^{\textsf {T}}{\bar {C}}\subseteq \ni {\bar {C}}\equiv C{\overline {\ni {\bar {C}}}}\subseteq C,}where∋{\displaystyle \ni }is the converse of set membership (∈{\displaystyle \in }).[51]: 280
Every relationR{\displaystyle R}generates apreorderR∖R{\displaystyle R\backslash R}which is theleft residual.[52]In terms of converse and complements,R∖R≡RTR¯¯.{\displaystyle R\backslash R\equiv {\overline {R^{\textsf {T}}{\bar {R}}}}.}Forming the diagonal ofRTR¯{\displaystyle R^{\textsf {T}}{\bar {R}}}, the corresponding row ofRT{\displaystyle R^{\textsf {T}}}and column ofR¯{\displaystyle {\bar {R}}}will be of opposite logical values, so the diagonal is all zeros. Then
To showtransitivity, one requires that(R∖R)(R∖R)⊆R∖R.{\displaystyle (R\backslash R)(R\backslash R)\subseteq R\backslash R.}Recall thatX=R∖R{\displaystyle X=R\backslash R}is the largest relation such thatRX⊆R.{\displaystyle RX\subseteq R.}Then
Theinclusionrelation Ω on thepower setofU{\displaystyle U}can be obtained in this way from themembership relation∈{\displaystyle \in }on subsets ofU{\displaystyle U}:
Given a relationR{\displaystyle R}, itsfringeis the sub-relation defined asfringe(R)=R∩RR¯TR¯.{\displaystyle \operatorname {fringe} (R)=R\cap {\overline {R{\bar {R}}^{\textsf {T}}R}}.}
WhenR{\displaystyle R}is a partial identity relation, difunctional, or a block diagonal relation, thenfringe(R)=R{\displaystyle \operatorname {fringe} (R)=R}. Otherwise thefringe{\displaystyle \operatorname {fringe} }operator selects a boundary sub-relation described in terms of its logical matrix:fringe(R){\displaystyle \operatorname {fringe} (R)}is the side diagonal ifR{\displaystyle R}is an upper right triangularlinear orderorstrict order.fringe(R){\displaystyle \operatorname {fringe} (R)}is the block fringe ifR{\displaystyle R}is irreflexive (R⊆I¯{\displaystyle R\subseteq {\bar {I}}}) or upper right block triangular.fringe(R){\displaystyle \operatorname {fringe} (R)}is a sequence of boundary rectangles whenR{\displaystyle R}is of Ferrers type.
On the other hand,fringe(R)=∅{\displaystyle \operatorname {fringe} (R)=\emptyset }whenR{\displaystyle R}is adense, linear, strict order.[51]
Given two setsA{\displaystyle A}andB{\displaystyle B}, the set of binary relations between themB(A,B){\displaystyle {\mathcal {B}}(A,B)}can be equipped with aternary operation[a,b,c]=abTc{\displaystyle [a,b,c]=ab^{\textsf {T}}c}wherebT{\displaystyle b^{\mathsf {T}}}denotes theconverse relationofb{\displaystyle b}. In 1953Viktor Wagnerused properties of this ternary operation to definesemiheaps, heaps, and generalized heaps.[53][54]The contrast of heterogeneous and homogeneous relations is highlighted by these definitions:
There is a pleasant symmetry in Wagner's work between heaps, semiheaps, and generalised heaps on the one hand, and groups, semigroups, and generalised groups on the other. Essentially, the various types of semiheaps appear whenever we consider binary relations (and partial one-one mappings) betweendifferentsetsA{\displaystyle A}andB{\displaystyle B}, while the various types of semigroups appear in the case whereA=B{\displaystyle A=B}.
|
https://en.wikipedia.org/wiki/Binary_relation
|
The followingoutlineis provided as an overview of and topical guide to software engineering:
Software engineering– application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance ofsoftware; that is the application ofengineeringtosoftware.[1]
The ACM Computing Classification system is a poly-hierarchical ontology that organizes the topics of the field and can be used in semantic web applications and as a de facto standard classification system for the field. The major section "Software and its Engineering" provides an outline and ontology for software engineering.
Software engineers build software (applications, operating systems, system software) that people use.
Applications influence software engineering by pressuring developers to solve problems in new ways. For example, consumer software emphasizes low cost, medical software emphasizes high quality, and Internet commerce software emphasizes rapid development.
A platform combines computer hardware and an operating system. As platforms grow more powerful and less costly, applications and tools grow more widely available.
Skilled software engineers know a lot of computer science, including what is possible and impossible, and what is easy and hard for software.
Discrete mathematics is a key foundation of software engineering.
Other
Deliverables must be developed for many SE projects. Software engineers rarely make all of these deliverables themselves. They usually cooperate with the writers, trainers, installers, marketers, technical support people, and others who make many of these deliverables.
History of software engineering
Many people made important contributions to SE technologies, practices, or applications.
See also
|
https://en.wikipedia.org/wiki/Outline_of_software_engineering
|
Nice Guys Finish First (BBC Horizon television series) is a 1986 documentary by Richard Dawkins which discusses selfishness and cooperation, arguing that evolution often favors co-operative behaviour, and focusing especially on the tit for tat strategy of the prisoner's dilemma game. The film is approximately 50 minutes long and was produced by Jeremy Taylor.[1]
The twelfth chapter in Dawkins' book The Selfish Gene (added in the second edition, 1989) is also named Nice Guys Finish First and explores similar material.
In the opening scene, Richard Dawkins responds very precisely to what he views as a misrepresentation of his first book, The Selfish Gene; in particular, to its use by the right wing as justification for social Darwinism and laissez-faire economics (free-market capitalism). Dawkins has examined this issue throughout his career and focused much of his documentary The Genius of Charles Darwin on this very issue.
The concept of reciprocal altruism is a central theme of this documentary. Additionally, Dawkins examines the tragedy of the commons and the dilemma that it presents, using Port Meadow in Oxford, England, a large area of common land that has been battered by overgrazing, as an example. Fourteen academics as well as experts in game theory submitted their own computer programs to compete in a tournament to see which would win at the prisoner's dilemma. The winner was tit for tat, a program based on "equal retaliation", and Dawkins illustrates the four conditions of tit for tat.
In a second trial, this time with over sixty entrants, tit for tat won again.
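A minimal Python sketch of the tit for tat strategy in an iterated prisoner's dilemma; the payoff values, the opposing strategy, and the ten-round match are illustrative assumptions, not details taken from the documentary or the tournaments it describes.

```python
# Payoffs for (row player, column player); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # tit for tat retaliates after the first round
```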
|
https://en.wikipedia.org/wiki/Nice_Guys_Finish_First
|
In cognitive psychology, chunking is a process by which small individual pieces of a set of information are bound together to create a meaningful whole later on in memory.[1] The chunks, by which the information is grouped, are meant to improve short-term retention of the material, thus bypassing the limited capacity of working memory and allowing the working memory to be more efficient.[2][3][4] A chunk is a collection of basic units that are strongly associated with one another, and have been grouped together and stored in a person's memory. These chunks can be retrieved easily due to their coherent grouping.[5] It is believed that individuals create higher-order cognitive representations of the items within the chunk. The items are more easily remembered as a group than as the individual items themselves. These chunks can be highly subjective because they rely on an individual's perceptions and past experiences, which are linked to the information set. The size of the chunks generally ranges from two to six items but often differs based on language and culture.[6]
According to Johnson (1970), there are four main concepts associated with the memory process of chunking: chunk, memory code, decode and recode.[7]The chunk, as mentioned prior, is a sequence of to-be-remembered information that can be composed of adjacent terms. These items or information sets are to be stored in the same memory code. The process of recoding is where one learns the code for a chunk, and decoding is when the code is translated into the information that it represents.
The phenomenon of chunking as a memory mechanism is easily observed in the way individuals group numbers and information in day-to-day life. For example, when recalling a number such as 12101946, if the digits are grouped as 12, 10, and 1946, a mnemonic is created for this number as a month, day, and year: it would be stored as December 10, 1946, instead of a string of digits. Similarly, another illustration of the limited capacity of working memory as suggested by George Miller can be seen in the following example: while recalling a mobile phone number such as 9849523450, we might break it into 98 495 234 50. Thus, instead of remembering 10 separate digits that are beyond the putative "seven plus-or-minus two" memory span, we remember four groups of numbers.[8] An entire chunk can also be remembered simply by storing the beginning of the chunk in working memory, with long-term memory recovering the remainder of the chunk.[4]
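A trivial Python sketch of the regrouping described above; the chunk sizes are simply the ones used in the examples.

```python
def chunk(digits, sizes):
    """Split a digit string into consecutive chunks of the given sizes."""
    out, i = [], 0
    for size in sizes:
        out.append(digits[i:i + size])
        i += size
    return out

print(chunk("9849523450", [2, 3, 3, 2]))   # ['98', '495', '234', '50']: 4 chunks instead of 10 digits
print(chunk("12101946", [2, 2, 4]))        # ['12', '10', '1946']: month, day, year
```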
Amodality effectis present in chunking. That is, the mechanism used to convey the list of items to the individual affects how much "chunking" occurs.
Experimentally, it has been found that auditory presentation results in a larger amount of grouping in the responses of individuals than visual presentation does. Previous literature, such as George Miller'sThe Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information (1956)has shown that the probability of recall of information is greater when the chunking strategy is used.[8]As stated above, the grouping of the responses occurs as individuals place them into categories according to their inter-relatedness based on semantic and perceptual properties. Lindley (1966) showed that since the groups produced have meaning to the participant, this strategy makes it easier for an individual to recall and maintain information in memory during studies and testing.[9]Therefore, when "chunking" is used as a strategy, one can expect a higher proportion of correct recalls.
Various kinds of memory training systems and mnemonics include training and drills in specially designed recoding or chunking schemes.[10] Such systems existed before Miller's paper, but there was no convenient term to describe the general strategy and no substantive and reliable research. The term "chunking" is now often used in reference to these systems. As an illustration, patients with Alzheimer's disease typically experience working memory deficits; chunking is an effective method to improve patients' verbal working memory performance.[11] Patients with schizophrenia also experience working memory deficits which influence executive function; memory training procedures positively influence cognitive and rehabilitative outcomes.[12] Chunking has been shown to decrease the load on working memory in many ways. As well as remembering chunked information more easily, a person can also recall other non-chunked memories more easily because of the benefits chunking has on working memory.[4] For instance, in one study, participants with more specialized knowledge could reconstruct sequences of chess moves because they had larger chunks of procedural knowledge, which means that the level of expertise and the sorting order of the retrieved information are essential to how procedural knowledge chunks are retained in short-term memory.[13] Chunking has also been shown to have an influence in linguistics, such as in boundary perception.[14]
According to the research conducted by Dirlam (1972), a mathematical analysis was conducted to see what the efficient chunk size is. We are familiar with the size range that chunking holds, but Dirlam (1972) wanted to discover the most efficient chunk size. The mathematical findings have discovered that four or three items in each chunk is the most optimal.[15]
The wordchunkingcomes from a famous 1956 paper byGeorge A. Miller, "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information".[16]At a time wheninformation theorywas beginning to be applied in psychology, Miller observed that some human cognitive tasks fit the model of a "channel capacity" characterized by a roughly constant capacity in bits, but short-term memory did not. A variety of studies could be summarized by saying that short-term memory had a capacity of about "seven plus-or-minus two" chunks. Miller (1956) wrote, "With binary items, the span is about nine and, although it drops to about five withmonosyllabicEnglish words, the difference is far less than the hypothesis of constant information would require (see also,memory span). The span of immediate memory seems to be almost independent of the number of bits per chunk, at least over the range that has been examined to date." Miller acknowledged that "we are not very definite about what constitutes a chunk of information."[8]
Miller (1956) noted that according to this theory, it should be possible to increase short-term memory for low-information-content items effectively by mentally recoding them into a smaller number of high-information-content items. He imagined this process is useful in scenarios such as "a man just beginning to learnradio-telegraphic codehears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases." Thus, a telegrapher can effectively "remember" several dozen dits and dahs as a single phrase. Naïve subjects can remember a maximum of only nine binary items, but Miller reports a 1954 experiment in which people were trained to listen to a string of binary digits and (in one case) mentally group them into groups of five, recode each group into a name (for example, "twenty-one" for 10101), and remember the names. With sufficient practice, people found it possible to remember as many as forty binary digits. Miller wrote:
It is a little dramatic to watch a person get 40 binary digits in a row and then repeat them back without error. However, if you think of this merely as a mnemonic trick for extending the memory span, you will miss the more important point that is implicit in nearly all such mnemonic devices. The point is that recoding is an extremely powerful weapon for increasing the amount of information that we can deal with.[8]
Studies have shown that people have better memories when they are trying to remember items with which they are familiar. Similarly, people tend to create familiar chunks. This familiarity allows one to remember more individual pieces of content, and also more chunks as a whole. One well-known chunking study was conducted by Chase and Ericsson, who worked with an undergraduate student, SF, for over two years.[17]They wanted to see if a person's digit span memory could be improved with practice. SF began the experiment with a normal span of 7 digits. SF was a long-distance runner, and chunking strings of digits into race times increased his digit span. By the end of the experiment, his digit span had grown to 80 numbers. A later description of the research inThe Brain-Targeted Teaching Model for 21st Century Schoolsstates that SF later expanded his strategy by incorporating ages and years, but his chunks were always familiar, which allowed him to recall them more easily.[18]Someone who does not have knowledge in the expert domain (e.g. being familiar with mile/marathon times) would have difficulty chunking with race times and ultimately be unable to memorize as many numbers using this method. The idea that a person who does not have knowledge in the expert domain would have difficulty chunking could also be seen in an experiment of novice and expert hikers to see if they could remember different mountain scenes. From this study, it was found that the expert hikers had better recall and recognition of structured stimuli.[19]Another example could be seen with expert musicians in being able to chunk and recall encoded material that best meets the demands they are presented with at any given moment during the performance.[20]
Chunking and memory in chess revisited
Previous research has shown that chunking is an effective tool for enhancing memory capacity due to the nature of grouping individual pieces into larger, more meaningful groups that are easier to remember. Chunking is a popular tool for people who play chess, specifically a master.[21]Chase and Simon (1973a) discovered that the skill levels of chess players are attributed to long-term memory storage and the ability to copy and recollect thousands of chunks. The process helps acquire knowledge at a faster pace. Since it is an excellent tool for enhancing memory, a chess player who utilizes chunking has a higher chance of success. According to Chase and Simon, while re-examining (1973b), an expert chess master is able to access information in long-term memory storage quickly due to the ability to recall chunks. Chunks stored in long-term memory are related to the decision of the movement of board pieces due to obvious patterns.
Chunking models for education
Many years of research has concluded that chunking is a reliable process for gaining knowledge and organization of information. Chunking provides explanation to the behavior of experts, such as a teacher. A teacher can utilize chunking in their classroom as a way to teach the curriculum. Gobet (2005) proposed that teachers can use chunking as a method to segment the curriculum into natural components. A student learns better when focusing on key features of material, so it is important to create the segments to highlight the important information. By understanding the process of how an expert is formed, it is possible to find general mechanisms for learning that can be implemented into classrooms.[22]
Chunking is a method of learning that can be applied in a number of contexts and is not limited to learning verbal material.[23]Karl Lashley, in his classic paper onserial order, argued that the sequential responses that appear to be organized in a linear and flat fashion concealed an underlying hierarchical structure.[24]This was then demonstrated in motor control by Rosenbaum et al. in 1983.[25]Thus sequences can consist of sub-sequences and these can, in turn, consist of sub-sub-sequences. Hierarchical representations of sequences have an advantage over linear representations: They combine efficient local action at low hierarchical levels while maintaining the guidance of an overall structure. While the representation of a linear sequence is simple from a storage point of view, there can be potential problems during retrieval. For instance, if there is a break in the sequence chain, subsequent elements will become inaccessible. On the other hand, a hierarchical representation would have multiple levels of representation. A break in the link between lower-level nodes does not render any part of the sequence inaccessible, since the control nodes (chunk nodes) at the higher level would still be able to facilitate access to the lower-level nodes.
Chunks inmotor learningare identified by pauses between successive actions in Terrace (2001).[26]It is also suggested that during the sequence performance stage (after learning), participants download list items as chunks during pauses. He also argued for an operational definition of chunks suggesting a distinction between the notions of input and output chunks from the ideas of short-term and long-term memory. Input chunks reflect the limitation of working memory during the encoding of new information (how new information is stored in long-term memory), and how it is retrieved during subsequent recall. Output chunks reflect the organization of over-learned motor programs that are generated on-line in working memory. Sakai et al. (2003) showed that participants spontaneously organize a sequence into a number of chunks across a few sets and that these chunks were distinct among participants tested on the same sequence.[27]They also demonstrated that the performance of a shuffled sequence was poorer when the chunk patterns were disrupted than when the chunk patterns were preserved. Chunking patterns also seem to depend on the effectors used.
Perlman found in his series of experiments that tasks that are larger in size and broken down into smaller sections had faster respondents than the task as a large whole. The study suggests that chunking a larger task into a smaller more manageable task can produce a better outcome. The research also found that completing the task in a coherent order rather than swapping from one task to another can also produce a better outcome.[28]
Chunking is used in adults in different ways which can include low-level perceptual features, category membership, semantic relatedness, and statistical co-occurrences between items.[29]Although due to recent studies we are starting to realize that infants also use chunking. They also use different types of knowledges to help them with chunking like conceptual knowledge, spatiotemporal cue knowledge, and knowledge of their social domain.
There have been studies that use different chunking models like PARSER and the Bayesian model. PARSER is a chunking model designed to account for human behavior by implementing psychologically plausible processes of attention, memory, and associative learning.[30]In a recent study, it was determined that these chunking models like PARSER are seen in infants more than chunking models like Bayesian. PARSER is seen more because it is typically endowed with the ability to process up to three chunks simultaneously.[30]
When it comes to infants using their social knowledge they need to use abstract knowledge and subtle cues because they can not create a perception of their social group on their own. Infants can form chunks using shared features or spatial proximity between objects.[31]
Previous research shows that the mechanism of chunking is available in seven-month-old infants.[32]This means that chunking can occur even before the working memory capacity has completely developed. Knowing that the working memory has a very limited capacity, it can be beneficial to utilize chunking. In infants, whose working memory capacity is not completely developed, it can be even more helpful to chunk memories. These studies were done using the violation-of-expectation method and recording the amount of time the infants watched the objects in front of them. Although the experiment showed that infants can use chunking, researchers also concluded that an infant's ability to chunk memories will continue to develop over the next year of their lives.[32]
Working memory appears to store no more than three objects at a time in newborns and early toddlers. A study conducted in 2014,Infants use temporal regularities to chunk objects in memory,[33]allowed for new information and knowledge. This research showed that 14-month-old infants, like adults, can chunk using their knowledge of object categories: they remembered four total objects when an array contained two tokens of two different types (e.g., two cats and two cars), but not when the array contained four tokens of the same type (e.g., four different cats).[33]It demonstrates that newborns may employ spatial closeness to tie representations of particular items into chunks, benefiting memory performance as a result.[34]Despite the fact that newborns' working memory capacity is restricted, they may employ numerous forms of information to tie representations of individual things into chunks, enhancing memory efficiency.[34]
This usage derives from Miller's (1956) idea of chunking as grouping, but the emphasis is now onlong-term memoryrather than only onshort-term memory. A chunk can then be defined as "a collection of elements having strong associations with one another, but weak associations with elements within other chunks".[35]The emphasis of chunking on long-term memory is supported by the idea that chunking only exists in long-term memory, but it assists with reintegration, which is involved in the recall of information in short-term memory. It may be easier to recall information in short-term memory if the information has been represented through chunking in long-term memory. Norris and Kalm (2021) argued that "reintegration can be achieved by treating recall from memory as a process ofBayesian inferencewhereby representations of chunks in LTM (long-term memory) provide the priors that can be used to interpret a degraded representation in STM (short-term memory)".[36]In Bayesian inference, priors refer to the initial beliefs regarding the relative frequency of an event occurring instead of other plausible events occurring. When one who holds the initial beliefs receives more information, one will determine the likelihood of each of the plausible events that could happen and thus predict the specific event that will occur. Chunks in long-term memory are involved in forming the priors, and they assist with determining the likelihood and prediction of the recall of information in short-term memory. For example, if an acronym and its full meaning already exist in long-term memory, the recall of information regarding that acronym will be easier in short-term memory.[36]
Chase and Simon in 1973 and later Gobet, Retschitzki, and de Voogt in 2004 showed that chunking could explain several phenomena linked toexpertisein chess.[35][37]Following a brief exposure to pieces on a chessboard, skilled chess players were able to encode and recall much larger chunks than novice chess players. However, this effect is mediated by specific knowledge of the rules of chess; when pieces were distributed randomly (including scenarios that were not common or allowed in real games), the difference in chunk size between skilled and novice chess players was significantly reduced. Several successful computational models of learning and expertise have been developed using this idea, such asEPAM(Elementary Perceiver and Memorizer) andCHREST(Chunk Hierarchy and Retrieval Structures). Chunking may be demonstrated in the acquisition of a memory skill, which was demonstrated by S. F., an undergraduate student with average memory and intelligence, who increased his digit span from seven to almost 80 within 20 months or after at least 230 hours.[38]S. F. was able to improve his digit span partly through mnemonic associations, which is a form of chunking. S. F. associated digits, which were unfamiliar information to him, with running times, ages, and dates, which were familiar information to him. Ericsson et al. (1980) initially hypothesized that S. F. increased digit span was due to an increase in his short-term memory capacity. However, they rejected this hypothesis when they found that his short-memory capacity was always the same, considering that he "chunked" only three to four digits at once. Furthermore, he never rehearsed more than six digits at once nor rehearsed more than four groups in a supergroup. Lastly, if his short-term memory capacity increased, then he would have shown a greater capacity for the alphabets; he did not.[38]Based on these contradictions, Ericsson et al. (1980) later concluded that S. F. was able to increase his digit span due to "the use of mnemonic associations in long-term memory," which further supports that chunking may exist in short-term memory rather than long-term memory.
Chunking has also been used with models oflanguage acquisition.[39]The use of chunk-based learning in language has been shown to be helpful. Understanding a group of basic words and then giving different categories of associated words to build on comprehension has shown to be an effective way to teach reading and language to children.[40]Research studies have found that adults and infants were able to parse the words of a made-up language when they were exposed to a continuous auditory sequence of words arranged in random order.[41]One of the explanations was that they may parse the words using small chunks that correspond to the made-up language. Subsequent studies have supported that when learning involves statistical probabilities (e.g., transitional probabilities in language), it may be better explained via chunking models. Franco and Destrebecqz (2012) further studied chunking in language acquisition and found that the presentation of a temporal cue was associated with a reliable prediction of the chunking model regarding learning, but the absence of the cue was associated with increased sensitivity to the strength of transitional probabilities.[41]Their findings suggest that the chunking model can only explain certain aspects of learning, specifically language acquisition.
Norris conducted a study in 2020 of chunking and short-term memory recollection, finding that when a chunk is given, it is stored as a single item despite being a relatively large amount of information. This finding suggests that chunks should be less susceptible to decay or interference when they are recalled. The study used visual stimuli where all the items were given simultaneously. Items of two and three were found to be recalled easier than singles, and more singles were recalled when in a group with threes.[42]
Chunking can be a form of data suppression that allows more information to be stored in short-term memory. Rather than measuring verbal short-term memory by the number of items stored, Miller (1956) suggested that verbal short-term memory is stored as chunks. Later studies were done to determine whether chunking is a form of data compression when there is limited space for memory. Chunking works as data compression when it comes to redundant information, and it allows more information to be stored in short-term memory. However, memory capacity may vary.[36]
An experiment was done to see how chunking could be beneficial to patients who had Alzheimer's disease. This study was based on how chunking was used to improve working memory in normal young people. Working memory is impaired in the early stages of Alzheimer's disease which affects the ability to do everyday tasks. It also affects executive control of working memory. It was found that participants who had mild Alzheimer's disease were able to use working memory strategies to enhance verbal and spatial working memory performance.[43]
It has been long thought that chunking can improve working memory. A study was done to see how chunking can improve working memory when it came to symbolic sequences and gating mechanisms. This was done by having 25 participants learn 16 sequences through trial and error. The target was presented alongside a distractor and participants were to identify the target by using right or left buttons on a computer mouse. The final analysis was done on only 19 participants. The results showed that chunking does improve symbolic sequence performance through decreasing cognitive load and real-time strategy.[44]Chunking has proved to be effective in reducing the load on adding items into working memory. Chunking allows more items to be encoded into working memory with more available to transfer into long-term memory.[45]
Chekaf, Cowan, and Mathy (2016)[46]looked at how immediate memory relates to the formation of chunks, proposing a two-factor theory of chunk formation in immediate memory. The factors are compressibility and the order of the information. Compressibility refers to making information more compact and condensed: the material is transformed from something complex into something more simplified, so compressibility relates to chunking through predictability. As for the second factor, the order in which information is presented can affect whether a compressible pattern is discovered. The order, along with the process of compressing the material, may therefore increase the probability that chunking occurs; the two factors interact with one another. Chekaf, Cowan, and Mathy (2016)[46]gave an example: the material "1, 2, 3, 4" can be compressed to "numbers one through four", but if the material is presented as "1, 3, 2, 4" it cannot be compressed in the same way, because the order in which it is presented is different. Therefore, compressibility and order play an important role in chunking.
|
https://en.wikipedia.org/wiki/Chunking_(psychology)
|
In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known. Here, information is measured in shannons, nats, or hartleys. The entropy of Y conditioned on X is written as H(Y|X).
The conditional entropy of Y given X is defined as
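\mathrm{H}(Y\mid X)\;=\;-\sum_{x\in{\mathcal{X}},\,y\in{\mathcal{Y}}}p(x,y)\,\log\frac{p(x,y)}{p(x)}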
where {\mathcal{X}} and {\mathcal{Y}} denote the support sets of X and Y.
Note: Here, the convention is that the expression 0 log 0 should be treated as being equal to zero. This is because \lim_{\theta\to 0^{+}}\theta\log\theta=0.[1]
Intuitively, notice that by definition of expected value and of conditional probability, H(Y|X) can be written as H(Y|X) = E[f(X,Y)], where f is defined as f(x,y) := −log(p(x,y)/p(x)) = −log(p(y|x)). One can think of f as associating each pair (x,y) with a quantity measuring the information content of (Y=y) given (X=x). This quantity is directly related to the amount of information needed to describe the event (Y=y) given (X=x). Hence by computing the expected value of f over all pairs of values (x,y) ∈ 𝒳 × 𝒴, the conditional entropy H(Y|X) measures how much information, on average, the variable X encodes about Y.
Let H(Y|X=x) be the entropy of the discrete random variable Y conditioned on the discrete random variable X taking a certain value x. Denote the support sets of X and Y by 𝒳 and 𝒴. Let Y have probability mass function p_Y(y). The unconditional entropy of Y is calculated as H(Y) := E[I(Y)], i.e.
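\mathrm{H}(Y)\;=\;\sum_{i=1}^{n}\Pr(Y=y_{i})\,\mathrm{I}(y_{i})\;=\;-\sum_{i=1}^{n}p_{Y}(y_{i})\log_{2}p_{Y}(y_{i})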
where I(y_i) is the information content of the outcome of Y taking the value y_i. The entropy of Y conditioned on X taking the value x is defined by:
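\mathrm{H}(Y\mid X=x)\;=\;-\sum_{y\in{\mathcal{Y}}}\Pr(Y=y\mid X=x)\,\log_{2}\Pr(Y=y\mid X=x)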
Note that H(Y|X) is the result of averaging H(Y|X=x) over all possible values x that X may take. Also, if the above sum is taken over a sample y_1, …, y_n, the expected value E_X[H(y_1, …, y_n ∣ X=x)] is known in some domains as equivocation.[2]
Given discrete random variables X with image 𝒳 and Y with image 𝒴, the conditional entropy of Y given X is defined as the weighted sum of H(Y|X=x) for each possible value of x, using p(x) as the weights:[3]: 15
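\mathrm{H}(Y\mid X)\;=\;\sum_{x\in{\mathcal{X}}}p(x)\,\mathrm{H}(Y\mid X=x)\;=\;-\sum_{x\in{\mathcal{X}}}\sum_{y\in{\mathcal{Y}}}p(x,y)\,\log\frac{p(x,y)}{p(x)}
H(Y|X) = 0 if and only if the value of Y is completely determined by the value of X.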
Conversely, H(Y|X) = H(Y) if and only if Y and X are independent random variables.
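For concreteness, a minimal C sketch of the weighted-sum computation above, using an arbitrary made-up 2×2 joint distribution (the table values and names are purely illustrative):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Arbitrary joint distribution p(x,y) over two binary variables (rows: x, columns: y). */
    double p[2][2] = { {0.30, 0.20},
                       {0.10, 0.40} };
    double h_y_given_x = 0.0;

    for (int x = 0; x < 2; x++) {
        double px = p[x][0] + p[x][1];              /* marginal p(x) */
        for (int y = 0; y < 2; y++) {
            if (p[x][y] > 0.0)                      /* convention: 0 log 0 = 0 */
                h_y_given_x -= p[x][y] * log2(p[x][y] / px);
        }
    }
    printf("H(Y|X) = %.4f bits\n", h_y_given_x);
    return 0;
}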
Assume that the combined system determined by two random variables X and Y has joint entropy H(X,Y), that is, we need H(X,Y) bits of information on average to describe its exact state. Now if we first learn the value of X, we have gained H(X) bits of information. Once X is known, we only need H(X,Y) − H(X) bits to describe the state of the whole system. This quantity is exactly H(Y|X), which gives the chain rule of conditional entropy:
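\mathrm{H}(Y\mid X)\;=\;\mathrm{H}(X,Y)-\mathrm{H}(X)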
The chain rule follows from the above definition of conditional entropy:
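\mathrm{H}(Y\mid X)\;=\;-\sum_{x,y}p(x,y)\log\frac{p(x,y)}{p(x)}\;=\;-\sum_{x,y}p(x,y)\log p(x,y)+\sum_{x,y}p(x,y)\log p(x)\;=\;\mathrm{H}(X,Y)-\mathrm{H}(X)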
In general, a chain rule for multiple random variables holds:
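\mathrm{H}(X_{1},X_{2},\ldots,X_{n})\;=\;\sum_{i=1}^{n}\mathrm{H}(X_{i}\mid X_{1},\ldots,X_{i-1})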
It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
Bayes' rule for conditional entropy states
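\mathrm{H}(Y\mid X)\;=\;\mathrm{H}(X\mid Y)-\mathrm{H}(X)+\mathrm{H}(Y)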
Proof. H(Y|X) = H(X,Y) − H(X) and H(X|Y) = H(Y,X) − H(Y). Symmetry entails H(X,Y) = H(Y,X). Subtracting the two equations implies Bayes' rule.
If Y is conditionally independent of Z given X we have:
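\mathrm{H}(Y\mid X,Z)\;=\;\mathrm{H}(Y\mid X)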
For any X and Y:
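\mathrm{H}(Y\mid X)\le\mathrm{H}(Y)
\mathrm{H}(X,Y)=\mathrm{H}(X\mid Y)+\mathrm{H}(Y\mid X)+\operatorname{I}(X;Y)
\mathrm{H}(X,Y)=\mathrm{H}(X)+\mathrm{H}(Y)-\operatorname{I}(X;Y)
\operatorname{I}(X;Y)\le\mathrm{H}(X)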
where I(X;Y) is the mutual information between X and Y.
For independent X and Y:
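\mathrm{H}(Y\mid X)=\mathrm{H}(Y)\quad\text{and}\quad\mathrm{H}(X\mid Y)=\mathrm{H}(X)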
Although the specific conditional entropy H(X|Y=y) can be either less or greater than H(X) for a given random variate y of Y, H(X|Y) can never exceed H(X).
The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called conditional differential (or continuous) entropy. Let X and Y be continuous random variables with a joint probability density function f(x,y). The differential conditional entropy h(X|Y) is defined as[3]: 249
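h(X\mid Y)\;=\;-\int_{{\mathcal{X}},{\mathcal{Y}}}f(x,y)\,\log f(x\mid y)\,dx\,dy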
In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative.
As in the discrete case there is a chain rule for differential entropy:
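h(Y\mid X)\;=\;h(X,Y)-h(X)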
Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:
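\operatorname{I}(X;Y)\;=\;h(X)+h(Y)-h(X,Y)\;=\;h(X)-h(X\mid Y)\;=\;h(Y)-h(Y\mid X)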
The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any Gaussian random variable X, observation Y and estimator X̂ the following holds:[3]: 255
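\mathbb{E}\!\left[\bigl(X-\widehat{X}(Y)\bigr)^{2}\right]\;\ge\;\frac{1}{2\pi e}\,e^{2h(X\mid Y)}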
This is related to the uncertainty principle from quantum mechanics.
In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.
|
https://en.wikipedia.org/wiki/Conditional_entropy
|
Corporate financeis an area offinancethat deals with the sources of funding, and thecapital structureof businesses, the actions that managers take to increase thevalueof the firm to theshareholders, and the tools and analysis used to allocate financial resources. The primary goal of corporate finance is tomaximizeor increaseshareholder value.[1]
Correspondingly, corporate finance comprises two main sub-disciplines.[citation needed]Capital budgetingis concerned with the setting of criteria about which value-addingprojectsshould receive investmentfunding, and whether to finance that investment withequityordebtcapital.Working capitalmanagement is the management of the company's monetary funds that deal with the short-term operating balance ofcurrent assetsandcurrent liabilities; the focus here is on managing cash,inventories, and short-term borrowing and lending (such as the terms on credit extended to customers).
The terms corporate finance andcorporate financierare also associated withinvestment banking. The typical role of an investment bank is to evaluate the company's financial needs and raise the appropriate type of capital that best fits those needs. Thus, the terms "corporate finance" and "corporate financier" may be associated with transactions in which capital is raised in order to create, develop, grow or acquire businesses.[2]
Although it is in principle different frommanagerial financewhich studies the financial management of all firms, rather thancorporationsalone, the main concepts in the study of corporate finance are applicable to the financial problems of all kinds of firms.Financial managementoverlaps with the financial function of theaccounting profession. However,financial accountingis the reporting of historical financial information, while financial management is concerned with the deployment of capital resources to increase a firm's value to the shareholders.
Corporate finance for the pre-industrial world began to emerge in theItalian city-statesand thelow countriesof Europe from the 15th century.
The Dutch East India Company (also known by the abbreviation "VOC" in Dutch) was the firstpublicly listed companyever to pay regulardividends.[3][4][5]The VOC was also the first recordedjoint-stock companyto get a fixedcapital stock. Public markets for investment securities developed in theDutch Republicduring the 17th century.[6][7][8]
By the early 1800s,Londonacted as a center of corporate finance for companies around the world, which innovated new forms of lending and investment; seeCity of London § Economy.
The twentieth century brought the rise ofmanagerial capitalismand common stock finance, withshare capitalraised throughlistings, in preference to othersources of capital.
Modern corporate finance, alongsideinvestment management, developed in the second half of the 20th century, particularly driven by innovations in theory and practice in theUnited Statesand Britain.[9][10][11][12][13][14]Here, see the later sections ofHistory of banking in the United Statesand ofHistory of private equity and venture capital.
The primary goal of financial management[15]is to maximize or to continually increase shareholder value (see Fisher separation theorem).[a]Here, the three main questions that corporate finance addresses are: what long-term investments should we make? What methods should we employ to finance the investment? How do we manage our day-to-day financial activities? These three questions lead to the primary areas of concern in corporate finance: capital budgeting, capital structure, and working capital management.[19][20]This then requires that managers find an appropriate balance between investments in "projects" that increase the firm's long-term profitability, and paying excess cash in the form of dividends to shareholders; short-term considerations, such as paying back creditor-related debt, will also feature.[15][21]
Choosing between investment projects will thus be based upon several inter-related criteria.[1](1) Corporate management seeks to maximize the value of the firm by investing in projects which yield a positive net present value when valued using an appropriate discount rate - the "hurdle rate" - in consideration of risk. (2) These projects must also be financed appropriately. (3) If no growth is possible by the company and excess cash surplus is not needed by the firm, then financial theory suggests that management should return some or all of the excess cash to shareholders (i.e., distribution via dividends).[22]
The first two criteria concern "capital budgeting", the planning of value-adding, long-term corporate financial projects relating to investments funded through and affecting the firm'scapital structure, and where management must allocate the firm's limited resources between competing opportunities ("projects").[23]Capital budgeting is thus also concerned with the setting of criteria about which projects should receive investment funding to increase the value of the firm, and whether to finance that investment with equity or debt capital.[24]Investments should be made on the basis of value-added to the future of the corporation. Projects that increase a firm's value may include a wide variety of different types of investments, including but not limited to, expansion policies, ormergers and acquisitions.
The third criterion relates todividend policy.
In general, managers ofgrowth companies(i.e. firms that earn high rates of return on invested capital) will use most of the firm's capital resources and surplus cash on investments and projects so the company can continue to expand its business operations into the future. Whencompanies reach maturity levelswithin their industry (i.e. companies that earn approximately average or lower returns on invested capital), managers of these companies will use surplus cash to payout dividends to shareholders.
Thus, when no growth or expansion is likely, and excess cash surplus exists and is not needed, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.[25][26]
Achieving the goals of corporate finance requires that any corporate investment be financed appropriately.[27]The sources of financing are, generically,capital self-generated by the firmand capital from external funders, obtained by issuing newdebtandequity(andhybrid-orconvertible securities). However, as above, since both hurdle rate and cash flows (and hence the riskiness of the firm) will be affected, the financing mix will impact the valuation of the firm, and a considered decision[28]is required here.
SeeBalance sheet,WACC.
Finally, there is much theoretical discussion as to other considerations that management might weigh here.
Corporations, as outlined, may rely on borrowed funds (debt capital or credit) as sources of investment to sustain ongoing business operations or to fund future growth. Debt comes in several forms, such as through bank loans, notes payable, or bonds issued to the public. Bonds require the corporation to make regular interest payments (interest expenses) on the borrowed capital until the debt reaches its maturity date, at which point the firm must pay back the obligation in full. (An exception is zero-coupon bonds, or "zeros".) Debt payments can also be made in the form of a sinking fund provision, whereby the corporation pays annual installments of the borrowed debt above regular interest charges. Corporations that issue callable bonds are entitled to pay back the obligation in full whenever the company feels it is in its best interest to pay off the debt early. If interest expenses cannot be met by the corporation through cash payments, the firm may also use collateral assets as a form of repaying its debt obligations (or through the process of liquidation).
Especially re debt funded corporations, seeBankruptcyandFinancial distress.
Under some treatments (especially for valuation)leasesare regarded as debt: the payments are set; they are tax deductible; failing to make them results in the loss of the asset.[29]
Corporations can alternatively sell shares of the company to investors to raise capital. Investors, or shareholders, expect the value of the company to trend upward (or appreciate) over time, making their investment a profitable purchase. As outlined:
Shareholder value is increased when corporations invest equity capital and other funds into projects (or investments) that earn a positive rate of return for the owners. Investors then prefer to buy shares of stock in companies that will consistently earn a positive rate ofreturn on capital(on equity) in the future, thus increasing the market value of the stock of that corporation.
Shareholder value may also be increased when corporations payout excess cash surplus (funds that are not needed for business) in the form ofdividends.Internal financing, often, is constituted ofretained earnings, i.e. those remaining after dividends; this provides,per some measures, the cheapest form of funding.
Preferred stockis a specialized form of financing which combines properties of common stock and debt instruments, and may then be considered ahybrid security. Preferreds are senior (i.e. higher ranking) tocommon stock, but subordinate tobondsin terms of claim (or rights to their share of the assets of the company).[30]Preferred stock usually carries novoting rights,[31]but may carry adividendand may have priority over common stock in the payment of dividends and uponliquidation. Terms of the preferred stock are stated in a "Certificate of Designation".
Similar to bonds, preferred stocks are rated by the major credit-rating companies. The rating for preferreds is generally lower, since preferred dividends do not carry the same guarantees as interest payments from bonds and they are junior to all creditors.[32]Preferred stock is then a special class of shares which may have any combination of features not possessed by common stock.
The following features are usually associated with preferred stock:[33]
As outlined, the financing "mix" will impact the valuation (as well as the cashflows) of the firm, and must therefore be structured appropriately:
there are then two interrelated considerations[28]here:
The above, are the primary objectives in deciding on the firm's capitalization structure. Parallel considerations, also, will factor into management's thinking.
The starting point for discussion here is theModigliani–Miller theorem.
This states, through two connected Propositions, that in a "perfect market" how a firm is financed is irrelevant to its value:
(i) the value of a company is independent of its capital structure; (ii) the cost of equity will be the same for a leveraged firm and an unleveraged firm.
"Modigliani and Miller", however,is generally viewedas a theoretical result, and in practice, management will here too focus on enhacing firm value and / or reducing the cost of funding.
Re value, much of the discussion falls under the umbrella of theTrade-Off Theoryin which firms are assumed to trade-off thetax benefits of debtwith thebankruptcy costs of debtwhen choosing how to allocate the company's resources, finding an optimum re firm value.
Thecapital structure substitution theoryhypothesizes that management manipulates the capital structure such thatearnings per share(EPS) are maximized.
Re cost of funds, thePecking Order Theory(Stewart Myers) suggests that firms avoidexternal financingwhile they haveinternal financingavailable and avoid new equity financing while they can engage in new debt financing at reasonably lowinterest rates.
One of the more recent innovations in this area from a theoretical point of view is themarket timing hypothesis. This hypothesis, inspired by thebehavioral financeliterature, states that firms look for the cheaper type of financing regardless of their current levels of internal resources, debt and equity.
The process of allocating financial resources to majorinvestment- orcapital expenditureis known ascapital budgeting.[38][23]Consistent with the overall goal of increasingfirm value, the decisioning here focuses on whether the investment in question is worthy of funding through the firm's capitalization structures (debt, equity or retained earnings as above).
To be considered acceptable, the investment must bevalue additivere: (i) improvedoperating profitandcash flows; as combined with (ii) anynewfunding commitments and capital implications.
Re the latter: if the investment is large in the context of the firm as a whole, then the discount rate applied by outside investors to the (private) firm's equity may be adjusted upwards to reflect the new level of risk,[39]thus impacting future financing activities and overall valuation.
More sophisticated treatments will thus produce accompanyingsensitivity- andrisk metrics, and will incorporate anyinherent contingencies.
The focus of capital budgeting is on major "projects" - ofteninvestments in other firms, or expansion into new marketsor geographies- but may extend also tonew plants, new / replacement machinery,new products, andresearch and developmentprograms;
day to dayoperational expenditureis the realm offinancial managementasbelow.
DCF valuation formula, where the value of the firm or project is the sum of its forecasted free cash flows discounted to the present using the weighted average cost of capital, i.e. cost of equity and cost of debt, with the former (often) derived using the CAPM. The final part is the terminal value, aggregating all cash flows beyond the explicit forecast period, for an appropriate long-term growth in earnings.
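In generic textbook form (notation follows the caption above; the Gordon-growth terminal value shown is one common choice, not the only one):

\text{Value}\;=\;\sum_{t=1}^{n}\frac{FCF_{t}}{(1+\text{WACC})^{t}}\;+\;\frac{TV_{n}}{(1+\text{WACC})^{n}},\qquad TV_{n}=\frac{FCF_{n+1}}{\text{WACC}-g}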
In general,[40]each "project's" value will be estimated using adiscounted cash flow(DCF) valuation, and the opportunity with the highest value, as measured by the resultantnet present value(NPV) will be selected (first applied in a corporate finance setting byJoel Deanin 1951). This requires estimating the size and timing of all of theincrementalcash flowsresulting from the project. Such future cash flows are thendiscountedto determine theirpresent value(seeTime value of money). These present values are then summed, and this sum net of the initial investment outlay is theNPV. SeeFinancial modeling § Accountingfor general discussion, andValuation using discounted cash flowsfor the mechanics, with discussion re modifications for corporate finance.
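As a minimal illustration of these mechanics, the following C sketch nets discounted incremental cash flows against an initial outlay; the hurdle rate, outlay and cash-flow figures are invented for illustration only.

#include <stdio.h>
#include <math.h>

/* Discount a stream of incremental free cash flows at a hurdle rate and
 * net the result against the initial investment outlay. */
static double npv(double rate, const double *cash_flows, int years, double outlay)
{
    double pv = 0.0;
    for (int t = 1; t <= years; t++)
        pv += cash_flows[t - 1] / pow(1.0 + rate, t);
    return pv - outlay;
}

int main(void)
{
    double cf[5] = {300.0, 320.0, 340.0, 360.0, 380.0};  /* illustrative forecasts */
    double value = npv(0.10, cf, 5, 1000.0);             /* 10% hurdle rate        */
    printf("Project NPV = %.2f\n", value);               /* accept if NPV > 0      */
    return 0;
}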
The NPV is greatly affected by thediscount rate. Thus, identifying the proper discount rate – often termed, the project "hurdle rate"[41]– is critical to choosing appropriate projects and investments for the firm. The hurdle rate is the minimum acceptablereturnon an investment – i.e., theproject appropriate discount rate. The hurdle rate should reflect the riskiness of the investment, typically measured byvolatilityof cash flows, and must take into account the project-relevant financing mix.[42]Managers use models such as theCAPMor theAPTto estimate a discount rate appropriate for a particular project, and use theweighted average cost of capital(WACC) to reflect the financing mix selected. (A common error in choosing a discount rate for a project is to apply a WACC that applies to the entire firm. Such an approach may not be appropriate where the risk of a particular project differs markedly from that of the firm's existing portfolio of assets.)
In conjunction with NPV, there are several other measures used as (secondary)selection criteriain corporate finance; seeCapital budgeting § Ranked projects. These are visible from the DCF and includediscounted payback period,IRR,Modified IRR,equivalent annuity,capital efficiency, andROI.
Alternatives (complements) to the standard DCF, modeleconomic profitas opposed tofree cash flow; these includeresidual income valuation,MVA/EVA(Joel Stern,Stern Stewart & Co) andAPV(Stewart Myers). With the cost of capital correctly and correspondingly adjusted, these valuations should yield the same result as the DCF. These may, however, be considered more appropriate for projects with negative free cash flow several years out, but which are expected to generate positive cash flow thereafter (and may also be less sensitive to terminal value).
Given theuncertaintyinherent in project forecasting and valuation,[43][44][45]analysts will wish to assess thesensitivityof project NPV to the various inputs (i.e. assumptions) to the DCFmodel. In a typicalsensitivity analysisthe analyst will vary one key factor while holding all other inputs constant,ceteris paribus. The sensitivity of NPV to a change in that factor is then observed, and is calculated as a "slope": ΔNPV / Δfactor. For example, the analyst will determine NPV at variousgrowth ratesinannual revenueas specified (usually at set increments, e.g. -10%, -5%, 0%, 5%...), and then determine the sensitivity using this formula. Often, several variables may be of interest, and their various combinations produce a "value-surface"[46](or even a "value-space"), where NPV is then afunction of several variables. See alsoStress testing.
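A sketch of such a one-factor sensitivity run, again with invented numbers: only the revenue growth rate is varied, and the "slope" ΔNPV / Δfactor is reported between consecutive increments.

#include <stdio.h>
#include <math.h>

/* NPV of a project whose year-1 free cash flow grows at rate g, discounted at r. */
static double npv_growth(double g, double base_cf, double r, int years, double outlay)
{
    double pv = 0.0;
    for (int t = 1; t <= years; t++)
        pv += base_cf * pow(1.0 + g, t - 1) / pow(1.0 + r, t);
    return pv - outlay;
}

int main(void)
{
    double growth[5] = {-0.10, -0.05, 0.00, 0.05, 0.10};   /* set increments */
    double prev = 0.0;
    for (int i = 0; i < 5; i++) {
        double v = npv_growth(growth[i], 300.0, 0.10, 5, 1000.0);
        if (i > 0)   /* "slope": change in NPV per unit change in the factor */
            printf("g = %+.2f  NPV = %8.2f  dNPV/dg ~ %8.2f\n",
                   growth[i], v, (v - prev) / (growth[i] - growth[i - 1]));
        else
            printf("g = %+.2f  NPV = %8.2f\n", growth[i], v);
        prev = v;
    }
    return 0;
}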
Using a related technique, analysts also runscenario basedforecasts of NPV. Here, a scenario comprises a particular outcome for economy-wide, "global" factors (demand for the product,exchange rates,commodity prices, etc.)as well asfor company-specific factors (unit costs, etc.). As an example, the analyst may specify various revenue growth scenarios (e.g. -5% for "Worst Case", +5% for "Likely Case" and +15% for "Best Case"), where all key inputs are adjusted so as to be consistent with the growth assumptions, and calculate the NPV for each. Note that for scenario based analysis, the various combinations of inputs must beinternally consistent(seediscussionatFinancial modeling), whereas for the sensitivity approach these need not be so. An application of this methodology is to determine an "unbiased" NPV, where management determines a (subjective) probability for each scenario – the NPV for the project is then theprobability-weighted averageof the various scenarios; seeFirst Chicago Method. (See alsorNPV, where cash flows, as opposed to scenarios, are probability-weighted.)
A further advancement which "overcomes the limitations of sensitivity and scenario analyses by examining the effects of all possible combinations of variables and their realizations"[47]is to constructstochastic[48]orprobabilisticfinancial models – as opposed to the traditional static anddeterministicmodels as above.[44]For this purpose, the most common method is to useMonte Carlo simulationto analyze the project's NPV. This method was introduced to finance byDavid B. Hertzin 1964, although it has only recently become common: today analysts are even able to run simulations inspreadsheetbased DCF models, typically using a risk-analysisadd-in, such as@RiskorCrystal Ball. Here, the cash flow components that are (heavily) impacted by uncertainty are simulated, mathematically reflecting their "random characteristics". In contrast to the scenario approach above, the simulation produces severalthousandrandombut possible outcomes, or trials, "covering all conceivable real world contingencies in proportion to their likelihood;"[49]seeMonte Carlo Simulation versus "What If" Scenarios. The output is then ahistogramof project NPV, and the average NPV of the potential investment – as well as itsvolatilityand other sensitivities – is then observed. This histogram provides information not visible from the static DCF: for example, it allows for an estimate of the probability that a project has a net present value greater than zero (or any other value).
Continuing the above example: instead of assigning three discrete values to revenue growth, and to the other relevant variables, the analyst would assign an appropriateprobability distributionto each variable (commonlytriangularorbeta), and, where possible, specify the observed or supposedcorrelationbetween the variables. These distributions would then be "sampled" repeatedly –incorporating this correlation– so as to generate several thousand random but possible scenarios, with corresponding valuations, which are then used to generate the NPV histogram. The resultant statistics (averageNPV andstandard deviationof NPV) will be a more accurate mirror of the project's "randomness" than the variance observed under the scenario based approach. (These are often used as estimates of theunderlying"spot price" and volatility for the real option valuation below; seeReal options valuation § Valuation inputs.) A more robust Monte Carlo model would include the possible occurrence of risk events - e.g., acredit crunch- that drive variations in one or more of the DCF model inputs.
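A minimal Monte Carlo sketch in this spirit, with a single uncertain input, invented parameters, and a plain triangular sampler; a production model would simulate several correlated inputs, typically via a spreadsheet add-in such as those mentioned above.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Sample from a triangular distribution on [a, b] with mode c (inverse-CDF method). */
static double triangular(double a, double b, double c)
{
    double u = rand() / (double)RAND_MAX;
    double f = (c - a) / (b - a);
    return (u < f) ? a + sqrt(u * (b - a) * (c - a))
                   : b - sqrt((1.0 - u) * (b - a) * (b - c));
}

static double npv_growth(double g, double base_cf, double r, int years, double outlay)
{
    double pv = 0.0;
    for (int t = 1; t <= years; t++)
        pv += base_cf * pow(1.0 + g, t - 1) / pow(1.0 + r, t);
    return pv - outlay;
}

int main(void)
{
    enum { TRIALS = 10000 };
    double sum = 0.0, sum_sq = 0.0;
    int positive = 0;
    srand(12345);                                   /* fixed seed for reproducibility */

    for (int i = 0; i < TRIALS; i++) {
        double g = triangular(-0.10, 0.15, 0.05);   /* uncertain revenue growth      */
        double v = npv_growth(g, 300.0, 0.10, 5, 1000.0);
        sum += v;  sum_sq += v * v;
        if (v > 0.0) positive++;
    }
    double mean = sum / TRIALS;
    double sd   = sqrt(sum_sq / TRIALS - mean * mean);
    printf("mean NPV = %.2f, sd = %.2f, P(NPV > 0) = %.2f\n",
           mean, sd, (double)positive / TRIALS);
    return 0;
}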
Often - for exampleR&Dprojects - a project may open (or close) various paths of action to the company, but this reality will not (typically) be captured in a strict NPV approach.[50]Some analysts account for this uncertainty by[43]adjusting the discount rate (e.g. by increasing thecost of capital) or the cash flows (usingcertainty equivalents, or applying (subjective) "haircuts" to the forecast numbers; seePenalized present value).[51][52]Even when employed, however, these latter methods do not normally properly account for changes in risk over the project's lifecycle and hence fail to appropriately adapt the risk adjustment.[53][54]Management will therefore (sometimes) employ tools which place an explicit value on these options. So, whereas in a DCF valuation themost likelyor average orscenario specificcash flows are discounted, here the "flexible and staged nature" of the investment ismodelled, and hence "all" potentialpayoffsare considered. SeefurtherunderReal options valuation. The difference between the two valuations is the "value of flexibility" inherent in the project.
The two most common tools areDecision Tree Analysis(DTA)[43]andreal options valuation(ROV);[55]they may often be used interchangeably:
Dividend policy is concerned with financial policies regarding the payment of a cash dividend in the present, orretaining earningsand then paying an increased dividend at a later stage.
The policy will be set based upon the type of company and what management determines is the best use of those dividend resources for the firm and its shareholders.
Practical and theoretical considerations - interacting with the above funding and investment decisioning, and re overall firm value - will inform this thinking.[56][57]
In general, whether[58]to issue dividends,[56]and what amount, is determined on the basis of the company's unappropriatedprofit(excess cash) and influenced by the company's long-term earning power. In all instances, as above, the appropriate dividend policy is in parallel directed by that which maximizes long-term shareholder value.
When cash surplus exists and is not needed by the firm, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.
Thus, if there are no NPV positive opportunities, i.e. projects wherereturnsexceed the hurdle rate, and excess cash surplus is not needed, then management should return (some or all of) the excess cash to shareholders as dividends.
This is the general case, however the"style" of the stockmay also impact the decision. Shareholders of a "growth stock", for example, expect that the company will retain (most of) the excess cash surplus so as to fund future projects internally to help increase the value of the firm. Shareholders ofvalue-or secondary stocks, on the other hand, would prefer management to pay surplus earnings in the form of cash dividends, especially when a positive return cannot be earned through the reinvestment of undistributed earnings; ashare buybackprogram may be accepted when the value of the stock is greater than the returns to be realized from the reinvestment of undistributed profits.
Management will also choose theformof the dividend distribution, as stated, generally as cashdividendsor via ashare buyback. Various factors may be taken into consideration: where shareholders must paytax on dividends, firms may elect to retain earnings or to perform a stock buyback, in both cases increasing the value of shares outstanding. Alternatively, some companies will pay "dividends" fromstockrather than in cash or via ashare buybackas mentioned; seeCorporate action.
As forcapital structure above, there are severalschools of thoughton dividends, in particular re their impact on firm value.[56]A key consideration will be whether there are any tax disadvantages associated with dividends: i.e. dividends attract a higher tax rate as compared, e.g., tocapital gains; seedividend taxandRetained earnings § Tax implications.
Here, per the abovementionedModigliani–Miller theorem:
if there are no such disadvantages - and companies can raise equity finance cheaply, i.e. canissue stockat low cost - then dividend policy is value neutral;
if dividends suffer a tax disadvantage, then increasing dividends should reduce firm value.
Regardless, but particularly in the second (more realistic) case, other considerations apply.
The first set of these, relates to investor preferences and behavior (seeClientele effect).
Investors are seen to prefer a “bird in the hand” - i.e. cash dividends are certain as compared to income from future capital gains - and in fact, commonly employ some form ofdividend valuation modelin valuing shares.
Relatedly, investors will then prefer astableor "smooth" dividend payout - as far as is reasonable given earnings prospectsand sustainability- which will then positively impact share price; seeLintner model.
Cash dividends may also allow management to convey(insider)information about corporate performance; and increasing a company's dividend payout may then predict (or lead to) favorable performance of the company's stock in the future; seeDividend signaling hypothesis
The second set relates to management's thinking re capital structure and earnings, overlappingthe above.
Under a"Residual dividend policy"- i.e. as contrasted with a "smoothed" payout policy - the firm will use retained profits to finance capital investments if cheaper than the same via equity financing; see againPecking order theory.
Similarly, under theWalter model, dividends are paid only if capital retained will earn a higher return than that available to investors (proxied:ROE>Ke).
Management may also want to "manipulate" the capital structure - in this context, by paying or not paying dividends - such thatearnings per shareare maximized; see again,Capital structure substitution theory.
Managing the corporation'sworking capitalposition so as to sustain ongoing business operations is referred to asworking capital management.[59][60]This entails, essentially, managing the relationship between a firm'sshort-term assetsand itsshort-term liabilities, conscious of various considerations.
Here, as above, the goal of Corporate Finance is the maximization of firm value. In the context of long term, capital budgeting, firm value is enhanced through appropriately selecting and funding NPV positive investments. These investments, in turn, have implications in terms of cash flow andcost of capital.
The goal of Working Capital (i.e. short term) management is therefore to ensure that the firm is able tooperate, and that it has sufficient cash flow to service long-term debt, and to satisfy both maturingshort-term debtand upcoming operational expenses. In so doing, firm value is enhanced when, and if, thereturn on capitalexceeds the cost of capital; SeeEconomic value added(EVA). Managing short term finance along with long term finance is therefore one task of a modern CFO.
Working capital is the amount of funds that are necessary for an organization to continue its ongoing business operations, until the firm is reimbursed through payments for the goods or services it has delivered to its customers.[61]Working capital is measured through the difference between resources in cash or readily convertible into cash (Current Assets), and cash requirements (Current Liabilities). As a result, capital resource allocations relating to working capital are always current, i.e. short-term.
In addition totime horizon, working capital management differs from capital budgeting in terms ofdiscountingand profitability considerations; decisions here are also "reversible" to a much larger extent. (Considerations as torisk appetiteand return targets remain identical, although some constraints – such as those imposed byloan covenants– may be more relevant here).
The (short term) goals of working capital are therefore not approached on the same basis as (long term) profitability, and working capital management applies different criteria in allocating resources: the main considerations are (1) cash flow / liquidity and (2) profitability / return on capital (of which cash flow is probably the most important).
Guided by the above criteria, management will use a combination of policies and techniques for the management of working capital.[62]These policies, as outlined, aim at managing thecurrent assets(generallycashandcash equivalents,inventoriesanddebtors) and the short term financing, such that cash flows and returns are acceptable.[60]
As discussed, corporate finance comprises the activities, analytical methods, and techniques that deal with the company's long-term investments, finances and capital.
Re the latter,when capital must be raisedfor the corporation or shareholders, the "corporate finance team" will engage[64]itsinvestment bank.
The bank will then facilitate the required share listing (IPO or SEO) or bond issuance, as appropriate given the above analysis.
Thereafter the bank will work closely with the corporate re servicing the new securities, and managing its presence in the capital markets more generally
(offering advisory, financial advisory, deal advisory, and / or transaction advisory[65]services).
Use of the term "corporate finance", correspondingly, varies considerably across the world.
In theUnited States, "Corporate Finance" corresponds to the first usage.
Aprofessional heremay be referred to as a "corporate finance analyst" and will typically be based in theFP&Aarea, reporting to theCFO.[64][66]SeeFinancial analyst § Financial planning and analysis.
In theUnited KingdomandCommonwealthcountries,[65]on the other hand, "corporate finance" and "corporate financier" are associated withinvestment banking.
Financial risk management,[48][67]generally, is focused on measuring and managingmarket risk,credit riskandoperational risk.
Within corporates[67](i.e. as opposed to banks), the scope extends to preserving (and enhancing) the firm'seconomic value.[68]It will then overlap both corporate finance andenterprise risk management: addressing risks to the firm's overallstrategic objectives,
by focusing on the financial exposures and opportunities arising from business decisions, and their link to the firm’sappetite for risk, as well as their impact onshare price.
(In large firms, Risk Management typically exists as anindependent function, with theCROconsulted on capital-investment and other strategic decisions.)
Re corporate finance, both operational and funding issues are addressed; respectively:
Broadly,corporate governanceconsiders the mechanisms, processes, practices, and relations by which corporations are controlled and operated by theirboard of directors, managers,shareholders, and other stakeholders.
In the context of corporate finance,[71]a more specific concern will be that executives do not "serve their own vested interests" to the detriment of capital providers.[72]There are several considerations:
In general, here, debt may be seen as "an internal means of controlling management", which has to work hard to ensure that repayments are met,[74]balancing these interests, and also limiting the possibility of overpaying on investments.
GrantingExecutive stock options,[75]alternatively or in parallel, is seen as a mechanism to align management with stockholder interests.
A more formal treatment is offered underagency theory,[76]where these problems and approaches can be seen, and hence analysed, asreal options;[77]seePrincipal–agent problem § Options frameworkfor discussion.
|
https://en.wikipedia.org/wiki/Corporate_finance#Valuing_flexibility
|
Quantum cognition uses the mathematical formalism of quantum probability theory to model psychological phenomena when classical probability theory fails.[1]The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory),[2]and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals).[3]Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.[4][5]
Quantum cognition can be applied to model cognitive phenomena such asinformation processing[6]by thehuman brain,language,decision making,[7]human memory,conceptsand conceptual reasoning, humanjudgment, andperception.[8][9][10]
Classical probability theory is a rational approach to inference which does not easily explain some observations of human inference in psychology.
Some cases where quantum probability theory has advantages include theconjunction fallacy, thedisjunction fallacy, the failures of thesure-thing principle, andquestion-order biasin judgement.[1]: 752
If participants in a psychology experiment are told about "Linda", described as looking like a feminist but not like a bank teller, then asked to rank the probability P that Linda is a feminist, a bank teller, or a feminist and a bank teller, they respond with values that indicate: P(feminist) > P(feminist & bank teller) > P(bank teller). Rational classical probability theory makes the incorrect prediction: it expects humans to rank the conjunction less probable than the bank teller option. Many variations of this experiment demonstrate that the fallacy represents human cognition in this case and not an artifact of one presentation.[1]: 753
Quantum cognition models this probability-estimation scenario with quantum probability theory, which always ranks the sequential probability, P(feminist & bank teller), greater than the direct probability, P(bank teller). The idea is that a person's understanding of "bank teller" is affected by the context of the question involving "feminist".[1]: 753 The two questions are "incompatible": to treat them with classical theory would require separate reasoning steps.[11]
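A toy C sketch of this account, using a two-dimensional real state space with made-up angles for the "feminist" and "bank teller" axes (the numbers are illustrative, not fitted to any data); the sequential projection yields a higher probability than the direct one.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    double linda  = 0.0;                 /* |Linda> points along angle 0                  */
    double fem    = 10.0 * PI / 180.0;   /* "feminist" axis: close to |Linda>             */
    double teller = 80.0 * PI / 180.0;   /* "bank teller" axis: nearly orthogonal to it   */

    /* Direct judgement: project |Linda> straight onto the bank-teller axis. */
    double p_teller = pow(cos(teller - linda), 2);

    /* Sequential judgement: project onto "feminist" first, then onto "bank teller".
     * Because the two questions are incompatible (non-commuting projectors),
     * this path can be more probable than the direct projection. */
    double p_fem_then_teller = pow(cos(fem - linda), 2) * pow(cos(teller - fem), 2);

    printf("P(bank teller)            = %.3f\n", p_teller);
    printf("P(feminist & bank teller) = %.3f\n", p_fem_then_teller);
    return 0;
}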
The quantum cognition concept is based on the observation that various cognitive phenomena are more adequately described by quantum probability theory than by the classical probability theory (see examples below). Thus, the quantum formalism is considered an operational formalism that describes non-classical processing of probabilistic data.
Here, contextuality is the key word (see the monograph of Khrennikov for detailed representation of this viewpoint).[8]Quantum mechanics is fundamentally contextual.[12]Quantum systems do not have objective properties which can be defined independently of measurement context. As has been pointed out byNiels Bohr, the whole experimental arrangement must be taken into account. Contextuality implies existence of incompatible mental variables, violation of the classical law of total probability, and constructive or destructive interference effects. Thus, the quantum cognition approach can be considered an attempt to formalize contextuality of mental processes, by using the mathematical apparatus of quantum mechanics.
Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:
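Subjects who were told they had won the first round mostly chose to play again, and subjects who were told they had lost the first round also mostly chose to play again.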
Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round.[13]But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round.[14]This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from the double-slit experiment in quantum physics.[9][15][16]Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.[17]
The above deviations from classical rational expectations in agents' decisions under uncertainty produce well-known paradoxes in behavioral economics, that is, the Allais, Ellsberg and Machina paradoxes.[18][19][20]These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a way that is neither predictable nor controllable. A decision process is thus an intrinsically contextual process, hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.[21][22][23][16]
Considering automated decision making, quantum decision trees have a different structure compared to classical decision trees. Data can be analyzed to see if a quantum decision tree model fits the data better.[24]
Quantum probability provides a new way to explain human probability judgment errors including the conjunction and disjunction errors.[25]A conjunction error occurs when a person judges the probability of a likely event L and an unlikely event U together to be greater than that of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event L or an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms.[26]The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments.[27][28][29]
The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-calledliar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.[30][31]
Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding.Cognitive psychologyhas researched different approaches forunderstanding conceptsincluding exemplars, prototypes, andneural networks, and different fundamental problems have been identified, such as the experimentally tested non classical behavior for the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect,[32]and the overextension and underextension of typicality and membership weight for conjunction and disjunction.[33][34]By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.
The large amount of data collected by Hampton[33][34]on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and under- extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence.[27][40][41][42]And, more, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.[43][44]
The research in (iv) had a deep impact on the understanding and initial development of a formalism to obtain semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum ofnatural language processing(NLP) andinformation retrieval(IR) on the web – and data bases in general – can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) K. Van Rijsbergen introduced a quantum structure approach to IR,[45](b) Widdows and Peters utilised a quantum logical negation for a concrete search system,[38][46]and Aerts and Czachor identified quantum structure in semantic space theories, such aslatent semantic analysis.[47]Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory – Hilbert space, quantum logic and probability, non-commutative algebras, etc. – in fields such as IR and NLP, has produced significant results.[48]
Ideas for applying the formalisms of quantum theory to cognition first appeared in the 1990s byDiederik Aertsand his collaborators Jan Broekaert,Sonja Smetsand Liane Gabora, by Harald Atmanspacher, Robert Bordley, andAndrei Khrennikov. A special issue onQuantum Cognition and Decisionappeared in theJournal of Mathematical Psychology(2009, vol 53.), which planted a flag for the field. A few books related to quantum cognition have been published including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), E. Conte (2012). The first Quantum Interaction workshop was held atStanfordin 2007 organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007AAAISpring Symposium Series. This was followed by workshops atOxfordin 2008,Saarbrückenin 2009, at the 2010 AAAI Fall Symposium Series held inWashington, D.C., 2011 inAberdeen, 2012 inParis, and 2013 inLeicester. Tutorials also were presented annually beginning in 2007 until 2013 at the annual meeting of theCognitive Science Society. ASpecial Issue on Quantum models of Cognitionappeared in 2013 in the journalTopics in Cognitive Science.
|
https://en.wikipedia.org/wiki/Quantum_cognition
|
In computer programming, the exclusive or swap (sometimes shortened to XOR swap) is an algorithm that uses the exclusive or bitwise operation to swap the values of two variables without using the temporary variable which is normally required.
The algorithm is primarily a novelty and a way of demonstrating properties of the exclusive or operation. It is sometimes discussed as a program optimization, but there are almost no cases where swapping via exclusive or provides a benefit over the standard, obvious technique.
Conventional swapping requires the use of a temporary storage variable. Using the XOR swap algorithm, however, no temporary storage is needed. The algorithm is as follows:[1][2]
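X := X XOR Y
Y := Y XOR X
X := X XOR Y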
Since XOR is a commutative operation, either X XOR Y or Y XOR X can be used interchangeably in any of the foregoing three lines. Note that on some architectures the first operand of the XOR instruction specifies the target location at which the result of the operation is stored, preventing this interchangeability. The algorithm typically corresponds to three machine-code instructions, represented by corresponding pseudocode and assembly instructions in the three rows of the following table:
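In pseudocode and in three assembly languages (register assignments as described in the next paragraph; the mnemonics shown are the usual XOR-register instructions of each architecture):

Pseudocode      IBM System/370      x86              RISC-V
X := X XOR Y    XR R1,R2            xor eax, ebx     xor x10, x10, x11
Y := Y XOR X    XR R2,R1            xor ebx, eax     xor x11, x11, x10
X := X XOR Y    XR R1,R2            xor eax, ebx     xor x10, x10, x11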
In the above System/370 assembly code sample, R1 and R2 are distinct registers, and each XR operation leaves its result in the register named in the first argument. Using x86 assembly, values X and Y are in registers eax and ebx (respectively), and xor places the result of the operation in the first register. In RISC-V assembly, values X and Y are in registers X10 and X11, and xor places the result of the operation in the first register (same as x86).
However, in the pseudocode or high-level language version or implementation, the algorithm fails if x and y use the same storage location, since the value stored in that location will be zeroed out by the first XOR instruction, and then remain zero; it will not be "swapped with itself". This is not the same as if x and y have the same values. The trouble only comes when x and y use the same storage location, in which case their values must already be equal. That is, if x and y use the same storage location, then the line:
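X := X XOR Y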
sets x to zero (because x = y, so X XOR Y is zero) and sets y to zero (since it uses the same storage location), causing x and y to lose their original values.
The binary operation XOR over bit strings of length N exhibits the following properties (where ⊕ denotes XOR):[a]
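L1. Commutativity: A ⊕ B = B ⊕ A
L2. Associativity: (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)
L3. Identity exists: there is a bit string of length N, 0 (consisting entirely of zeros), such that A ⊕ 0 = A for any A
L4. Each element is its own inverse: for any A, A ⊕ A = 0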
Suppose that we have two distinct registers R1 and R2 as in the table below, with initial values A and B respectively. We perform the operations below in sequence, and reduce our results using the properties listed above.
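Step 1: R1 := R1 XOR R2, leaving R1 = A ⊕ B and R2 = B.
Step 2: R2 := R2 XOR R1, leaving R2 = B ⊕ (A ⊕ B) = A (by L2, L1, L4 and L3) and R1 = A ⊕ B.
Step 3: R1 := R1 XOR R2, leaving R1 = (A ⊕ B) ⊕ A = B and R2 = A.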
As XOR can be interpreted as binary addition and a pair of bits can be interpreted as a vector in a two-dimensional vector space over the field with two elements, the steps in the algorithm can be interpreted as multiplication by 2×2 matrices over the field with two elements. For simplicity, assume initially that x and y are each single bits, not bit vectors.
For example, the step:
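X := X XOR Y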
which also has the implicit:
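Y := Y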
corresponds to the matrix(1101){\displaystyle \left({\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right)}as
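\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x\oplus y\\y\end{pmatrix}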
The sequence of operations is then expressed as:
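\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}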
(working with binary values, so 1 + 1 = 0), which expresses the elementary matrix of switching two rows (or columns) in terms of the transvections (shears) of adding one element to the other.
To generalize to where X and Y are not single bits, but instead bit vectors of length n, these 2×2 matrices are replaced by 2n×2n block matrices such as \begin{pmatrix}I_{n}&I_{n}\\0&I_{n}\end{pmatrix}.
These matrices are operating onvalues,not onvariables(with storage locations), hence this interpretation abstracts away from issues of storage location and the problem of both variables sharing the same storage location.
A C function that implements the XOR swap algorithm:
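One such implementation (the function and parameter names are illustrative), with a guard clause against aliasing as described below:

void xor_swap(int *x, int *y)
{
    if (x == y)
        return;      /* guard clause: XOR-swapping a location with itself would zero it */
    *x ^= *y;
    *y ^= *x;
    *x ^= *y;
}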
The code first checks if the addresses are distinct and uses a guard clause to exit the function early if they are equal. Without that check, if they were equal, the algorithm would fold to a triple *x ^= *x, resulting in zero.
The XOR swap algorithm can also be defined with a macro:
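For example, a sketch of such a macro (the macro name is illustrative); the comma operator applies the three XOR steps in sequence, and the address check guards against the aliasing problem above:

#define XOR_SWAP(a, b) \
    ((&(a) == &(b)) ? (a) : ((a) ^= (b), (b) ^= (a), (a) ^= (b)))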
On modern CPU architectures, the XOR technique can be slower than using a temporary variable to do swapping. At least on recent x86 CPUs, both by AMD and Intel, moving between registers regularly incurs zero latency. (This is called MOV-elimination.) Even if there is not any architectural register available to use, the XCHG instruction will be at least as fast as the three XORs taken together. Another reason is that modern CPUs strive to execute instructions in parallel via instruction pipelines. In the XOR technique, the inputs to each operation depend on the results of the previous operation, so they must be executed in strictly sequential order, negating any benefits of instruction-level parallelism.[3]
The XOR swap is also complicated in practice byaliasing. If an attempt is made to XOR-swap the contents of some location with itself, the result is that the location is zeroed out and its value lost. Therefore, XOR swapping must not be used blindly in a high-level language if aliasing is possible. This issue does not apply if the technique is used in assembly to swap the contents of two registers.
Similar problems occur withcall by name, as inJensen's Device, where swappingiandA[i]via a temporary variable yields incorrect results due to the arguments being related: swapping viatemp = i; i = A[i]; A[i] = tempchanges the value foriin the second statement, which then results in the incorrectivalue forA[i]in the third statement.
The underlying principle of the XOR swap algorithm can be applied to any operation meeting criteria L1 through L4 above. Replacing XOR by addition and subtraction gives various slightly different, but largely equivalent, formulations. For example:[4]
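A sketch in C of the addition-based variant (the name AddSwap matches the later discussion; unsigned operands are assumed so that wraparound is well defined):

/* Swap using addition and subtraction; relies on unsigned modular arithmetic. */
void AddSwap(unsigned int *x, unsigned int *y)
{
    if (x != y) {        /* aliasing guard, as in the XOR version */
        *x = *x + *y;
        *y = *x - *y;    /* *y now holds the original *x */
        *x = *x - *y;    /* *x now holds the original *y */
    }
}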
Unlike the XOR swap, this variation requires that the underlying processor or programming language uses a method such as modular arithmetic or bignums to guarantee that the computation of X + Y cannot cause an error due to integer overflow. Therefore, it is seen even more rarely in practice than the XOR swap.
However, the implementation of AddSwap above in the C programming language always works even in case of integer overflow, since, according to the C standard, addition and subtraction of unsigned integers follow the rules of modular arithmetic, i.e. are done in the cyclic group Z/2sZ{\displaystyle \mathbb {Z} /2^{s}\mathbb {Z} } where s{\displaystyle s} is the number of bits of unsigned int. Indeed, the correctness of the algorithm follows from the fact that the formulas (x+y)−y=x{\displaystyle (x+y)-y=x} and (x+y)−((x+y)−y)=y{\displaystyle (x+y)-((x+y)-y)=y} hold in any abelian group. This generalizes the proof for the XOR swap algorithm: XOR is both the addition and subtraction in the abelian group (Z/2Z)s{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{s}} (which is the direct sum of s copies of Z/2Z{\displaystyle \mathbb {Z} /2\mathbb {Z} }).
This doesn't hold for the signed int type (the default for int). Signed integer overflow is undefined behavior in C, so modular arithmetic is not guaranteed by the standard, which may lead to incorrect results.
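A hypothetical worked illustration of the unsigned case, using 8-bit values so that arithmetic is modulo 256: with x = 200 and y = 100, the first step gives x = 200 + 100 = 300 ≡ 44 (mod 256); the second gives y = 44 − 100 ≡ 200 (mod 256), the original x; the third gives x = 44 − 200 ≡ 100 (mod 256), the original y. The intermediate overflow wraps around, but the final values are swapped correctly.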
The sequence of operations in AddSwap can be expressed via matrix multiplication (with the rightmost factor applied first) as:
{\displaystyle \left({\begin{smallmatrix}1&-1\\0&1\end{smallmatrix}}\right)\left({\begin{smallmatrix}1&0\\1&-1\end{smallmatrix}}\right)\left({\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right)=\left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)}
Because it avoids the use of an extra temporary register, the XOR swap algorithm is required for optimal register allocation on architectures lacking a dedicated swap instruction. This is particularly important for compilers using static single assignment form for register allocation; these compilers occasionally produce programs that need to swap two registers when no registers are free. The XOR swap algorithm avoids the need to reserve an extra register or to spill any registers to main memory.[5] The addition/subtraction variant can also be used for the same purpose.[6]
This method of register allocation is particularly relevant to GPU shader compilers. On modern GPU architectures, spilling variables is expensive due to limited memory bandwidth and high memory latency, while limiting register usage can improve performance due to dynamic partitioning of the register file. The XOR swap algorithm is therefore required by some GPU compilers.[7]
|
https://en.wikipedia.org/wiki/XOR_swap_algorithm
|
In mathematics, anticommutativity is a specific property of some non-commutative mathematical operations. Swapping the position of two arguments of an antisymmetric operation yields a result which is the inverse of the result with unswapped arguments. The notion inverse refers to a group structure on the operation's codomain, possibly with another operation. Subtraction is an anticommutative operation because commuting the operands of a − b gives b − a = −(a − b); for example, 2 − 10 = −(10 − 2) = −8. Another prominent example of an anticommutative operation is the Lie bracket.
In mathematical physics, where symmetry is of central importance, or even just in multilinear algebra, these operations are mostly (multilinear with respect to some vector structures, and then) called antisymmetric operations, and when they are not already of arity greater than two, they are extended in an associative setting to cover more than two arguments.
If A,B{\displaystyle A,B} are two abelian groups, a bilinear map f:A2→B{\displaystyle f\colon A^{2}\to B} is anticommutative if for all x,y∈A{\displaystyle x,y\in A} we have
{\displaystyle f(x,y)=-f(y,x).}
More generally, a multilinear map g:An→B{\displaystyle g:A^{n}\to B} is anticommutative if for all x1,…xn∈A{\displaystyle x_{1},\dots x_{n}\in A} we have
{\displaystyle g(x_{\sigma (1)},\dots ,x_{\sigma (n)})={\text{sgn}}(\sigma )\,g(x_{1},\dots ,x_{n}),}
where sgn(σ){\displaystyle {\text{sgn}}(\sigma )} is the sign of the permutation σ{\displaystyle \sigma }.
If the abelian group B{\displaystyle B} has no 2-torsion, implying that if x=−x{\displaystyle x=-x} then x=0{\displaystyle x=0}, then any anticommutative bilinear map f:A2→B{\displaystyle f\colon A^{2}\to B} satisfies
{\displaystyle f(x,x)=0.}
More generally, by transposing two elements, any anticommutative multilinear map g:An→B{\displaystyle g\colon A^{n}\to B} satisfies
{\displaystyle g(x_{1},\dots ,x_{n})=0}
if any of the xi{\displaystyle x_{i}} are equal; such a map is said to be alternating. Conversely, using multilinearity, any alternating map is anticommutative. In the binary case this works as follows: if f:A2→B{\displaystyle f\colon A^{2}\to B} is alternating then by bilinearity we have
{\displaystyle 0=f(x+y,x+y)=f(x,x)+f(x,y)+f(y,x)+f(y,y)=f(x,y)+f(y,x),}
so f(x,y)=−f(y,x){\displaystyle f(x,y)=-f(y,x)}; the proof in the multilinear case is the same, applied to just two of the inputs at a time.
Examples of anticommutative binary operations include:
|
https://en.wikipedia.org/wiki/Anticommutativity
|
Gradient boostingis amachine learningtechnique based onboostingin a functional space, where the target ispseudo-residualsinstead ofresidualsas in traditional boosting. It gives a prediction model in the form of anensembleof weak prediction models, i.e., models that make very few assumptions about the data, which are typically simpledecision trees.[1][2]When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees; it usually outperformsrandom forest.[1]As with otherboostingmethods, a gradient-boosted trees model is built in stages, but it generalizes the other methods by allowing optimization of an arbitrarydifferentiableloss function.
The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function.[3] Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman[4][2] (in 1999 and later in 2001), simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean.[5][6] The latter two papers introduced the view of boosting algorithms as iterative functional gradient descent algorithms; that is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification.
(This section follows the exposition by Cheng Li.[7])
Like other boosting methods, gradient boosting combines weak "learners" into a single strong learner iteratively. It is easiest to explain in theleast-squaresregressionsetting, where the goal is to teach a modelF{\displaystyle F}to predict values of the formy^=F(x){\displaystyle {\hat {y}}=F(x)}by minimizing themean squared error1n∑i(y^i−yi)2{\displaystyle {\tfrac {1}{n}}\sum _{i}({\hat {y}}_{i}-y_{i})^{2}}, wherei{\displaystyle i}indexes over some training set of sizen{\displaystyle n}of actual values of the output variabley{\displaystyle y}:
If the algorithm has M{\displaystyle M} stages, at each stage m{\displaystyle m} (1≤m≤M{\displaystyle 1\leq m\leq M}), suppose some imperfect model Fm{\displaystyle F_{m}} (for low m{\displaystyle m}, this model may simply predict y^i{\displaystyle {\hat {y}}_{i}} to be y¯{\displaystyle {\bar {y}}}, the mean of y{\displaystyle y}). In order to improve Fm{\displaystyle F_{m}}, our algorithm should add some new estimator, hm(x){\displaystyle h_{m}(x)}. Thus,
{\displaystyle F_{m+1}(x_{i})=F_{m}(x_{i})+h_{m}(x_{i})=y_{i},}
or, equivalently,
{\displaystyle h_{m}(x_{i})=y_{i}-F_{m}(x_{i}).}
Therefore, gradient boosting will fit hm{\displaystyle h_{m}} to the residual yi−Fm(xi){\displaystyle y_{i}-F_{m}(x_{i})}. As in other boosting variants, each Fm+1{\displaystyle F_{m+1}} attempts to correct the errors of its predecessor Fm{\displaystyle F_{m}}. A generalization of this idea to loss functions other than squared error, and to classification and ranking problems, follows from the observation that residuals hm(xi){\displaystyle h_{m}(x_{i})} for a given model are proportional to the negative gradients of the mean squared error (MSE) loss function with respect to F(xi){\displaystyle F(x_{i})}.
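As a short check of that claim (a sketch, writing the MSE with the 1/n factor used above):
{\displaystyle L_{\rm {MSE}}={\frac {1}{n}}\sum _{i=1}^{n}\left(y_{i}-F(x_{i})\right)^{2},\qquad -{\frac {\partial L_{\rm {MSE}}}{\partial F(x_{i})}}={\frac {2}{n}}\left(y_{i}-F(x_{i})\right),}
so the negative gradient at each training point is the residual up to the constant factor 2/n.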
So, gradient boosting could be generalized to agradient descentalgorithm by plugging in a different loss and its gradient.
Manysupervised learningproblems involve an output variableyand a vector of input variablesx, related to each other with some probabilistic distribution. The goal is to find some functionF^(x){\displaystyle {\hat {F}}(x)}that best approximates the output variable from the values of input variables. This is formalized by introducing someloss functionL(y,F(x)){\displaystyle L(y,F(x))}and minimizing it in expectation:
The gradient boosting method assumes a real-valuedy. It seeks an approximationF^(x){\displaystyle {\hat {F}}(x)}in the form of a weighted sum ofMfunctionshm(x){\displaystyle h_{m}(x)}from some classH{\displaystyle {\mathcal {H}}}, called base (orweak) learners:
whereγm{\displaystyle \gamma _{m}}is the weight at stagem{\displaystyle m}. We are usually given a training set{(x1,y1),…,(xn,yn)}{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}of known values ofxand corresponding values ofy. In accordance with theempirical risk minimizationprinciple, the method tries to find an approximationF^(x){\displaystyle {\hat {F}}(x)}that minimizes the average value of the loss function on the training set, i.e., minimizes the empirical risk. It does so by starting with a model, consisting of a constant functionF0(x){\displaystyle F_{0}(x)}, and incrementally expands it in agreedyfashion:
form≥1{\displaystyle m\geq 1}, wherehm∈H{\displaystyle h_{m}\in {\mathcal {H}}}is a base learner function.
Unfortunately, choosing the best function hm{\displaystyle h_{m}} at each step for an arbitrary loss function L is a computationally infeasible optimization problem in general. Therefore, we restrict our approach to a simplified version of the problem. The idea is to apply a steepest descent step to this minimization problem (functional gradient descent). The basic idea is to find a local minimum of the loss function by iterating on Fm−1(x){\displaystyle F_{m-1}(x)}. In fact, the direction of steepest descent of the loss function is the negative gradient.[8] Hence, moving a small amount γ{\displaystyle \gamma } such that the linear approximation remains valid:
Fm(x)=Fm−1(x)−γ∑i=1n∇Fm−1L(yi,Fm−1(xi)){\displaystyle F_{m}(x)=F_{m-1}(x)-\gamma \sum _{i=1}^{n}{\nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))}}
whereγ>0{\displaystyle \gamma >0}. For smallγ{\displaystyle \gamma }, this implies thatL(yi,Fm(xi))≤L(yi,Fm−1(xi)){\displaystyle L(y_{i},F_{m}(x_{i}))\leq L(y_{i},F_{m-1}(x_{i}))}.
O=∑i=1nL(yi,Fm−1(xi)+hm(xi)){\displaystyle O=\sum _{i=1}^{n}{L(y_{i},F_{m-1}(x_{i})+h_{m}(x_{i}))}}
Doing a Taylor expansion around the fixed pointFm−1(xi){\displaystyle F_{m-1}(x_{i})}up to first orderO=∑i=1nL(yi,Fm−1(xi)+hm(xi))≈∑i=1nL(yi,Fm−1(xi))+hm(xi)∇Fm−1L(yi,Fm−1(xi))+…{\displaystyle O=\sum _{i=1}^{n}{L(y_{i},F_{m-1}(x_{i})+h_{m}(x_{i}))}\approx \sum _{i=1}^{n}{L(y_{i},F_{m-1}(x_{i}))+h_{m}(x_{i})\nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))}+\ldots }
Now differentiating w.r.t. hm(xi){\displaystyle h_{m}(x_{i})}, only the derivative of the second term remains: ∇Fm−1L(yi,Fm−1(xi)){\displaystyle \nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))}. This is the direction of steepest ascent, and hence we must move in the opposite (i.e., negative) direction in order to move in the direction of steepest descent.
Furthermore, we can optimizeγ{\displaystyle \gamma }by finding theγ{\displaystyle \gamma }value for which the loss function has a minimum:
γm=argminγ∑i=1nL(yi,Fm(xi))=argminγ∑i=1nL(yi,Fm−1(xi)−γ∇Fm−1L(yi,Fm−1(xi))).{\displaystyle \gamma _{m}={\underset {\gamma }{\arg \min }}{\sum _{i=1}^{n}{L(y_{i},F_{m}(x_{i}))}}={\underset {\gamma }{\arg \min }}{\sum _{i=1}^{n}{L\left(y_{i},F_{m-1}(x_{i})-\gamma \nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))\right)}}.}
If we considered the continuous case, i.e., where H{\displaystyle {\mathcal {H}}} is the set of arbitrary differentiable functions on R{\displaystyle \mathbb {R} }, we would update the model in accordance with the following equations
{\displaystyle F_{m}(x)=F_{m-1}(x)-\gamma _{m}\sum _{i=1}^{n}\nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i})),}
whereγm{\displaystyle \gamma _{m}}is the step length, defined asγm=argminγ∑i=1nL(yi,Fm−1(xi)−γ∇Fm−1L(yi,Fm−1(xi))).{\displaystyle \gamma _{m}={\underset {\gamma }{\arg \min }}{\sum _{i=1}^{n}{L\left(y_{i},F_{m-1}(x_{i})-\gamma \nabla _{F_{m-1}}L(y_{i},F_{m-1}(x_{i}))\right)}}.}In the discrete case however, i.e. when the setH{\displaystyle {\mathcal {H}}}is finite[clarification needed], we choose the candidate functionhclosest to the gradient ofLfor which the coefficientγmay then be calculated with the aid ofline searchon the above equations. Note that this approach is a heuristic and therefore doesn't yield an exact solution to the given problem, but rather an approximation.
In pseudocode, the generic gradient boosting method is:[4][1]
Input: training set{(xi,yi)}i=1n,{\displaystyle \{(x_{i},y_{i})\}_{i=1}^{n},}a differentiable loss functionL(y,F(x)),{\displaystyle L(y,F(x)),}number of iterationsM.
Algorithm:
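A sketch of the standard procedure, consistent with the equations above:
1. Initialize the model with a constant value: {\displaystyle F_{0}(x)={\underset {\gamma }{\arg \min }}\sum _{i=1}^{n}L(y_{i},\gamma ).}
2. For m = 1 to M:
(a) Compute the pseudo-residuals {\displaystyle r_{im}=-\left[{\frac {\partial L(y_{i},F(x_{i}))}{\partial F(x_{i})}}\right]_{F=F_{m-1}}} for i = 1, …, n.
(b) Fit a base learner hm(x){\displaystyle h_{m}(x)} to the pseudo-residuals, i.e., train it on the set {\displaystyle \{(x_{i},r_{im})\}_{i=1}^{n}}.
(c) Compute the multiplier γm{\displaystyle \gamma _{m}} by the one-dimensional line search {\displaystyle \gamma _{m}={\underset {\gamma }{\arg \min }}\sum _{i=1}^{n}L(y_{i},F_{m-1}(x_{i})+\gamma h_{m}(x_{i})).}
(d) Update the model: {\displaystyle F_{m}(x)=F_{m-1}(x)+\gamma _{m}h_{m}(x).}
3. Output FM(x){\displaystyle F_{M}(x)}.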
Gradient boosting is typically used with decision trees (especially CARTs) of a fixed size as base learners. For this special case, Friedman proposes a modification to the gradient boosting method which improves the quality of fit of each base learner.
Generic gradient boosting at the m-th step would fit a decision tree hm(x){\displaystyle h_{m}(x)} to pseudo-residuals. Let Jm{\displaystyle J_{m}} be the number of its leaves. The tree partitions the input space into Jm{\displaystyle J_{m}} disjoint regions R1m,…,RJmm{\displaystyle R_{1m},\ldots ,R_{J_{m}m}} and predicts a constant value in each region. Using the indicator notation, the output of hm(x){\displaystyle h_{m}(x)} for input x can be written as the sum
{\displaystyle h_{m}(x)=\sum _{j=1}^{J_{m}}b_{jm}\mathbf {1} _{R_{jm}}(x),}
where bjm{\displaystyle b_{jm}} is the value predicted in the region Rjm{\displaystyle R_{jm}}.[9]
Then the coefficients bjm{\displaystyle b_{jm}} are multiplied by some value γm{\displaystyle \gamma _{m}}, chosen using line search so as to minimize the loss function, and the model is updated as follows:
{\displaystyle F_{m}(x)=F_{m-1}(x)+\gamma _{m}h_{m}(x),\qquad \gamma _{m}={\underset {\gamma }{\arg \min }}\sum _{i=1}^{n}L(y_{i},F_{m-1}(x_{i})+\gamma h_{m}(x_{i})).}
Friedman proposes to modify this algorithm so that it chooses a separate optimal value γjm{\displaystyle \gamma _{jm}} for each of the tree's regions, instead of a single γm{\displaystyle \gamma _{m}} for the whole tree. He calls the modified algorithm "TreeBoost". The coefficients bjm{\displaystyle b_{jm}} from the tree-fitting procedure can then simply be discarded and the model update rule becomes:
{\displaystyle F_{m}(x)=F_{m-1}(x)+\sum _{j=1}^{J_{m}}\gamma _{jm}\mathbf {1} _{R_{jm}}(x),\qquad \gamma _{jm}={\underset {\gamma }{\arg \min }}\sum _{x_{i}\in R_{jm}}L(y_{i},F_{m-1}(x_{i})+\gamma ).}
When the lossL(⋅,⋅){\displaystyle L(\cdot ,\cdot )}is mean-squared error (MSE) the coefficientsγjm{\displaystyle \gamma _{jm}}coincide with the coefficients of the tree-fitting procedurebjm{\displaystyle b_{jm}}.
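A short justification of that statement (a sketch, using the convention L(y,F)=(y−F)²/2 so that the pseudo-residual is exactly y−F): fitting the regression tree to the pseudo-residuals by least squares makes each leaf value bjm{\displaystyle b_{jm}} the average residual over its region, while the per-region line search gives
{\displaystyle \gamma _{jm}={\underset {\gamma }{\arg \min }}\sum _{x_{i}\in R_{jm}}{\tfrac {1}{2}}\left(y_{i}-F_{m-1}(x_{i})-\gamma \right)^{2}={\frac {1}{|R_{jm}|}}\sum _{x_{i}\in R_{jm}}\left(y_{i}-F_{m-1}(x_{i})\right),}
the same average, so γjm=bjm{\displaystyle \gamma _{jm}=b_{jm}}.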
The numberJ{\displaystyle J}of terminal nodes in the trees is a parameter which controls the maximum allowed level ofinteractionbetween variables in the model. WithJ=2{\displaystyle J=2}(decision stumps), no interaction between variables is allowed. WithJ=3{\displaystyle J=3}the model may include effects of the interaction between up to two variables, and so on.J{\displaystyle J}can be adjusted for a data set at hand.
Hastie et al.[1]comment that typically4≤J≤8{\displaystyle 4\leq J\leq 8}work well for boosting and results are fairly insensitive to the choice ofJ{\displaystyle J}in this range,J=2{\displaystyle J=2}is insufficient for many applications, andJ>10{\displaystyle J>10}is unlikely to be required.
Fitting the training set too closely can lead to degradation of the model's generalization ability, that is, its performance on unseen examples. Several so-calledregularizationtechniques reduce thisoverfittingeffect by constraining the fitting procedure.
One natural regularization parameter is the number of gradient boosting iterations M (i.e. the number of base models). Increasing M reduces the error on the training set, but increases the risk of overfitting. An optimal value of M is often selected by monitoring prediction error on a separate validation data set.
Another regularization parameter for tree boosting is tree depth: the higher this value, the more likely the model will overfit the training data.
An important part of gradient boosting is regularization by shrinkage, which uses a modified update rule:
{\displaystyle F_{m}(x)=F_{m-1}(x)+\nu \cdot \gamma _{m}h_{m}(x),\qquad 0<\nu \leq 1,}
where the parameter ν{\displaystyle \nu } is called the "learning rate".
Empirically, it has been found that using small learning rates (such as ν<0.1{\displaystyle \nu <0.1}) yields dramatic improvements in models' generalization ability over gradient boosting without shrinking (ν=1{\displaystyle \nu =1}).[1] However, it comes at the price of increased computational time both during training and querying: a lower learning rate requires more iterations.
Soon after the introduction of gradient boosting, Friedman proposed a minor modification to the algorithm, motivated byBreiman'sbootstrap aggregation("bagging") method.[2]Specifically, he proposed that at each iteration of the algorithm, a base learner should be fit on a subsample of the training set drawn at random without replacement.[10]Friedman observed a substantial improvement in gradient boosting's accuracy with this modification.
Subsample size is some constant fraction f{\displaystyle f} of the size of the training set. When f=1{\displaystyle f=1}, the algorithm is deterministic and identical to the one described above. Smaller values of f{\displaystyle f} introduce randomness into the algorithm and help prevent overfitting, acting as a kind of regularization. The algorithm also becomes faster, because regression trees have to be fit to smaller datasets at each iteration. Friedman[2] found that 0.5≤f≤0.8{\displaystyle 0.5\leq f\leq 0.8} leads to good results for small and moderate sized training sets. Therefore, f{\displaystyle f} is typically set to 0.5, meaning that one half of the training set is used to build each base learner.
Also, like in bagging, subsampling allows one to define anout-of-bag errorof the prediction performance improvement by evaluating predictions on those observations which were not used in the building of the next base learner. Out-of-bag estimates help avoid the need for an independent validation dataset, but often underestimate actual performance improvement and the optimal number of iterations.[11][12]
Gradient tree boosting implementations often also use regularization by limiting the minimum number of observations in trees' terminal nodes. It is used in the tree building process by ignoring any splits that lead to nodes containing fewer than this number of training set instances.
Imposing this limit helps to reduce variance in predictions at leaves.
Another useful regularization technique for gradient boosted model is to penalize its complexity.[13]For gradient boosted trees, model complexity can be defined as the proportional[clarification needed]number of leaves in the trees. The joint optimization of loss and model complexity corresponds to a post-pruning algorithm to remove branches that fail to reduce the loss by a threshold.
Other kinds of regularization such as anℓ2{\displaystyle \ell _{2}}penalty on the leaf values can also be used to avoidoverfitting.
Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo[14] and Yandex[15] use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in high-energy physics for data analysis. At the Large Hadron Collider (LHC), variants of gradient boosting Deep Neural Networks (DNN) were successful in reproducing the results of non-machine-learning methods of analysis on datasets used to discover the Higgs boson.[16] Gradient-boosted decision trees have also been applied in earth and geological studies, for example for quality evaluation of sandstone reservoirs.[17]
The method goes by a variety of names. Friedman introduced his regression technique as a "Gradient Boosting Machine" (GBM).[4]Mason, Baxter et al. described the generalized abstract class of algorithms as "functional gradient boosting".[5][6]Friedman et al. describe an advancement of gradient boosted models as Multiple Additive Regression Trees (MART);[18]Elith et al. describe that approach as "Boosted Regression Trees" (BRT).[19]
A popular open-source implementation for R calls it a "Generalized Boosting Model";[11] however, packages expanding this work use BRT.[20] Yet another name is TreeNet, after an early commercial implementation from Salford Systems' Dan Steinberg, one of the researchers who pioneered the use of tree-based methods.[21]
Gradient boosting can be used for feature importance ranking, which is usually based on aggregating importance function of the base learners.[22]For example, if a gradient boosted trees algorithm is developed using entropy-baseddecision trees, the ensemble algorithm ranks the importance of features based on entropy as well with the caveat that it is averaged out over all base learners.[22][1]
While boosting can increase the accuracy of a base learner, such as a decision tree or linear regression, it sacrifices intelligibility and interpretability.[22][23] For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming an XGBoost model into a single "born-again" decision tree that approximates the same decision function.[24] Furthermore, implementation of a boosted ensemble may be more difficult due to the higher computational demand.
|
https://en.wikipedia.org/wiki/Gradient_boosting
|
Trench codes(a form ofcryptography) werecodesused for secrecy by field armies inWorld War I.[1][2]Messages by field telephone, radio and carrier pigeons could be intercepted, hence the need for tacticalWorld War I cryptography. Originally, the most commonly used codes were simple substitution codes, but due to the relative vulnerability of theclassical cipher, trench codes came into existence. (Important messages generally used alternative encryption techniques for greater security.) The use of these codes required the distribution ofcodebooksto military personnel, which proved to be a security liability since these books could be stolen by enemy forces.[3]
By the middle ofWorld War Ithe conflict had settled down into a static battle of attrition, with the two sides sitting in huge lines of fixed earthwork fortifications. With armies generally immobile, distributing codebooks and protecting them was easier than it would have been for armies on the move. However, armies were still in danger of trench-raiding parties who would sneak into enemy lines and try to snatch codebooks. When this happened, an alarm could be raised and a code quickly changed. Trench codes were changed on a regular basis in an attempt to prevent code breakers from deciphering messages.[1]
TheFrench Armybegan to develop trench codes in early 1916. They started astelephonecodes, implemented at the request of ageneralwhose forces had suffered devastatingartillery barragesdue to indiscretions in telephone conversations between his men. The original telephone code featured a small set of two-letter codewords that were spelled out in voice communications. This grew into a three-letter code scheme, soon adopted for wireless, with early one-part code implementations evolving into more secure two-part code implementations. TheBritishbegan to adopt trench codes as well.
TheImperial German Armystarted using trench codes in the spring of 1917, evolving into a book of 4,000 codewords that were changed twice a month, with different codebooks used on different sectors of the front. French codebreakers were extremely competent at crackingciphersbut were somewhat inexperienced at cracking codes, which require a slightly different mindset. It took them time to get to the point where they were able to crack the German codes in a timely fashion.
The Americans were relative newcomers to cryptography when they entered the war, but they did have their star players. One wasParker Hitt, b. 1878, who before the war had been anArmy Signal Corpsinstructor. He was one of the first to try to bringUS Armycryptology into the 20th century, publishing an influential short work on the subject in 1915 called theManual for the solution of military ciphers.[4]He was assigned to France in an administrative role, but his advice was eagerly sought by colleagues working in operational cryptology. Another Signal Corps officer who would make his mark on cryptology wasJoseph Mauborgne, who in 1914, as afirst lieutenant, had been the first to publish a solution to thePlayfair cipher.
When the Americans began moving up to the front in numbers in early 1918, they adopted trench codes[1]: p. 222and became very competent at their construction, with aCaptainHoward R. Barneseventually learning to produce them at a rate that surprised British colleagues. The Americans adopted a series of codes named after rivers, beginning with "Potomac". They learned to print the codebooks on paper that burned easily and degraded quickly after a few weeks, when the codes would presumably be obsolete, while using a typeface that was easy to read under trench conditions.
American code makers were often frustrated by the inability or refusal of combat units to use the codes—or worse, to use them properly. Lt. Col. Frank Moorman, reviewing U.S. wireless intelligence in 1920, wrote:
That will be the real problem for the future, to make the men at the front realize the importance of handling codes carefully and observing "foolish" little details that the code man insists on. They cannot see the need of it and they do not want to do it. They will do anything they can to get out of it. My idea would be to hang a few of the offenders. This would not only get rid of some but would discourage the development of others. It would be a saving of lives to do it. It is a sacrifice of American lives to unnecessarily assist the enemy in the solution of our code.[1]: p.269
Below are pages from a U.S. Army World War I trench code, an edition designated as "Seneca:"[1]: pp.185–188
|
https://en.wikipedia.org/wiki/Trench_code
|
Japanoperates a number of centers forsupercomputingwhich hold world records in speed, with theK computerbeing the world's fastest from June 2011 to June 2012,[1][2][3]andFugakuholding the lead from June 2020 until June 2022.
According to Professor Jack Dongarra, who maintains the TOP500 list of supercomputers, the K computer's performance was impressive, surpassing that of its next five competitors combined.[1] The K computer cost US$10 million a year to operate.[1]
Japan's entry into supercomputing began in the early 1980s. In 1982,Osaka University's LINKS-1 Computer Graphics System used amassively parallelprocessing architecture, with 514microprocessors, including 257Zilog Z8001control processorsand 257iAPX86/20(the pairing of an 8086 with an 8087 FPU)floating-point processors. It was mainly used for rendering realistic3Dcomputer graphics.[4]It was claimed by the designers to be the world's most powerful computer, as of 1984.[5]
TheSX-3 supercomputerfamily was developed byNEC Corporationand announced in April 1989.[6]The SX-3/44R became the fastest supercomputer in the world in 1990. Fujitsu'sNumerical Wind Tunnelsupercomputer gained the top spot in 1993. Except for the Sandia National Laboratories' win in June 1994, Japanese supercomputers continued to top theTOP500lists up until 1997.[7]
The K computer's capture of the top spot came seven years after a Japanese system last held the title, in 2004.[1][2] The Earth Simulator supercomputer, built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), was the fastest in the world at that time. It used 5,120 NEC SX-6i processors, generating a performance of 28,293,540 MIPS (million instructions per second).[8] It also had a peak performance of 131 TFLOPS (131 trillion floating-point operations per second), using proprietary vector processing chips.
The K computer used over 60,000 commercial scalar SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer was over 60 times faster than the Earth Simulator, and that the Earth Simulator ranked as the 68th system in the world 7 years after holding the top spot, demonstrates both the rapid increase in top performance in Japan and the widespread growth of supercomputing technology worldwide.
The GSIC Center at theTokyo Institute of Technologyhouses theTsubame2.0 supercomputer, which has a peak of 2,288TFLOPSand in June 2011 ranked 5th in the world.[9]It was developed at the Tokyo Institute of Technology in collaboration withNECandHP, and has 1,400 nodes using both HP Proliant and NVIDIA Tesla processors.[10]
TheRIKEN MDGRAPE-3for molecular dynamics simulations of proteins is a special purpose petascale supercomputer at the Advanced Center for Computing and Communication,RIKENinWakō, Saitama, just outside Tokyo. It uses over 4,800 custom MDGRAPE-3 chips, as well asIntel Xeonprocessors.[11]However, given that it is a special purpose computer, it can not appear on theTOP500list which requiresLinpackbenchmarking.
The next significant system isJapan Atomic Energy Agency's PRIMERGY BX900Fujitsusupercomputer. It is significantly slower, reaching 200 TFLOPS and ranking as the 38th in the world in 2011.[12][13]
Historically, theGravity Pipe(GRAPE) system forastrophysicsat theUniversity of Tokyowas distinguished not by its top speed of 64 Tflops, but by its cost and energy efficiency, having won theGordon Bell Prizein 1999, at about $7 per megaflops, using special purpose processing elements.[14]
DEGIMAis a highly cost and energy-efficient computer cluster at the Nagasaki Advanced Computing Center,Nagasaki University. It is used for hierarchicalN-body simulationsand has a peak performance of 111 TFLOPS with an energy efficiency of 1376 MFLOPS/watt. The overall cost of the hardware was approximately US$500,000.[15][16]
The Computational Simulation Centre, International Fusion Energy Research Centre of theITERBroader Approach[17]/Japan Atomic Energy Agencyoperates a 1.52 PFLOPS supercomputer (currently operating at 442 TFLOPS) inRokkasho, Aomori. The system, called Helios (Roku-chan in Japanese), consists of 4,410Groupe Bullbullx B510 compute blades, and is used forfusionsimulation projects.
The University of Tokyo's Information Technology Center inKashiwa, Chiba, began operating Oakleaf-FX in April 2012. This supercomputer is a FujitsuPRIMEHPC FX10(a commercial version of theK computer) configured with 4,800 compute nodes for a peak performance of 1.13 PFLOPS. Each of the compute nodes is aSPARC64 IXfxprocessor connected to other nodes via a six-dimensional mesh/torus interconnect.[18]
In June 2012, the Numerical Prediction Division, Forecast Department of theJapan Meteorological Agencydeployed an 847 TFLOPSHitachiSR16000/M1 supercomputer, which is based on theIBMPower 775, at the Office of Computer Systems Operations and the Meteorological Satellite Center inKiyose, Tokyo.[19]The system consists of two SR16000/M1s, each a cluster of 432-logical nodes. Each node consists of four 3.83 GHz IBMPOWER7processors and 128 GB of memory. The system is used to run a high-resolution (2 km horizontally and 60 layers vertically, up to 9-hour forecast) local weather forecast model every hour.
Starting in 2003, Japan usedgrid computingin the National Research Grid Initiative (NAREGI) project to develop high-performance, scalable grids over very high-speed networks as a future computational infrastructure for scientific and engineering research.[20]
|
https://en.wikipedia.org/wiki/Supercomputing_in_Japan
|
Inproject management,scopeis the defined features and functions of a product, or the scope of work needed to finish a project.[1]Scope involves getting information required to start a project, including the features the product needs to meet its stakeholders' requirements.[2][3]: 116
Project scope is oriented towards the work required and methods needed, while product scope is more oriented towardfunctional requirements. If requirements are not completely defined and described and if there is no effectivechange controlin a project,scope or requirement creepmay ensue.[4][5]: 434[3]: 13
Scope management is the process of defining[3]: 481–483 and managing the scope of a project to ensure that it stays on track, stays within budget, and meets the expectations of stakeholders.
|
https://en.wikipedia.org/wiki/Scope_(project_management)
|
Science and technology studies(STS) orscience, technology, and societyis aninterdisciplinaryfield that examines the creation, development, and consequences ofscienceandtechnologyin their historical, cultural, and social contexts.[1]
Like mostinterdisciplinaryfields of study, STS emerged from the confluence of a variety of disciplines and disciplinary subfields, all of which had developed an interest—typically, during the 1960s or 1970s—in viewing science and technology as socially embedded enterprises.[2]The key disciplinary components of STS took shape independently, beginning in the 1960s, and developed in isolation from each other well into the 1980s, althoughLudwik Fleck's (1935) monographGenesis and Development of a Scientific Factanticipated many of STS's key themes. In the 1970sElting E. Morisonfounded the STS program at theMassachusetts Institute of Technology(MIT), which served as a model. By 2011, 111 STS research centers and academic programs were counted worldwide.[3]
"The mid-70s was a sort of formation period, and the early 1990s as a peak of consolidation, and then the 2000s as a period of global diffusion" (Sheila Jasanoff)[4].
During the 1970s and 1980s, universities in the US, UK, and Europe began drawing these various components together in new, interdisciplinary programs. For example, in the 1970s,Cornell Universitydeveloped a new program that unitedscience studiesand policy-oriented scholars with historians and philosophers of science and technology. Each of these programs developed unique identities due to variations in the components that were drawn together, as well as their location within the various universities. For example, the University of Virginia's STS program united scholars drawn from a variety of fields (with particular strength in the history of technology); however, the program's teaching responsibilities—it is located within an engineering school and teaches ethics to undergraduate engineering students—means that all of its faculty share a strong interest inengineering ethics.[6]
A decisive moment in the development of STS was the mid-1980s addition of technology studies to the range of interests reflected in science. During that decade, two works appeareden seriatimthat signaled whatSteve Woolgarwas to call the "turn to technology".[7]In a seminal 1984 article,Trevor PinchandWiebe Bijkershowed how the sociology of technology could proceed along the theoretical and methodological lines established by the sociology of scientific knowledge.[8]This was the intellectual foundation of the field they called the social construction of technology. Donald MacKenzie andJudy Wajcmanprimed the pump by publishing a collection of articles attesting to the influence of society on technological design (Social Shaping of Technology, 1985).[9]Social science research continued to interrogate STS research from this point onward as researchers moved from post-modern to post-structural frameworks of thought, Bijker and Pinch contributing to SCOT knowledge and Wajcman providing boundary work through a feminist lens.[10]
The "turn to technology" helped to cement an already growing awareness of underlying unity among the various emerging STS programs. More recently, there has been an associated turn to ecology, nature, and materiality in general, whereby the socio-technical and natural/material co-produce each other. This is especially evident in work in STS analyses of biomedicine (such asCarl MayandAnnemarie Mol) and ecological interventions (such asBruno Latour,Sheila Jasanoff,Matthias Gross,Sara B. Pritchard, andS. Lochlann Jain).
Social constructions are human-created ideas, objects, or events created by a series of choices and interactions.[11]These interactions have consequences that change the perception that different groups of people have on these constructs. Some examples of social construction include class, race, money, and citizenship.
The following also alludes to the notion that not everything is set, a circumstance or result could potentially be one way or the other. According to the article "What is Social Construction?" by Ian Hacking, "Social construction work is critical of the status quo. Social constructionists about X tend to hold that:
Very often they go further, and urge that:
In the past, there have been viewpoints that were widely regarded as fact until being called to question due to the introduction of new knowledge. Such viewpoints include the past concept of a correlation between intelligence and the nature of a human's ethnicity or race (X may not be at all as it is).[12]
An example of the evolution and interaction of various social constructions within science and technology can be found in the development of both the high-wheel bicycle, or velocipede, and then of the bicycle. In the latter half of the 19th century, a social need was first recognized for a more efficient and rapid means of transportation. Consequently, the velocipede was developed: by replacing the front wheel with one of larger radius, it was able to reach higher translational velocities than the smaller non-geared bicycles of the day, and it was widely used in the latter half of the century. One notable trade-off was decreased stability leading to a greater risk of falling, which resulted in many riders getting into accidents by losing balance while riding or being thrown over the handlebars.
The first "social construction" or progress of the velocipede caused the need for a newer "social construction" to be recognized and developed into a safer bicycle design. Consequently, the velocipede was then developed into what is now commonly known as the "bicycle" to fit within society's newer "social construction," the newer standards of higher vehicle safety. Thus the popularity of the modern geared bicycle design came as a response to the first social construction, the original need for greater speed, which had caused the high-wheel bicycle to be designed in the first place. The popularity of the modern geared bicycle design ultimately ended the widespread use of the velocipede itself, as eventually it was found to best accomplish the social needs/social constructions of both greater speed and of greater safety.[13]
With methodology from ANT, feminist STS theorists built upon SCOT's theory of co-construction to explore the relationship between gender and technology, proposing one cannot exist separately from the other.[10]This approach suggests the material and social are not separate, reality being produced through interactions and studied through representations of those realities.[10]Building onSteve Woolgar's boundary work on user configuration,[14]feminist critiques shifted the focus away from users of technology and science towards whether technology and science represent a fixed, unified reality.[15]According to this approach, identity could no longer be treated as causal in human interactions with technology as it cannot exist prior to that interaction, feminist STS researchers proposing a "double-constructivist" approach to account for this contradiction.[16]John Lawcredits feminist STS scholars for contributing material-semiotic approaches to the broader discipline of STS, stating that research not only attempts to describe reality, but enacts it through the research process.[10]
Sociotechnical imaginaries are what certain communities, societies, and nations envision as achievable through the combination of scientific innovation and social changes. These visions can be based on what is possible to achieve for a certain society, and can also show what a certain state or nation desires.[17]STIs are often bound with ideologies and ambitions of those who create and circulate them. Sociotechnical imaginaries can be created by states and policymakers, smaller groups within society, or can be a result of the interaction of both.[17]
The term was coined in 2009 bySheila Jasanoffand Sang-Hyun Kim who compared and contrasted sociotechnical imaginaries of nuclear energy in theUSAwith those ofSouth Koreaover the second half of the 20th century.[17]Jasanoff and Kim analyzed the discourse of government representatives, national policies, and civil society organizations, looked at the technological and infrastructural developments, and social protests, and conducted interviews with experts. They concluded that in South Korea nuclear energy was imagined mostly as the means of national development, while in the US the dominant sociotechnical imaginary framed nuclear energy as risky and in need of containment.[17]
The concept has been applied to several objects of study including biomedical research,[18][19]nanotechnology development[20]and energy systems and climate change.[21][22][23][24][25][17]Within energy systems, research has focused on nuclear energy,[17]fossil fuels,[22][25]renewables[21]as well as broader topics of energy transitions,[23]and the development of new technologies to address climate change.[24]
Sociotechnical systems are an interplay between technologies and humans, as expressed in sociotechnical systems theory. In this interplay, humans fulfill and define tasks; humans in companies use IT and IT supports people; and IT processes tasks while new IT generates new tasks, redefining work practices. This is what is called a sociotechnical system.[26] In sociotechnical systems there are two principles to internalize: joint optimization and complementarity. Joint optimization puts an emphasis on developing both systems in parallel; it is only in the interaction of both systems that the success of an organization arises.[26] The principle of complementarity means that both systems have to be optimized.[26] Focusing on one system while neglecting the other will likely lead to the failure of the organization or jeopardize the success of a system. Although sociotechnical systems theory is focused on organizations, its principles also apply to society today and to science and technology studies.
According to Barley and Bailey, there is a tendency for AI designers and scholars of design studies to privilege the technical over the social, focusing more on taking "humans out of the loop" paradigm than the "augmented intelligence" paradigm.[27]
Recent work onartificial intelligenceconsiders large sociotechnical systems, such associal networksandonline marketplaces, as agents whose behavior can be purposeful and adaptive. The behavior ofrecommender systemscan therefore be analyzed in the language and framework of sociotechnical systems, leading also to a new perspective for their legal regulation.[28][29]
Technoscience is a subset of Science, Technology, and Society studies that focuses on the inseparable connection between science and technology. It states that fields are linked and grow together, and scientific knowledge requires an infrastructure of technology in order to remain stationary or move forward. Both technological development and scientific discovery drive one another towards more advancement. Technoscience excels at shaping human thoughts and behavior by opening up new possibilities that gradually or quickly come to be perceived as necessities.[30]
"Technological action is a social process."[31]Social factors and technology are intertwined so that they are dependent upon each other. This includes the aspect that social, political, and economic factors are inherent in technology and that social structure influences what technologies are pursued. In other words, "technoscientific phenomena combined inextricably with social/political/economic/psychological phenomena, so 'technology' includes a spectrum of artifacts, techniques, organizations, and systems."[32]Winner expands on this idea by saying "in the late twentieth-century technology and society, technology and culture, technology and politics are by no means separate."[33]
Deliberative democracyis a reform ofrepresentativeordirectdemocracies which mandates discussion and debate of popular topics which affect society. Deliberative democracy is a tool for making decisions. Deliberative democracy can be traced back all the way toAristotle's writings. More recently, the term was coined by Joseph Bessette in his 1980 workDeliberative Democracy: The Majority Principle in Republican Government, where he uses the idea in opposition to the elitist interpretations of theUnited States Constitutionwith emphasis on public discussion.[35]
Deliberative democracy can lead to more legitimate, credible, and trustworthy outcomes. Deliberative democracy allows for "a wider range of public knowledge", and it has been argued that this can lead to "more socially intelligent and robust" science. One major shortcoming of deliberative democracy is that many models insufficiently ensure critical interaction.[36]
According to Ryfe, there are five mechanisms that stand out as critical to the successful design of deliberative democracy:
Recently,[when?]there has been a movement towards greater transparency in the fields of policy and technology. Jasanoff comes to the conclusion that there is no longer a question of if there needs to be increased public participation in making decisions about science and technology, but now there need to be ways to make a more meaningful conversation between the public and those developing the technology.[38]
Bruce AckermanandJames S. Fishkinoffered an example of a reform in their paper "Deliberation Day." The deliberation is to enhance public understanding of popular, complex and controversial issues through devices such as Fishkin'sdeliberative polling,[39]though implementation of these reforms is unlikely in a large government such as that of the United States. However, things similar to this have been implemented in small, local governments likeNew Englandtowns and villages. New England town hall meetings are a good example ofdeliberative democracyin a realistic setting.[35]
An ideal deliberative democracy balances the voice and influence of all participants. While the main aim is to reach consensus, deliberative democracy should encourage the voices of those with opposing viewpoints, concerns due to uncertainties, and questions about assumptions made by other participants. It should take its time and ensure that those participating understand the topics on which they debate. Independent managers of debates should also have a substantial grasp of the concepts discussed, but must "[remain] independent and impartial as to the outcomes of the process."[36]
In 1968,Garrett Hardinpopularised the phrase "tragedy of the commons." It is an economic theory where rational people act against the best interest of the group by consuming a common resource. Since then, the tragedy of the commons has been used to symbolize the degradation of the environment whenever many individuals use a common resource. Although Garrett Hardin was not an STS scholar, the concept of the tragedy of the commons still applies to science, technology, and society.[40]
In a contemporary setting, the Internet acts as an example of the tragedy of the commons through the exploitation of digital resources and private information. Data and internet passwords can be stolen much more easily than physical documents. Virtual spying is almost free compared to the costs of physical spying.[41]Additionally,net neutralitycan be seen as an example of tragedy of the commons in an STS context. The movement for net neutrality argues that the Internet should not be a resource that is dominated by one particular group, specifically those with more money to spend on Internet access.
A counterexample to the tragedy of the commons is offered by Andrew Kahrl. Privatization can be a way to deal with the tragedy of the commons. However, Kahrl suggests that the privatization of beaches onLong Island, in an attempt to combat the overuse of Long Island beaches, made the residents of Long Island more susceptible to flood damage fromHurricane Sandy. The privatization of these beaches took away from the protection offered by the natural landscape. Tidal lands that offer natural protection were drained and developed. This attempt to combat the tragedy of the commons by privatization was counter-productive. Privatization actually destroyed the public good of natural protection from the landscape.[42]
Alternativemodernity[43][44]is a conceptual tool conventionally used to represent the state of present western society. Modernity represents the political and social structures of society, the sum of interpersonal discourse, and ultimately a snapshot of society's direction at a point in time. Unfortunately, conventional modernity is incapable of modeling alternative directions for further growth within our society. Also, this concept is ineffective at analyzing similar but unique modern societies such as those found in the diverse cultures of the developing world. Problems can be summarized into two elements: inward failure to analyze the growth potentials of a given society, and outward failure to model different cultures and social structures and predict their growth potentials.
Previously, modernity carried a connotation of the current state of being modern, and its evolution through European colonialism. The process of becoming "modern" is believed to occur in a linear, pre-determined way, and is seen by Philip Brey as a way to interpret and evaluate social and cultural formations. This thought ties in withmodernization theory, the thought that societies progress from "pre-modern" to "modern" societies.
Within the field of science and technology, there are two main lenses with which to view modernity. The first is as a way for society to quantify what it wants to move towards. In effect, we can discuss the notion of "alternative modernity" (as described by Andrew Feenberg) and which of these we would like to move towards. Alternatively, modernity can be used to analyze the differences in interactions between cultures and individuals. From this perspective, alternative modernities exist simultaneously, based on differing cultural and societal expectations of how a society (or an individual within society) should function. Because of different types of interactions across different cultures, each culture will have a different modernity.
The pace of innovation is the speed at which technological innovation or advancement is occurring, with the most apparent instances being too slow or too rapid. Both these rates of innovation are extreme and therefore have effects on the people that get to use this technology.
"No innovation without representation" is a democratic ideal of ensuring that everyone involved gets a chance to be represented fairly in technological developments.
Legacy thinking is defined as an inherited method of thinking imposed from an external source without objection by the individual because it is already widely accepted by society.
Legacy thinking can impair the ability to drive technology for the betterment of society by blinding people to innovations that do not fit into their accepted model of how society works. By accepting ideas without questioning them, people often see all solutions that contradict these accepted ideas as impossible or impractical. Legacy thinking tends to advantage the wealthy, who have the means to project their ideas on the public. It may be used by the wealthy as a vehicle to drive technology in their favor rather than for the greater good.
Examining the role of citizen participation and representation in politics provides an excellent example of legacy thinking in society. The belief that one can spend money freely to gain influence has been popularized, leading to public acceptance of corporate lobbying. As a result, a self-established role in politics has been cemented in which the public does not exercise the power ensured to them by the Constitution to the fullest extent. This can become a barrier to political progress, as corporations that have the capital to spend have the potential to wield great influence over policy.[48] Legacy thinking, however, keeps the population from acting to change this, despite polls from Harris Interactive reporting that over 80% of Americans feel that big business holds too much power in government.[49] Therefore, Americans are beginning to steer away from this line of thought, rejecting legacy thinking and demanding less corporate, and more public, participation in political decision-making.
Additionally, an examination of net neutrality functions as a separate example of legacy thinking. Starting with dial-up, the internet has always been viewed as a private luxury good.[50][51] Yet the internet today is a vital part of modern life, and members of society use it every day.[52] Corporations are able to mislabel and greatly overcharge for their internet resources, and since the American public is so dependent upon the internet, there is little it can do. Legacy thinking has kept this pattern on track despite growing movements arguing that the internet should be considered a utility. Legacy thinking prevents progress because it was widely accepted by others before us, through advertising, that the internet is a luxury and not a utility. Due to pressure from grassroots movements, the Federal Communications Commission (FCC) has redefined the requirements for broadband and the internet in general as a utility.[52] Now AT&T and other major internet providers are lobbying against this action and are largely able to delay the onset of this movement due to legacy thinking's grip on American[specify] culture and politics.
For example, those who cannot overcome the barrier of legacy thinking may not consider the privatization of clean drinking water an issue.[53] This is partially because access to water has become such a given fact of the matter to them. For a person living in such circumstances, it may be widely accepted not to be concerned with drinking water, because they have not needed to be concerned with it in the past. Additionally, a person living in an area that does not need to worry about its water supply or the sanitation of that supply is less likely to be concerned with the privatization of water.
This notion can be examined through the thought experiment of "veil of ignorance".[54]Legacy thinking causes people to be particularly ignorant about the implications behind the "you get what you pay for" mentality applied to a life necessity. By utilizing the "veil of ignorance", one can overcome the barrier of legacy thinking as it requires a person to imagine that they are unaware of their own circumstances, allowing them to free themselves from externally imposed thoughts or widely accepted ideas.
STS is taught in several countries. According to the STS wiki, STS programs can be found in twenty countries, including 45 programs in the United States, three programs in India, and eleven programs in the UK.[60]STS programs can be found inCanada,[61]Germany,[62][63]Israel,[64]Malaysia,[65]andTaiwan.[66]Some examples of institutions offering STS programs areStanford University,[67]University College London,[68]Harvard University,[69]theUniversity of Oxford,[70]Mines ParisTech,[71]Bar-Ilan University,[72]andYork University.[61]In Europe the European Inter-University Association on Society, Science and Technology (ESST) offers an MA degree in STS through study programs and student exchanges with over a dozen specializations.
The field has professional associations in regions and countries around the world.
Notable peer-reviewed journals in STS include:
Student journals in STS include:
|
https://en.wikipedia.org/wiki/Science_and_technology_studies
|
In computing (specifically data transmission and data storage), a block,[1] sometimes called a physical record, is a sequence of bytes or bits, usually containing some whole number of records and having a fixed length, known as the block size.[2] Data thus structured are said to be blocked. The process of putting data into blocks is called blocking, while deblocking is the process of extracting data from blocks. Blocked data is normally stored in a data buffer, and read or written a whole block at a time. Blocking reduces the overhead and speeds up the handling of the data stream.[3] For some devices, such as magnetic tape and CKD disk devices, blocking reduces the amount of external storage required for the data. Blocking is almost universally employed when storing data to 9-track magnetic tape, NAND flash memory, and rotating media such as floppy disks, hard disks, and optical discs.
Mostfile systemsare based on ablock device, which is a level ofabstractionfor the hardware responsible for storing and retrieving specified blocks of data, though the block size in file systems may be a multiple of the physical block size. This leads to space inefficiency due tointernal fragmentation, since file lengths are often not integer multiples of block size, and thus the last block of a file may remain partially empty. This will createslack space. Some newer file systems, such asBtrfsandFreeBSDUFS2, attempt to solve this through techniques calledblock suballocation and tail merging. Other file systems such asZFSsupport variable block sizes.[4][5]
Block storage is normally abstracted by a file system ordatabase management system(DBMS) for use by applications and end users. The physical or logical volumes accessed viablock I/Omay be devices internal to a server, directly attached viaSCSIorFibre Channel, or distant devices accessed via astorage area network(SAN) using a protocol such asiSCSI, orAoE. DBMSes often use their own block I/O for improved performance and recoverability as compared to layering the DBMS on top of a file system.
On Linux the default block size for most file systems is 4096 bytes. The stat command, part of GNU Core Utilities, can be used to check the block size.
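For example, the fundamental block size of the file system containing the root directory can be queried as follows (the flags shown are those of GNU stat; on many systems this prints 4096):

    stat --file-system --format=%S /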
InRusta block can be read with theread_exactmethod.[6]
InPythona block can be read with thereadmethod.
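A minimal sketch of blocked reading in Python (the file name and block size are illustrative):

    # Read a file one fixed-size block at a time.
    total = 0
    with open("data.bin", "rb") as f:
        while True:
            block = f.read(4096)   # returns at most 4096 bytes; b"" at end of file
            if not block:
                break
            total += len(block)
    print(total, "bytes read")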
InC#a block can be read with theFileStreamclass.[7]
|
https://en.wikipedia.org/wiki/Block_storage
|
In mathematics, specifically inspectral theory, aneigenvalueof aclosed linear operatoris callednormalif the space admits a decomposition into a direct sum of a finite-dimensionalgeneralized eigenspaceand aninvariant subspacewhereA−λI{\displaystyle A-\lambda I}has a bounded inverse.
The set of normal eigenvalues coincides with thediscrete spectrum.
LetB{\displaystyle {\mathfrak {B}}}be aBanach space. Theroot linealLλ(A){\displaystyle {\mathfrak {L}}_{\lambda }(A)}of a linear operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}with domainD(A){\displaystyle {\mathfrak {D}}(A)}corresponding to the eigenvalueλ∈σp(A){\displaystyle \lambda \in \sigma _{p}(A)}is defined as
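    \mathfrak{L}_\lambda(A) = \bigcup_{k \in \mathbb{N}} \{\, x \in \mathfrak{D}(A^k) : (A - \lambda I_{\mathfrak{B}})^k x = 0 \,\},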
whereIB{\displaystyle I_{\mathfrak {B}}}is the identity operator inB{\displaystyle {\mathfrak {B}}}.
This set is alinear manifoldbut not necessarily avector space, since it is not necessarily closed inB{\displaystyle {\mathfrak {B}}}. If this set is closed (for example, when it is finite-dimensional), it is called thegeneralized eigenspaceofA{\displaystyle A}corresponding to the eigenvalueλ{\displaystyle \lambda }.
Aneigenvalueλ∈σp(A){\displaystyle \lambda \in \sigma _{p}(A)}of aclosed linear operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}in theBanach spaceB{\displaystyle {\mathfrak {B}}}withdomainD(A)⊂B{\displaystyle {\mathfrak {D}}(A)\subset {\mathfrak {B}}}is callednormal(in the original terminology,λ{\displaystyle \lambda }corresponds to a normally splitting finite-dimensional root subspace), if the following two conditions are satisfied:
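1. The algebraic multiplicity of λ is finite: dim L_λ(A) < ∞, where L_λ(A) is the root lineal of A corresponding to the eigenvalue λ;
2. The space B decomposes into a direct sum B = L_λ(A) ⊕ N_λ, where N_λ is an invariant subspace of A in which the operator A − λI_B has a bounded inverse.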
That is, the restrictionA2{\displaystyle A_{2}}ofA{\displaystyle A}ontoNλ{\displaystyle {\mathfrak {N}}_{\lambda }}is an operator with domainD(A2)=Nλ∩D(A){\displaystyle {\mathfrak {D}}(A_{2})={\mathfrak {N}}_{\lambda }\cap {\mathfrak {D}}(A)}and with the rangeR(A2−λI)⊂Nλ{\displaystyle {\mathfrak {R}}(A_{2}-\lambda I)\subset {\mathfrak {N}}_{\lambda }}which has a bounded inverse.[1][2][3]
LetA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}be a closed lineardensely defined operatorin the Banach spaceB{\displaystyle {\mathfrak {B}}}. The following statements are equivalent[4](Theorem III.88):
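Among them: (i) λ is a normal eigenvalue of A; (ii) λ is an isolated point of the spectrum σ(A) and the Riesz projector P_λ corresponding to λ has finite rank.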
Ifλ{\displaystyle \lambda }is a normal eigenvalue, then the root linealLλ(A){\displaystyle {\mathfrak {L}}_{\lambda }(A)}coincides with the range of the Riesz projector,R(Pλ){\displaystyle {\mathfrak {R}}(P_{\lambda })}.[3]
The above equivalence shows that the set of normal eigenvalues coincides with thediscrete spectrum, defined as the set of isolated points of the spectrum with finite rank of the corresponding Riesz projector.[5]
The spectrum of a closed operatorA:B→B{\displaystyle A:\,{\mathfrak {B}}\to {\mathfrak {B}}}in the Banach spaceB{\displaystyle {\mathfrak {B}}}can be decomposed into the union of two disjoint sets, the set of normal eigenvalues and the fifth type of theessential spectrum:
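    \sigma(A) = \{\, \text{normal eigenvalues of } A \,\} \sqcup \sigma_{\mathrm{ess},5}(A).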
|
https://en.wikipedia.org/wiki/Normal_eigenvalue
|
Incomputer programming, specifically when using theimperative programmingparadigm, anassertionis apredicate(aBoolean-valued functionover thestate space, usually expressed as alogical propositionusing thevariablesof a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help acompilercompile it, or help the program detect its own defects.
For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberatelycrashesor throws an assertion failureexception.
The following code contains two assertions,x > 0andx > 1, and they are indeed true at the indicated points during execution:
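A minimal sketch (pseudocode; the initial value of x is illustrative):

    x = 1;
    assert(x > 0);
    x++;
    assert(x > 1);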
Programmers can use assertions to help specify programs and to reason about program correctness. For example, aprecondition—an assertion placed at the beginning of a section of code—determines the set of states under which the programmer expects the code to execute. Apostcondition—placed at the end—describes the expected state at the end of execution. For example:x > 0 { x++ } x > 1.
The example above uses the notation for including assertions used byC. A. R. Hoarein his 1969 article.[1]That notation cannot be used in existing mainstream programming languages. However, programmers can include unchecked assertions using thecomment featureof their programming language. For example, inC++:
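A sketch of what this might look like (the variable and values are illustrative):

    int x = 5;
    // {x > 0}
    x = x + 1;
    // {x > 1}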
The braces included in the comment help distinguish this use of a comment from other uses.
Librariesmay provide assertion features as well. For example, in C usingglibcwith C99 support:
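A minimal sketch using the standard assert macro (the values are illustrative):

    #include <assert.h>

    int main(void)
    {
        int x = 5;
        assert(x > 0);   /* checked at run time unless NDEBUG is defined */
        x++;
        assert(x > 1);
        return 0;
    }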
Several modern programming languages include checked assertions –statementsthat are checked atruntimeor sometimes statically. If an assertion evaluates to false at runtime, an assertion failure results, which typically causes execution to abort. This draws attention to the location at which the logical inconsistency is detected and can be preferable to the behaviour that would otherwise result.
The use of assertions helps the programmer design, develop, and reason about a program.
In languages such asEiffel, assertions form part of the design process; other languages, such asCandJava, use them only to check assumptions atruntime. In both cases, they can be checked for validity at runtime but can usually also be suppressed.
Assertions can function as a form of documentation: they can describe the state the code expects to find before it runs (itspreconditions), and the state the code expects to result in when it is finished running (postconditions); they can also specifyinvariantsof aclass.Eiffelintegrates such assertions into the language and automatically extracts them to document the class. This forms an important part of the method ofdesign by contract.
This approach is also useful in languages that do not explicitly support it: the advantage of using assertion statements rather than assertions incommentsis that the program can check the assertions every time it runs; if the assertion no longer holds, an error can be reported. This prevents the code from getting out of sync with the assertions.
An assertion may be used to verify that an assumption made by the programmer during the implementation of the program remains valid when the program is executed. For example, consider the followingJavacode:
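A sketch consistent with the discussion below (countNumberOfUsers stands in for application code that should never return a negative value):

    int total = countNumberOfUsers();
    if (total % 2 == 0) {
        // total is even
    } else {
        // total is odd and non-negative
        assert total % 2 == 1;
        // ...
    }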
InJava,%is theremainderoperator (modulo), and in Java, if its first operand is negative, the result can also be negative (unlike the modulo used in mathematics). Here, the programmer has assumed thattotalis non-negative, so that the remainder of a division with 2 will always be 0 or 1. The assertion makes this assumption explicit: ifcountNumberOfUsersdoes return a negative value, the program may have a bug.
A major advantage of this technique is that when an error does occur it is detected immediately and directly, rather than later through often obscure effects. Since an assertion failure usually reports the code location, one can often pin-point the error without further debugging.
Assertions are also sometimes placed at points the execution is not supposed to reach. For example, assertions could be placed at thedefaultclause of theswitchstatement in languages such asC,C++, andJava. Any case which the programmer does not handle intentionally will raise an error and the program will abort rather than silently continuing in an erroneous state. InDsuch an assertion is added automatically when aswitchstatement doesn't contain adefaultclause.
InJava, assertions have been a part of the language since version 1.4. Assertion failures result in raising anAssertionErrorwhen the program is run with the appropriate flags, without which the assert statements are ignored. InC, they are added on by the standard headerassert.hdefiningassert (assertion)as a macro that signals an error in the case of failure, usually terminating the program. InC++, bothassert.handcassertheaders provide theassertmacro.
The danger of assertions is that they may cause side effects either by changing memory data or by changing thread timing. Assertions should be implemented carefully so they cause no side effects on program code.
Assertion constructs in a language allow for easytest-driven development(TDD) without the use of a third-party library.
During thedevelopment cycle, the programmer will typically run the program with assertions enabled. When an assertion failure occurs, the programmer is immediately notified of the problem. Many assertion implementations will also halt the program's execution: this is useful, since if the program continued to run after an assertion violation occurred, it might corrupt its state and make the cause of the problem more difficult to locate. Using the information provided by the assertion failure (such as the location of the failure and perhaps astack trace, or even the full program state if the environment supportscore dumpsor if the program is running in adebugger), the programmer can usually fix the problem. Thus assertions provide a very powerful tool in debugging.
When a program is deployed toproduction, assertions are typically turned off, to avoid any overhead or side effects they may have. In some cases assertions are completely absent from deployed code, such as in C/C++ assertions via macros. In other cases, such as Java, assertions are present in the deployed code, and can be turned on in the field for debugging.[2]
Assertions may also be used to promise the compiler that a given edge condition is not actually reachable, thereby permitting certainoptimizationsthat would not otherwise be possible. In this case, disabling the assertions could actually reduce performance.
Assertions that are checked at compile time are called static assertions.
Static assertions are particularly useful in compile timetemplate metaprogramming, but can also be used in low-level languages like C by introducing illegal code if (and only if) the assertion fails.C11andC++11support static assertions directly throughstatic_assert. In earlier C versions, a static assertion can be implemented, for example, like this:
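One common formulation wraps a switch statement in a macro; here (BOOLEAN CONDITION) stands for a compile-time-constant expression, and the macro name is illustrative:

    #define COMPILE_TIME_ASSERT(pred) switch (0) { case 0: case (pred): ; }

    /* used inside a function body: */
    COMPILE_TIME_ASSERT( (BOOLEAN CONDITION) );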
If the(BOOLEAN CONDITION)part evaluates to false then the above code will not compile because the compiler will not allow twocase labelswith the same constant. The boolean expression must be a compile-time constant value, for example(sizeof(int)==4)would be a valid expression in that context. This construct does not work at file scope (i.e. not inside a function), and so it must be wrapped inside a function.
Another popular[3]way of implementing assertions in C is:
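A sketch (the array name is illustrative, and (BOOLEAN CONDITION) again stands for a compile-time-constant expression):

    static char const static_assertion[ (BOOLEAN CONDITION) ? 1 : -1 ] = { '!' };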
If the(BOOLEAN CONDITION)part evaluates to false then the above code will not compile because arrays may not have a negative length. If in fact the compiler allows a negative length then the initialization byte (the'!'part) should cause even such over-lenient compilers to complain. The boolean expression must be a compile-time constant value, for example(sizeof(int) == 4)would be a valid expression in that context.
Both of these methods require a method of constructing unique names. Modern compilers support a__COUNTER__preprocessor define that facilitates the construction of unique names, by returning monotonically increasing numbers for each compilation unit.[4]
Dprovides static assertions through the use ofstatic assert.[5]
Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoids the cost of evaluating the assertions while (assuming the assertions are free ofside effects) still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable.
Some languages, includingC,YASSandC++, can completely remove assertions at compile time using thepreprocessor.
Similarly, launching thePythoninterpreter with "-O" (for "optimize") as an argument will cause the Python code generator to not emit any bytecode for asserts.[6]
Java requires an option to be passed to the run-time engine in order toenableassertions. Absent the option, assertions are bypassed, but they always remain in the code unless optimised away by aJIT compilerat run-time orexcluded at compile timevia the programmer manually placing each assertion behind anif (false)clause.
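For example, assertions might be enabled on the command line like this (the class name is hypothetical):

    java -ea MyApplication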
Programmers can build checks into their code that are always active by bypassing or manipulating the language's normal assertion-checking mechanisms.
Assertions are distinct from routineerror-handling. Assertions document logically impossible situations and discover programming errors: if the impossible occurs, then something fundamental is clearly wrong with the program. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is unwise: assertions do not allow for recovery from errors; an assertion failure will normally halt the program's execution abruptly; and assertions are often disabled in production code. Assertions also do not display a user-friendly error message.
Consider the following example of using an assertion to handle an error:
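A sketch of such code (the allocation size is illustrative):

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        int *ptr = malloc(sizeof(int) * 10);   /* may return NULL if memory is unavailable */
        assert(ptr);                           /* aborts here on allocation failure, unless NDEBUG is defined */
        /* ... use ptr ... */
        free(ptr);
        return 0;
    }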
Here, the programmer is aware thatmallocwill return aNULLpointerif memory is not allocated. This is possible: the operating system does not guarantee that every call tomallocwill succeed. If an out of memory error occurs the program will immediately abort. Without the assertion, the program would continue running untilptrwas dereferenced, and possibly longer, depending on the specific hardware being used. So long as assertions are not disabled, an immediate exit is assured. But if a graceful failure is desired, the program has to handle the failure. For example, a server may have multiple clients, or may hold resources that will not be released cleanly, or it may have uncommitted changes to write to a datastore. In such cases it is better to fail a single transaction than to abort abruptly.
Another error is to rely on side effects of expressions used as arguments of an assertion. One should always keep in mind that assertions might not be executed at all, since their sole purpose is to verify that a condition which should always be true does in fact hold true. Consequently, if the program is considered to be error-free and released, assertions may be disabled and will no longer be evaluated.
Consider another version of the previous example:
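A sketch with the allocation moved inside the assertion (illustrative, and problematic for the reason explained below):

    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
        int *ptr;
        assert(ptr = malloc(sizeof(int) * 10));   /* the allocation is a side effect of the assertion */
        /* ... use ptr ... */
        free(ptr);
        return 0;
    }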
This might look like a smart way to assign the return value ofmalloctoptrand check if it isNULLin one step, but themalloccall and the assignment toptris a side effect of evaluating the expression that forms theassertcondition. When theNDEBUGparameter is passed to the compiler, as when the program is considered to be error-free and released, theassert()statement is removed, somalloc()isn't called, renderingptruninitialised. This could potentially result in asegmentation faultor similarnull pointererror much further down the line in program execution, causing bugs that may besporadicand/or difficult to track down. Programmers sometimes use a similar VERIFY(X) define to alleviate this problem.
Modern compilers may issue a warning when encountering the above code.[7]
In 1947 reports byvon NeumannandGoldstine[8]on their design for theIAS machine, they described algorithms using an early version offlow charts, in which they included assertions: "It may be true, that whenever C actually reaches a certain point in the flow diagram, one or more bound variables will necessarily possess certain specified values, or possess certain properties, or satisfy certain properties with each other. Furthermore, we may, at such a point, indicate the validity of these limitations. For this reason we will denote each area in which the validity of such limitations is being asserted, by a special box, which we call an assertion box."
The assertional method for proving correctness of programs was advocated byAlan Turing. In a talk "Checking a Large Routine" at Cambridge, June 24, 1949 Turing suggested: "How can one check a large routine in the sense of making sure that it's right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definiteassertionswhich can be checked individually, and from which the correctness of the whole program easily follows".[9]
|
https://en.wikipedia.org/wiki/Assertion_(computing)
|
Theinformation explosionis the rapid increase in the amount ofpublishedinformationordataand the effects of this abundance.[1]As the amount of available data grows, the problem ofmanaging the informationbecomes more difficult, which can lead toinformation overload. The Online Oxford English Dictionary indicates use of the phrase in a March 1964New Statesmanarticle.[2]The New York Timesfirst used the phrase in its editorial content in an article by Walter Sullivan on June 7, 1964, in which he described the phrase as "much discussed". (p11.)[3]The earliest known use of the phrase was in a speech about television by NBC presidentPat Weaverat the Institute of Practitioners of Advertising in London on September 27, 1955. The speech was rebroadcast on radio stationWSUIinIowa Cityand excerpted in theDaily Iowannewspaper two months later.[4]
Many sectors are seeing this rapid increase in the amount of information available such as healthcare, supermarkets, and governments.[5]Another sector that is being affected by this phenomenon is journalism. Such a profession, which in the past was responsible for the dissemination of information, may be suppressed by the overabundance of information today.[6]
Techniques to gather knowledge from an overabundance of electronic information (e.g.,data fusionmay help indata mining) have existed since the 1970s. Another common technique to deal with such amount of information isqualitative research.[7]Such approaches aim to organize the information, synthesizing, categorizing and systematizing in order to be more usable and easier to search.
A new metric being used in an attempt to characterize the growth in person-specific information is disk storage per person (DSP), measured in megabytes per person (where a megabyte is 10^6 bytes, abbreviated MB). Global DSP (GDSP) is the total rigid disk drive space (in MB) of new units sold in a year divided by the world population in that year. The GDSP metric is a crude measure of how much disk storage could possibly be used to collect person-specific data on the world population.[5] In 1983, one million fixed drives with an estimated total of 90 terabytes were sold worldwide; 30 MB drives had the largest market segment.[9] In 1996, 105 million drives, totaling 160,623 terabytes, were sold, with 1 and 2 gigabyte drives leading the industry.[10] By the year 2000, with 20 GB drives leading the industry, rigid disk drive sales for the year were projected to total 2,829,288 terabytes.
According toLatanya Sweeney, there are three trends in data gathering today:
Type 1.Expansion of the number of fields being collected, known as the “collect more” trend.
Type 2.Replace an existing aggregate data collection with a person-specific one, known as the “collect specifically” trend.
Type 3.Gather information by starting a new person-specific data collection, known as the “collect it if you can” trend.[5]
Since "information" in electronic media is often used synonymously with "data", the terminformation explosionis closely related to the concept ofdata flood(also dubbeddata deluge). Sometimes the terminformation floodis used as well. All of those basically boil down to the ever-increasing amount ofelectronic dataexchanged per time unit. A term that covers the potential negative effects of information explosion isinformation inflation.[11]The awareness about non-manageable amounts of data grew along with the advent of ever more powerful data processing since the mid-1960s.[12]
Even though the abundance of information can be beneficial on several levels, some problems may be of concern, such as privacy, legal and ethical guidelines, filtering, and data accuracy.[13] Filtering refers to finding useful information in the middle of so much data, which relates to the job of data scientists. A typical example of the need for data filtering (data mining) is in healthcare, since in the coming years patients' EHRs (Electronic Health Records) are due to be available. With so much information available, doctors will need to be able to identify patterns and select important data for the diagnosis of the patient.[13] On the other hand, according to some experts, having so much public data available makes it difficult to provide data that is actually anonymous.[5] Another point to take into account is legal and ethical guidelines, which relate to who will own the data, how frequently they are obliged to release it, and for how long.[13] With so many sources of data, another problem is accuracy. An untrusted source may be challenged by others ordering a new set of data, causing a repetition of the information.[13] According to Edward Huth, another concern is the accessibility and cost of such information.[14] The accessibility rate could be improved by either reducing the costs or increasing the utility of the information. The reduction of costs, according to the author, could be achieved by associations, which should assess which information is relevant and gather it in a more organized fashion.
As of August 2005, there were over 70 millionweb servers.[15]As of September 2007 there were over 135 million web servers.[16]
According toTechnorati, the number ofblogsdoubles about every 6 months with a total of 35.3 million blogs as of April 2006.[17]This is an example of the early stages oflogistic growth, where growth is approximatelyexponential, since blogs are a recent innovation. As the number of blogs approaches the number of possible producers (humans), saturation occurs, growth declines, and the number of blogs eventually stabilizes.
|
https://en.wikipedia.org/wiki/Information_explosion
|
Incomputer science, analgorithmis said to beasymptotically optimalif, roughly speaking, for large inputs it performsat worsta constant factor (independent of the input size) worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use ofbig-O notation.
More formally, an algorithm is asymptotically optimal with respect to a particularresourceif the problem has been proven to requireΩ(f(n))of that resource, and the algorithm has been proven to use onlyO(f(n)).
These proofs require an assumption of a particularmodel of computation, i.e., certain restrictions on operations allowable with the input data.
As a simple example, it's known that allcomparison sortsrequire at leastΩ(nlogn)comparisons in the average and worst cases.Mergesortandheapsortare comparison sorts which performO(nlogn)comparisons, so they are asymptotically optimal in this sense.
If the input data have some a priori properties which can be exploited in the construction of algorithms, in addition to comparisons, then asymptotically faster algorithms may be possible. For example, if it is known that the N objects are integers from the range [1, N], then they may be sorted in O(N) time, e.g., by the bucket sort.
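A minimal counting-sort sketch (counting sort is a simple form of bucket sort; the names are illustrative):

    def sort_integers_in_range(values):
        """Sort a list of N integers, each known to lie in 1..N, in O(N) time."""
        n = len(values)
        counts = [0] * (n + 1)      # counts[v] = number of occurrences of the value v
        for v in values:            # O(N) counting pass
            counts[v] += 1
        result = []
        for v in range(1, n + 1):   # O(N) output pass
            result.extend([v] * counts[v])
        return result

    print(sort_integers_in_range([3, 1, 4, 1, 5, 2]))   # [1, 1, 2, 3, 4, 5]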
A consequence of an algorithm being asymptotically optimal is that, for large enough inputs, no algorithm can outperform it by more than a constant factor. For this reason, asymptotically optimal algorithms are often seen as the "end of the line" in research, the attaining of a result that cannot be dramatically improved upon. Conversely, if an algorithm is not asymptotically optimal, this implies that as the input grows in size, the algorithm performs increasingly worse than the best possible algorithm.
In practice it's useful to find algorithms that perform better, even if they do not enjoy any asymptotic advantage. New algorithms may also present advantages such as better performance on specific inputs, decreased use of resources, or being simpler to describe and implement. Thus asymptotically optimal algorithms are not always the "end of the line".
Although asymptotically optimal algorithms are important theoretical results, an asymptotically optimal algorithm might not be used in a number of practical situations:
An example of an asymptotically optimal algorithm not used in practice isBernard Chazelle'slinear-timealgorithm fortriangulationof asimple polygon. Another is theresizable arraydata structure published in "Resizable Arrays in Optimal Time and Space",[1]which can index in constant time but on many machines carries a heavy practical penalty compared to ordinary array indexing.
Formally, suppose that we have a lower-bound theorem showing that a problem requires Ω(f(n)) time to solve for an instance (input) of sizen(seeBig O notation § Big Omega notationfor the definition of Ω). Then, an algorithm which solves the problem in O(f(n)) time is said to be asymptotically optimal. This can also be expressed using limits: suppose that b(n) is a lower bound on the running time, and a given algorithm takes time t(n). Then the algorithm is asymptotically optimal if:
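    \lim_{n \to \infty} \frac{t(n)}{b(n)} < \infty.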
This limit, if it exists, is always at least 1, as t(n) ≥ b(n).
Although usually applied to time efficiency, an algorithm can be said to use asymptotically optimal space, random bits, number of processors, or any other resource commonly measured using big-O notation.
Sometimes vague or implicit assumptions can make it unclear whether an algorithm is asymptotically optimal. For example, a lower bound theorem might assume a particularabstract machinemodel, as in the case of comparison sorts, or a particular organization of memory. By violating these assumptions, a new algorithm could potentially asymptotically outperform the lower bound and the "asymptotically optimal" algorithms.
The nonexistence of an asymptotically optimal algorithm is called speedup.Blum's speedup theoremshows that there exist artificially constructed problems with speedup. However, it is anopen problemwhether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is anO(nα(n)){\displaystyle O(n\alpha (n))}algorithm for findingminimum spanning trees, whereα(n){\displaystyle \alpha (n)}is the very slowly growing inverse of theAckermann function, but the best known lower bound is the trivialΩ(n){\displaystyle \Omega (n)}. Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way. Coppersmith and Winograd (1982) proved that matrix multiplication has a weak form of speed-up among a restricted class of algorithms (Strassen-type bilinear identities with lambda-computation).
|
https://en.wikipedia.org/wiki/Asymptotically_optimal_algorithm
|
Incombinatorialmathematics, aLangford pairing, also called aLangford sequence, is apermutationof the sequence of 2nnumbers 1, 1, 2, 2, ...,n,nin which the two 1s are one unit apart, the two 2s are two units apart, and more generally the two copies of each numberkarekunits apart. Langford pairings are named after C. Dudley Langford, who posed the problem of constructing them in 1958.
Langford's problemis the task of finding Langford pairings for a given value ofn.[1]
The closely related concept of aSkolem sequence[2]is defined in the same way, but instead permutes the sequence 0, 0, 1, 1, ...,n− 1,n− 1.
A Langford pairing forn= 3 is given by the sequence 2, 3, 1, 2, 1, 3.
Langford pairings exist only whenniscongruentto 0 or 3 modulo 4; for instance, there is no Langford pairing whenn= 1, 2, or 5.
The numbers of different Langford pairings forn= 1, 2, …, counting any sequence as being the same as its reversal, are
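0, 0, 1, 1, 0, 0, 26, 150, 0, 0, …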
AsKnuth (2008)describes, the problem of listing all Langford pairings for a givenncan be solved as an instance of theexact cover problem, but for largenthe number of solutions can be calculated more efficiently by algebraic methods.
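A brute-force backtracking sketch for listing the pairings (the function name is illustrative):

    def langford_pairings(n):
        """Yield every Langford pairing of 1..n as a list of length 2n."""
        seq = [0] * (2 * n)

        def place(k):
            if k == 0:
                yield list(seq)
                return
            # try every pair of free positions exactly k units apart
            for i in range(2 * n - k - 1):
                j = i + k + 1
                if seq[i] == 0 and seq[j] == 0:
                    seq[i] = seq[j] = k
                    yield from place(k - 1)
                    seq[i] = seq[j] = 0

        yield from place(n)

    # n = 3: the two results are reversals of each other, i.e. one pairing up to reversal
    print(list(langford_pairings(3)))   # [[3, 1, 2, 1, 3, 2], [2, 3, 1, 2, 1, 3]]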
Skolem (1957)used Skolem sequences to constructSteiner triple systems.
In the 1960s, E. J. Groth used Langford pairings to construct circuits for integermultiplication.[3]
|
https://en.wikipedia.org/wiki/Langford_pairing
|
Acyclic redundancy check(CRC) is anerror-detecting codecommonly used in digitalnetworksand storage devices to detect accidental changes to digital data. Blocks of data entering these systems get a shortcheck valueattached, based on the remainder of apolynomial divisionof their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used forerror correction(seebitfilters).[1]
CRCs are so called because thecheck(data verification) value is aredundancy(it expands the message without addinginformation) and thealgorithmis based oncycliccodes. CRCs are popular because they are simple to implement in binaryhardware, easy to analyze mathematically, and particularly good at detecting common errors caused bynoisein transmission channels. Because the check value has a fixed length, thefunctionthat generates it is occasionally used as ahash function.
CRCs are based on the theory ofcyclicerror-correcting codes. The use ofsystematiccyclic codes, which encode messages by adding a fixed-length check value, for the purpose of error detection in communication networks, was first proposed byW. Wesley Petersonin 1961.[2]Cyclic codes are not only simple to implement but have the benefit of being particularly well suited for the detection ofburst errors: contiguous sequences of erroneous data symbols in messages. This is important because burst errors are common transmission errors in manycommunication channels, including magnetic and optical storage devices. Typically ann-bit CRC applied to a data block of arbitrary length will detect any single error burst not longer thannbits, and the fraction of all longer error bursts that it will detect is approximately(1 − 2−n).
Specification of a CRC code requires definition of a so-calledgenerator polynomial. This polynomial becomes thedivisorin apolynomial long division, which takes the message as thedividendand in which thequotientis discarded and theremainderbecomes the result. The important caveat is that the polynomialcoefficientsare calculated according to the arithmetic of afinite field, so the addition operation can always be performed bitwise-parallel (there is no carry between digits).
In practice, all commonly used CRCs employ the finite field of two elements,GF(2). The two elements are usually called 0 and 1, comfortably matching computer architecture.
A CRC is called ann-bit CRC when its check value isnbits long. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, which means it hasn+ 1terms. In other words, the polynomial has a length ofn+ 1; its encoding requiresn+ 1bits. Note that most polynomial specifications either drop theMSborLSb, since they are always 1. The CRC and associated polynomial typically have a name of the form CRC-n-XXX as in thetablebelow.
The simplest error-detection system, theparity bit, is in fact a 1-bit CRC: it uses the generator polynomialx+ 1(two terms),[3]and has the name CRC-1.
A CRC-enabled device calculates a short, fixed-length binary sequence, known as thecheck valueorCRC, for each block of data to be sent or stored and appends it to the data, forming acodeword.
When a codeword is received or read, the device either compares its check value with one freshly calculated from the data block, or equivalently, performs a CRC on the whole codeword and compares the resulting check value with an expectedresidueconstant.
If the CRC values do not match, then the block contains a data error.
The device may take corrective action, such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is inherent in the nature of error-checking).[4]
CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of theintegrityof messages delivered. However, they are not suitable for protecting against intentional alteration of data.
Firstly, as there is no authentication, an attacker can edit a message and recompute the CRC without the substitution being detected. When stored alongside the data, CRCs and cryptographic hash functions by themselves do not protect againstintentionalmodification of data. Any application that requires protection against such attacks must use cryptographic authentication mechanisms, such asmessage authentication codesordigital signatures(which are commonly based oncryptographic hashfunctions).
Secondly, unlike cryptographic hash functions, CRC is an easily reversible function, which makes it unsuitable for use in digital signatures.[5]
Thirdly, CRC satisfies a relation similar to that of alinear function(or more accurately, anaffine function):[6]
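    CRC(x ⊕ y) = CRC(x) ⊕ CRC(y) ⊕ c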
wherec{\displaystyle c}depends on the length ofx{\displaystyle x}andy{\displaystyle y}. This can be also stated as follows, wherex{\displaystyle x},y{\displaystyle y}andz{\displaystyle z}have the same length
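    CRC(x ⊕ y ⊕ z) = CRC(x) ⊕ CRC(y) ⊕ CRC(z);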
as a result, even if the CRC is encrypted with astream cipherthat usesXORas its combining operation (ormodeofblock cipherwhich effectively turns it into a stream cipher, such as OFB or CFB), both the message and the associated CRC can be manipulated without knowledge of the encryption key; this was one of the well-known design flaws of theWired Equivalent Privacy(WEP) protocol.[7]
To compute ann-bit binary CRC, line the bits representing the input in a row, and position the (n+ 1)-bit pattern representing the CRC's divisor (called a "polynomial") underneath the left end of the row.
In this example, we shall encode 14 bits of message with a 3-bit CRC, with a polynomial x³ + x + 1. The polynomial is written in binary as the coefficients; a 3rd-degree polynomial has 4 coefficients (1x³ + 0x² + 1x + 1). In this case, the coefficients are 1, 0, 1 and 1. The result of the calculation is 3 bits long, which is why it is called a 3-bit CRC. However, you need 4 bits to explicitly state the polynomial.
Start with the message to be encoded:
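    11010011101100   (an illustrative 14-bit message)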
This is first padded with zeros corresponding to the bit lengthnof the CRC. This is done so that the resulting code word is insystematicform. Here is the first calculation for computing a 3-bit CRC:
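    11010011101100 000 <--- input right padded by 3 bits
    1011               <--- divisor (4 bits) = x³ + x + 1
    ------------------
    01100011101100 000 <--- result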
The algorithm acts on the bits directly above the divisor in each step. The result for that iteration is the bitwise XOR of the polynomial divisor with the bits above it. The bits not above the divisor are simply copied directly below for that step. The divisor is then shifted right to align with the highest remaining 1 bit in the input, and the process is repeated until the divisor reaches the right-hand end of the input row. Here is the entire calculation:
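    11010011101100 000 <--- input right padded by 3 bits
    1011               <--- divisor
    01100011101100 000 <--- result (the first four bits are XORed with the divisor; the rest are copied)
     1011              <--- divisor shifted to the next 1
    00111011101100 000
      1011
    00010111101100 000
       1011
    00000001101100 000 <--- the divisor jumps to align with the next 1 in the dividend
           1011
    00000000110100 000
            1011
    00000000011000 000
             1011
    00000000001110 000
              1011
    00000000000101 000
               101 1
    ------------------
    00000000000000 100 <--- remainder (3 bits): the CRC check value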
Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. Thesenbits are the remainder of the division step, and will also be the value of the CRC function (unless the chosen CRC specification calls for some postprocessing).
The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors.
The followingPythoncode outlines a function which will return the initial CRC remainder for a chosen input and polynomial, with either 1 or 0 as the initial padding. Note that this code works with string inputs rather than raw numbers:
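A sketch of such a function, together with a companion check function (the names crc_remainder and crc_check are illustrative):

    def crc_remainder(input_bitstring, polynomial_bitstring, initial_filler):
        """Calculate the CRC remainder of a string of bits using a chosen polynomial.
        initial_filler should be '1' or '0'."""
        polynomial_bitstring = polynomial_bitstring.lstrip('0')
        len_input = len(input_bitstring)
        initial_padding = (len(polynomial_bitstring) - 1) * initial_filler
        input_padded_array = list(input_bitstring + initial_padding)
        while '1' in input_padded_array[:len_input]:
            cur_shift = input_padded_array.index('1')
            for i in range(len(polynomial_bitstring)):
                # XOR the divisor into the dividend at the current position
                xor_bit = polynomial_bitstring[i] != input_padded_array[cur_shift + i]
                input_padded_array[cur_shift + i] = '1' if xor_bit else '0'
        return ''.join(input_padded_array)[len_input:]

    def crc_check(input_bitstring, polynomial_bitstring, check_value):
        """Return True if the appended check_value leaves a zero remainder."""
        polynomial_bitstring = polynomial_bitstring.lstrip('0')
        len_input = len(input_bitstring)
        input_padded_array = list(input_bitstring + check_value)
        while '1' in input_padded_array[:len_input]:
            cur_shift = input_padded_array.index('1')
            for i in range(len(polynomial_bitstring)):
                xor_bit = polynomial_bitstring[i] != input_padded_array[cur_shift + i]
                input_padded_array[cur_shift + i] = '1' if xor_bit else '0'
        return '1' not in ''.join(input_padded_array)[len_input:]

    # With the illustrative message from the worked example above:
    print(crc_remainder('11010011101100', '1011', '0'))   # '100'
    print(crc_check('11010011101100', '1011', '100'))     # True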
Mathematical analysis of this division-like process reveals how to select a divisor that guarantees good error-detection properties. In this analysis, the digits of the bit strings are taken as the coefficients of a polynomial in some variablex—coefficients that are elements of the finite fieldGF(2)(the integers modulo 2, i.e. either a zero or a one), instead of more familiar numbers. The set of binary polynomials is a mathematicalring.
The selection of the generator polynomial is the most important part of implementing the CRC algorithm. The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities.
The most important attribute of the polynomial is its length (largest degree(exponent) +1 of any one term in the polynomial), because of its direct influence on the length of the computed check value.
The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64).[3]
A CRC is called ann-bit CRC when its check value isn-bits. For a givenn, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degreen, and hencen+ 1terms (the polynomial has a length ofn+ 1). The remainder has lengthn. The CRC has a name of the form CRC-n-XXX.
The design of the CRC polynomial depends on the maximum total length of the block to be protected (data + CRC bits), the desired error protection features, and the type of resources for implementing the CRC, as well as the desired performance. A common misconception is that the "best" CRC polynomials are derived from eitherirreducible polynomialsor irreducible polynomials times the factor1 +x, which adds to the code the ability to detect all errors affecting an odd number of bits.[8]In reality, all the factors described above should enter into the selection of the polynomial and may lead to a reducible polynomial. However, choosing a reducible polynomial will result in a certain proportion of missed errors, due to the quotient ring havingzero divisors.
The advantage of choosing aprimitive polynomialas the generator for a CRC code is that the resulting code has maximal total block length in the sense that all 1-bit errors within that block length have different remainders (also calledsyndromes) and therefore, since the remainder is a linear function of the block, the code can detect all 2-bit errors within that block length. Ifr{\displaystyle r}is the degree of the primitive generator polynomial, then the maximal total block length is2r−1{\displaystyle 2^{r}-1}, and the associated code is able to detect any single-bit or double-bit errors.[9]However, if we use the generator polynomialg(x)=p(x)(1+x){\displaystyle g(x)=p(x)(1+x)}, wherep{\displaystyle p}is a primitive polynomial of degreer−1{\displaystyle r-1}, then the maximal total block length is2r−1−1{\displaystyle 2^{r-1}-1}, and the code is able to detect single, double, triple and any odd number of errors.
A polynomialg(x){\displaystyle g(x)}that admits other factorizations may be chosen then so as to balance the maximal total blocklength with a desired error detection power. TheBCH codesare a powerful class of such polynomials. They subsume the two examples above. Regardless of the reducibility properties of a generator polynomial of degreer, if it includes the "+1" term, the code will be able to detect error patterns that are confined to a window ofrcontiguous bits. These patterns are called "error bursts".
The concept of the CRC as an error-detecting code gets complicated when an implementer or standards committee uses it to design a practical system. Here are some of the complications:
These complications mean that there are three common ways to express a polynomial as an integer: the first two, which are mirror images in binary, are the constants found in code; the third is the number found in Koopman's papers. In each case, one term is omitted. So the polynomial x⁴ + x + 1 may be transcribed as:
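1. 0x3 = 0b0011, representing x⁴ + (0x³ + 0x² + 1x + 1) (MSB-first code);
2. 0xC = 0b1100, representing (1 + 1x + 0x² + 0x³) + x⁴ (LSB-first, or "reversed", code);
3. 0x9 = 0b1001, representing (1x⁴ + 0x³ + 0x² + 1x) + 1 (Koopman notation).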
In the table below they are shown as:
CRCs inproprietary protocolsmight beobfuscatedby using a non-trivial initial value and a final XOR, but these techniques do not add cryptographic strength to the algorithm and can bereverse engineeredusing straightforward methods.[10]
Numerous varieties of cyclic redundancy checks have been incorporated intotechnical standards. By no means does one algorithm, or one of each degree, suit every purpose; Koopman and Chakravarty recommend selecting a polynomial according to the application requirements and the expected distribution of message lengths.[11]The number of distinct CRCs in use has confused developers, a situation which authors have sought to address.[8]There are three polynomials reported for CRC-12,[11]twenty-two conflicting definitions of CRC-16, and seven of CRC-32.[12]
The polynomials commonly applied are not the most efficient ones possible. Since 1993, Koopman, Castagnoli and others have surveyed the space of polynomials between 3 and 64 bits in size,[11][13][14][15]finding examples that have much better performance (in terms ofHamming distancefor a given message size) than the polynomials of earlier protocols, and publishing the best of these with the aim of improving the error detection capacity of future standards.[14]In particular,iSCSIandSCTPhave adopted one of the findings of this research, the CRC-32C (Castagnoli) polynomial.
The design of the 32-bit polynomial most commonly used by standards bodies, CRC-32-IEEE, was the result of a joint effort for theRome Laboratoryand the Air Force Electronic Systems Division by Joseph Hammond, James Brown and Shyan-Shiang Liu of theGeorgia Institute of Technologyand Kenneth Brayer of theMitre Corporation. The earliest known appearances of the 32-bit polynomial were in their 1975 publications: Technical Report 2956 by Brayer for Mitre, published in January and released for public dissemination throughDTICin August,[16]and Hammond, Brown and Liu's report for the Rome Laboratory, published in May.[17]Both reports contained contributions from the other team. During December 1975, Brayer and Hammond presented their work in a paper at the IEEE National Telecommunications Conference: the IEEE CRC-32 polynomial is the generating polynomial of aHamming codeand was selected for its error detection performance.[18]Even so, the Castagnoli CRC-32C polynomial used in iSCSI or SCTP matches its performance on messages from 58 bits to 131 kbits, and outperforms it in several size ranges including the two most common sizes of Internet packet.[14]TheITU-TG.hnstandard also uses CRC-32C to detect errors in the payload (although it uses CRC-16-CCITT forPHY headers).
CRC-32C computation is implemented in hardware as an operation (CRC32) ofSSE4.2instruction set, first introduced inIntelprocessors'Nehalemmicroarchitecture.ARMAArch64architecture also provides hardware acceleration for both CRC-32 and CRC-32C operations.
The table below lists only the polynomials of the various algorithms in use. Variations of a particular protocol can impose pre-inversion, post-inversion and reversed bit ordering as described above. For example, the CRC32 used in Gzip and Bzip2 use the same polynomial, but Gzip employs reversed bit ordering, while Bzip2 does not.[12]Note that even parity polynomials inGF(2)with degree greater than 1 are never primitive. Even parity polynomial marked as primitive in this table represent a primitive polynomial multiplied by(x+1){\displaystyle \left(x+1\right)}. The most significant bit of a polynomial is always 1, and is not shown in the hex representations.
|
https://en.wikipedia.org/wiki/Cyclic_redundancy_check
|
Anopen service interface definition(OSID) is a programmatic interface specification describing a service. These interfaces are specified by theOpen Knowledge Initiative(OKI) to implement aservice-oriented architecture(SOA) to achieveinteroperabilityamong applications across a varied base of underlying and changing technologies.
To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer fromprotocols, server identities, and utility libraries that are in the domain to a service provider resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments.
OSIDs assist insoftware designand development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider andbelowthe interface, there isn't an assumption that every service provider implement a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software which provide a means of organizing design and development activities for simplifiedproject management.
OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achievesreusabilityat a high level (a service level) and also serves to easily scale software written for smaller more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means ofabstraction. When all the OSID providers implement the same service, this is called anadapterpattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface without the modification to the application.
|
https://en.wikipedia.org/wiki/Filing_Open_Service_Interface_Definition
|
Inmathematics, theSelberg trace formula, introduced bySelberg (1956), is an expression for the character of theunitary representationof aLie groupGon the spaceL2(Γ\G)ofsquare-integrable functions, whereΓis a cofinitediscrete group. The character is given by the trace of certain functions onG.
The simplest case is whenΓiscocompact, when the representation breaks up into discrete summands. Here the trace formula is an extension of theFrobenius formulafor the character of aninduced representationof finite groups. WhenΓis the cocompact subgroupZof the real numbersG=R, the Selberg trace formula is essentially thePoisson summation formula.
The case whenΓ\Gis not compact is harder, because there is acontinuous spectrum, described usingEisenstein series. Selberg worked out the non-compact case whenGis the groupSL(2,R); the extension to higher rank groups is theArthur–Selberg trace formula.
WhenΓis the fundamental group of aRiemann surface, the Selberg trace formula describes the spectrum of differential operators such as theLaplacianin terms of geometric data involving the lengths of geodesics on the Riemann surface. In this case the Selberg trace formula is formally similar to theexplicit formulasrelating the zeros of theRiemann zeta functionto prime numbers, with the zeta zeros corresponding to eigenvalues of the Laplacian, and the primes corresponding to geodesics. Motivated by the analogy, Selberg introduced theSelberg zeta functionof a Riemann surface, whose analytic properties are encoded by the Selberg trace formula.
Cases of particular interest include those for which the space is acompact Riemann surfaceS. The initial publication in 1956 ofAtle Selbergdealt with this case, itsLaplaciandifferential operator and its powers. The traces of powers of a Laplacian can be used to define theSelberg zeta function. The interest of this case was the analogy between the formula obtained, and theexplicit formulaeofprime numbertheory. Here theclosed geodesicsonSplay the role of prime numbers.
At the same time, interest in the traces ofHecke operatorswas linked to theEichler–Selberg trace formula, of Selberg andMartin Eichler, for a Hecke operator acting on a vector space ofcusp formsof a given weight, for a givencongruence subgroupof themodular group. Here the trace of the identity operator is the dimension of the vector space, i.e. the dimension of the space of modular forms of a given type: a quantity traditionally calculated by means of theRiemann–Roch theorem.
The trace formula has applications toarithmetic geometryandnumber theory. For instance, using the trace theorem,Eichler and Shimuracalculated theHasse–Weil L-functionsassociated tomodular curves;Goro Shimura's methods by-passed the analysis involved in the trace formula. The development ofparabolic cohomology(fromEichler cohomology) provided a purely algebraic setting based ongroup cohomology, taking account of thecuspscharacteristic of non-compact Riemann surfaces and modular curves.
The trace formula also has purelydifferential-geometricapplications. For instance, by a result of Buser, thelength spectrumof aRiemann surfaceis an isospectral invariant, essentially by the trace formula.
A compact hyperbolic surfaceXcan be written as the space of orbitsΓ∖H,{\displaystyle \Gamma \backslash \mathbf {H} ,}whereΓis a subgroup ofPSL(2,R), andHis theupper half plane, andΓacts onHbylinear fractional transformations.
The Selberg trace formula for this case is easier than the general case because the surface is compact so there is no continuous spectrum, and the groupΓhas no parabolic or elliptic elements (other than the identity).
Then the spectrum for theLaplace–Beltrami operatoronXis discrete and real, since the Laplace operator is self adjoint with compactresolvent; that is0=μ0<μ1≤μ2≤⋯{\displaystyle 0=\mu _{0}<\mu _{1}\leq \mu _{2}\leq \cdots }where the eigenvaluesμncorrespond toΓ-invariant eigenfunctionsuinC∞(H)of the Laplacian; in other words{u(γz)=u(z),∀γ∈Γy2(uxx+uyy)+μnu=0.{\displaystyle {\begin{cases}u(\gamma z)=u(z),\qquad \forall \gamma \in \Gamma \\y^{2}\left(u_{xx}+u_{yy}\right)+\mu _{n}u=0.\end{cases}}}
Using the variable substitutionμ=s(1−s),s=12+ir{\displaystyle \mu =s(1-s),\qquad s={\tfrac {1}{2}}+ir}the eigenvalues are labeledrn,n≥0.{\displaystyle r_{n},n\geq 0.}
Then theSelberg trace formulais given by∑n=0∞h(rn)=μ(X)4π∫−∞∞rh(r)tanh(πr)dr+∑{T}logN(T0)N(T)12−N(T)−12g(logN(T)).{\displaystyle \sum _{n=0}^{\infty }h(r_{n})={\frac {\mu (X)}{4\pi }}\int _{-\infty }^{\infty }r\,h(r)\tanh(\pi r)\,dr+\sum _{\{T\}}{\frac {\log N(T_{0})}{N(T)^{\frac {1}{2}}-N(T)^{-{\frac {1}{2}}}}}g(\log N(T)).}
The right hand side is a sum over conjugacy classes of the groupΓ, with the first term corresponding to the identity element and the remaining terms forming a sum over the other conjugacy classes{T}(which are all hyperbolic in this case). The functionhhas to satisfy the following:
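Roughly, h must be even, so that h(r) = h(−r); holomorphic in a strip |Im(r)| ≤ 1/2 + δ for some δ > 0; and satisfy a decay bound |h(r)| ≤ M(1 + |Re(r)|)^(−2−δ) in that strip, for some constant M.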
The functiongis the Fourier transform ofh, that is,h(r)=∫−∞∞g(u)eirudu.{\displaystyle h(r)=\int _{-\infty }^{\infty }g(u)e^{iru}\,du.}
LetGbe a unimodular locally compact group, andΓ{\displaystyle \Gamma }a discrete cocompact subgroup ofGandϕ{\displaystyle \phi }a compactly supported continuous function onG. The trace formula in this setting is the following equality:∑γ∈{Γ}aΓG(γ)∫Gγ∖Gϕ(x−1γx)dx=∑π∈G^aΓG(π)trπ(ϕ){\displaystyle \sum _{\gamma \in \{\Gamma \}}a_{\Gamma }^{G}(\gamma )\int _{G^{\gamma }\setminus G}\phi (x^{-1}\gamma x)\,dx=\sum _{\pi \in {\widehat {G}}}a_{\Gamma }^{G}(\pi )\operatorname {tr} \pi (\phi )}where{Γ}{\displaystyle \{\Gamma \}}is the set of conjugacy classes inΓ{\displaystyle \Gamma },G^{\displaystyle {\widehat {G}}}is theunitary dualofGand:
The left-hand side of the formula is called thegeometric sideand the right-hand side thespectral side. The terms∫Gγ∖Gϕ(x−1γx)dx{\displaystyle \int _{G^{\gamma }\setminus G}\phi (x^{-1}\gamma x)\,dx}areorbital integrals.
Define the following operator on compactly supported functions onΓ∖G{\displaystyle \Gamma \backslash G}:R(ϕ)=∫Gϕ(x)R(x)dx,{\displaystyle R(\phi )=\int _{G}\phi (x)R(x)\,dx,}It extends continuously toL2(Γ∖G){\displaystyle L^{2}(\Gamma \setminus G)}and forf∈L2(Γ∖G){\displaystyle f\in L^{2}(\Gamma \setminus G)}we have:(R(ϕ)f)(x)=∫Gϕ(y)f(xy)dy=∫Γ∖G(∑γ∈Γϕ(x−1γy))f(y)dy{\displaystyle (R(\phi )f)(x)=\int _{G}\phi (y)f(xy)\,dy=\int _{\Gamma \setminus G}\left(\sum _{\gamma \in \Gamma }\phi (x^{-1}\gamma y)\right)f(y)\,dy}after a change of variables. AssumingΓ∖G{\displaystyle \Gamma \setminus G}is compact, the operatorR(ϕ){\displaystyle R(\phi )}istrace-classand the trace formula is the result of computing its trace in two ways as explained below.[1]
The trace ofR(ϕ){\displaystyle R(\phi )}can be expressed as the integral of the kernelK(x,y)=∑γ∈Γϕ(x−1γy){\displaystyle K(x,y)=\sum _{\gamma \in \Gamma }\phi (x^{-1}\gamma y)}along the diagonal, that is:trR(ϕ)=∫Γ∖G∑γ∈Γϕ(x−1γx)dx.{\displaystyle \operatorname {tr} R(\phi )=\int _{\Gamma \setminus G}\sum _{\gamma \in \Gamma }\phi (x^{-1}\gamma x)\,dx.}Let{Γ}{\displaystyle \{\Gamma \}}denote a collection of representatives of conjugacy classes inΓ{\displaystyle \Gamma }, andΓγ{\displaystyle \Gamma ^{\gamma }}andGγ{\displaystyle G^{\gamma }}the respective centralizers ofγ{\displaystyle \gamma }.
Then the above integral can, after manipulation, be writtentrR(ϕ)=∑γ∈{Γ}aΓG(γ)∫Gγ∖Gϕ(x−1γx)dx.{\displaystyle \operatorname {tr} R(\phi )=\sum _{\gamma \in \{\Gamma \}}a_{\Gamma }^{G}(\gamma )\int _{G^{\gamma }\setminus G}\phi (x^{-1}\gamma x)\,dx.}This gives thegeometric sideof the trace formula.
Thespectral sideof the trace formula comes from computing the trace ofR(ϕ){\displaystyle R(\phi )}using the decomposition of the regular representation ofG{\displaystyle G}into its irreducible components. ThustrR(ϕ)=∑π∈G^aΓG(π)trπ(ϕ){\displaystyle \operatorname {tr} R(\phi )=\sum _{\pi \in {\hat {G}}}a_{\Gamma }^{G}(\pi )\operatorname {tr} \pi (\phi )}whereG^{\displaystyle {\hat {G}}}is the set of irreducible unitary representations ofG{\displaystyle G}(recall that the positive integeraΓG(π){\displaystyle a_{\Gamma }^{G}(\pi )}is the multiplicity ofπ{\displaystyle \pi }in the unitary representationR{\displaystyle R}onL2(Γ∖G){\displaystyle L^{2}(\Gamma \setminus G)}).
WhenG{\displaystyle G}is a semisimple Lie group with a maximal compact subgroupK{\displaystyle K}andX=G/K{\displaystyle X=G/K}is the associatedsymmetric spacethe conjugacy classes inΓ{\displaystyle \Gamma }can be described in geometric terms using the compact Riemannian manifold (more generally orbifold)Γ∖X{\displaystyle \Gamma \backslash X}. The orbital integrals and the traces in irreducible summands can then be computed further and in particular one can recover the case of the trace formula for hyperbolic surfaces in this way.
The general theory ofEisenstein serieswas largely motivated by the requirement to separate out thecontinuous spectrum, which is characteristic of the non-compact case.[2]
The trace formula is often given for algebraic groups over the adeles rather than for Lie groups, because this makes the corresponding discrete subgroupΓinto an algebraic group over a field which is technically easier to work with. The case of SL2(C) is discussed inGel'fand, Graev & Pyatetskii-Shapiro (1990)andElstrodt, Grunewald & Mennicke (1998). Gel'fand et al also treat SL2(F) whereFis a locally compact topological field withultrametric norm, so a finite extension of thep-adic numbersQpor of theformal Laurent seriesFq((T)); they also handle the adelic case in characteristic 0, combining all completionsRandQpof therational numbersQ.
Contemporary successors of the theory are theArthur–Selberg trace formulaapplying to the case of general semisimpleG, and the many studies of the trace formula in theLanglands philosophy(dealing with technical issues such asendoscopy). The Selberg trace formula can be derived from the Arthur–Selberg trace formula with some effort.
|
https://en.wikipedia.org/wiki/Selberg_trace_formula
|
Thex86instruction setrefers to the set of instructions thatx86-compatiblemicroprocessorssupport. The instructions are usually part of anexecutableprogram, often stored as acomputer fileand executed on the processor.
The x86 instruction set has been extended several times, introducing widerregistersand datatypes as well as new functionality.[1]
Below is the full8086/8088instruction set of Intel (81 instructions total).[2]These instructions are also available in 32-bit mode, in which they operate on 32-bit registers (eax,ebx, etc.) and values instead of their 16-bit (ax,bx, etc.) counterparts. The updated instruction set is grouped according to architecture (i186,i286,i386,i486,i586/i686) and is referred to as (32-bit)x86and (64-bit)x86-64(also known asAMD64).
This is the original instruction set. In the 'Notes' column,rmeansregister,mmeansmemory addressandimmmeansimmediate(i.e. a value).
Note that since the lower half is the same for unsigned and signed multiplication, this version of the instruction can be used for unsigned multiplication as well.
The new instructions added in 80286 add support for x86protected mode. Some but not all of the instructions are available inreal modeas well.
The TSS (Task State Segment) specified by the 16-bit argument is marked busy, but a task switch is not done.
The 80386 added support for 32-bit operation to the x86 instruction set. This was done by widening the general-purpose registers to 32 bits and introducing the concepts ofOperandSizeandAddressSize– most instruction forms that would previously take 16-bit data arguments were given the ability to take 32-bit arguments by setting their OperandSize to 32 bits, and instructions that could take 16-bit address arguments were given the ability to take 32-bit address arguments by setting their AddressSize to 32 bits. (Instruction forms that work on 8-bit data continue to be 8-bit regardless of OperandSize. Using a data size of 16 bits will cause only the bottom 16 bits of the 32-bit general-purpose registers to be modified – the top 16 bits are left unchanged.)
The default OperandSize and AddressSize to use for each instruction is given by the D bit of thesegment descriptorof the current code segment -D=0makes both 16-bit,D=1makes both 32-bit. Additionally, they can be overridden on a per-instruction basis with two new instruction prefixes that were introduced in the 80386:
The 80386 also introduced the two new segment registersFSandGSas well as the x86control,debugandtest registers.
The new instructions introduced in the 80386 can broadly be subdivided into two classes:
For instruction forms where the operand size can be inferred from the instruction's arguments (e.g.ADD EAX,EBXcan be inferred to have a 32-bit OperandSize due to its use of EAX as an argument), new instruction mnemonics are not needed and not provided.
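As a concrete illustration of the prefix mechanism, the sketch below spells out the machine-code bytes for the register form of ADD (opcode 01 /r). The byte values follow the standard encoding rules rather than anything stated in this article, so treat them as an assumption rather than a reference encoding.

```python
# Hedged sketch: the same opcode bytes decode to different operand sizes depending
# on the code segment's default OperandSize (the D bit) and the 66h override prefix.
ADD_RM_R = 0x01          # ADD r/m16/32, r16/32
MODRM_EAX_EBX = 0xD8     # mod=11 (register), reg=EBX/BX, rm=EAX/AX

# In a 32-bit code segment (D=1):
add_eax_ebx = bytes([ADD_RM_R, MODRM_EAX_EBX])          # ADD EAX, EBX
add_ax_bx   = bytes([0x66, ADD_RM_R, MODRM_EAX_EBX])    # 66h prefix -> 16-bit: ADD AX, BX

# In a 16-bit code segment (D=0) the meanings are swapped:
#   01 D8    -> ADD AX, BX
#   66 01 D8 -> ADD EAX, EBX
print(add_eax_ebx.hex(), add_ax_bx.hex())
```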
Mainly used to prepare a dividend for the 32-bitIDIV(signed divide) instruction.
Instruction is serializing.
Second operand specifies which bit of the first operand to test. The bit to test is copied toEFLAGS.CF.
Second operand specifies which bit of the first operand to test and set.
Second operand specifies which bit of the first operand to test and clear.
Second operand specifies which bit of the first operand to test and toggle.
These jumps differ from the older variants of conditional jumps in that they accept a 16/32-bit offset rather than just an 8-bit offset.
Offset part is stored in destination register argument, segment part in FS/GS/SS segment register as indicated by the instruction mnemonic.[i]
Moves to theCR3control register are serializing and will flush theTLB.[l]
On Pentium and later processors, moves to theCR0andCR4control registers are also serializing.[m]
On Pentium and later processors, moves to the DR0-DR7 debug registers are serializing.
Performs software interrupt #1 if executed when not using in-circuit emulation.[p]
Performs same operation asMOVif executed when not doing in-circuit emulation.[q]
UsingBSWAPwith a 16-bit register argument produces an undefined result.[a]
Instruction atomic only if used withLOCKprefix.
Instruction atomic only if used withLOCKprefix.
Instruction is serializing.
Integer/system instructions that were not present in the basic 80486 instruction set, but were added in various x86 processors prior to the introduction of SSE. (Discontinued instructionsare not included.)
Instruction is, with some exceptions, serializing.[c]
Instruction is serializing.
Instruction is serializing, and causes a mandatory #VMEXIT under virtualization.
Support forCPUIDcan be checked by toggling bit 21 ofEFLAGS(EFLAGS.ID) – if this bit can be toggled,CPUIDis present.
Instruction atomic only if used withLOCKprefix.[k]
In early processors, the TSC was a cycle counter, incrementing by 1 for each clock cycle (which could cause its rate to vary on processors that could change clock speed at runtime) – in later processors, it increments at a fixed rate that doesn't necessarily match the CPU clock speed.[n]
Other than AMD K7/K8, broadly unsupported in non-Intel processors released before 2005.[v][60]
These instructions are provided for software testing to explicitly generate invalid opcodes. The opcodes for these instructions are reserved for this purpose.
WRMSRto the x2APIC ICR (Interrupt Command Register; MSR830h) is commonly used to produce an IPI (Inter-processor interrupt) - on Intel[40]but not AMD[41]CPUs, such an IPI can be reordered before an older memory store.
For cases where there is a need to use more than 9 bytes of NOP padding, it is recommended to use multiple NOPs.
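The sketch below builds such padding from the multi-byte NOP encodings commonly recommended in Intel's optimization guidance; the exact byte sequences are recalled from that guidance rather than taken from this article, so treat them as an assumption.

```python
# Hedged sketch: commonly cited recommended NOP encodings of 1 to 9 bytes, combined
# to produce an arbitrary amount of padding (multiple NOPs once 9 bytes is exceeded).
RECOMMENDED_NOPS = {
    1: bytes([0x90]),
    2: bytes([0x66, 0x90]),
    3: bytes([0x0F, 0x1F, 0x00]),
    4: bytes([0x0F, 0x1F, 0x40, 0x00]),
    5: bytes([0x0F, 0x1F, 0x44, 0x00, 0x00]),
    6: bytes([0x66, 0x0F, 0x1F, 0x44, 0x00, 0x00]),
    7: bytes([0x0F, 0x1F, 0x80, 0x00, 0x00, 0x00, 0x00]),
    8: bytes([0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00]),
    9: bytes([0x66, 0x0F, 0x1F, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00]),
}

def nop_padding(n: int) -> bytes:
    """Return n bytes of padding, using at most 9 bytes per individual NOP."""
    out = bytearray()
    while n > 0:
        k = min(n, 9)
        out += RECOMMENDED_NOPS[k]
        n -= k
    return bytes(out)

print(nop_padding(23).hex())
```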
These instructions can only be encoded in 64-bit mode. They fall into four groups:
Most instructions with a 64-bit operand size encode this using a REX.W prefix; in the absence of the REX.W prefix, the corresponding instruction with 32-bit operand size is encoded. This mechanism also applies to most other instructions with 32-bit operand size. These are not listed here, as they do not gain a new mnemonic in Intel syntax when used with a 64-bit operand size.
Bit manipulation instructions. For all of theVEX-encodedinstructions defined by BMI1 and BMI2, the operand size may be 32 or 64 bits, controlled by the VEX.W bit – none of these instructions are available in 16-bit variants. The VEX-encoded instructions are not available in Real Mode and Virtual-8086 mode - other than that, the bit manipulation instructions are available in all operating modes on supported CPUs.
Intel CET (Control-Flow Enforcement Technology) adds two distinct features to help protect against security exploits such asreturn-oriented programming: ashadow stack(CET_SS), andindirect branch tracking(CET_IBT).
The XSAVE instruction set extensions are designed to save/restore CPU extended state (typically for the purpose ofcontext switching) in a manner that can be extended to cover new instruction set extensions without the OS context-switching code needing to understand the specifics of the new extensions. This is done by defining a series ofstate-components, each with a size and offset within a given save area, and each corresponding to a subset of the state needed for one CPU extension or another. TheEAX=0DhCPUIDleaf is used to provide information about which state-components the CPU supports and what their sizes/offsets are, so that the OS can reserve the proper amount of space and set the associated enable-bits.
Instruction is serializing on AMD but not Intel CPUs.
The C-states are processor-specific power states, which do not necessarily correspond 1:1 toACPI C-states.
Any unsupported value in EAX causes an #UD exception.
Any unsupported value in the register argument causes a #GP exception.
Depending on function, the instruction may return data in RBX and/or an error code in EAX.
Depending on function, the instruction may return data/status information in EAX and/or RCX.
Instruction returns status information in EAX.
If the instruction fails, it will set EFLAGS.ZF=1 and return an error code in EAX. If it is successful, it sets EFLAGS.ZF=0 and EAX=0.
The register argument to theUMWAITandTPAUSEinstructions specifies extra flags to control the operation of the instruction.[q]
PopsRIP,RFLAGSandRSPoff the stack, in that order.[u]
Part of Intel DSA (Data Streaming AcceleratorArchitecture).[126]
The instruction differs from the olderWRMSRinstruction in that it is not serializing.
The instruction is not serializing.
Part of Intel TSE (Total Storage Encryption), and available in 64-bit mode only.
If the instruction fails, it will set EFLAGS.ZF=1 and return an error code in EAX. If it is successful, it sets EFLAGS.ZF=0 and EAX=0.
Intel XED uses the mnemonicshint-takenandhint-not-takenfor these branch hints.[115]
Any unsupported value in EAX causes a #GP exception.
Any unsupported value in EAX causes a #GP exception.TheEENTERandERESUMEfunctions cannot be executed inside an SGX enclave – the other functions can only be executed inside an enclave.
Any unsupported value in EAX causes a #GP exception.TheENCLVinstruction is only present on systems that support the EPC Oversubscription Extensions to SGX ("OVERSUB").
Any unsupported value in EAX causes a #GP(0) exception.
The value of the MSR is returned in EDX:EAX.
Unsupported values in ECX return 0.
Thex87coprocessor, if present, provides support for floating-point arithmetic. The coprocessor provides eight data registers, each holding one 80-bit floating-point value (1 sign bit, 15 exponent bits, 64 mantissa bits) – these registers are organized as a stack, with the top-of-stack register referred to as "st" or "st(0)", and the other registers referred to as st(1), st(2), ...st(7). It additionally provides a number of control and status registers, including "PC" (precision control, to control whether floating-point operations should be rounded to 24, 53 or 64 mantissa bits) and "RC" (rounding control, to pick rounding-mode: round-to-zero, round-to-positive-infinity, round-to-negative-infinity, round-to-nearest-even) and a 4-bit condition code register "CC", whose four bits are individually referred to as C0, C1, C2 and C3). Not all of the arithmetic instructions provided by x87 obey PC and RC.
C1 is set to the sign-bit of st(0), regardless of whether st(0) is Empty or not.
x86 also includes discontinued instruction sets which are no longer supported by Intel and AMD, and undocumented instructions which execute but are not officially documented.
The x86 CPUs containundocumented instructionswhich are implemented on the chips but not listed in some official documents. They can be found in various sources across the Internet, such asRalf Brown's Interrupt Listand atsandpile.org
Some of these instructions are widely available across many/most x86 CPUs, while others are specific to a narrow range of CPUs.
The actual operation isAH ← AL/imm8; AL ← AL mod imm8for any imm8 value (except zero, which produces a divide-by-zero exception).[143]
The actual operation isAL ← (AL+(AH*imm8)) & 0FFh; AH ← 0for any imm8 value.
Unavailable on some 80486 steppings.[146][147]
Introduced in the Pentium Pro in 1995, but remained undocumented until March 2006.[61][158][159]
Unavailable on AMD K6, AMD Geode LX, VIA Nehemiah.[161]
On AMD CPUs,0F 0D /rwith a memory argument is documented asPREFETCH/PREFETCHWsince K6-2 – originally as part of 3Dnow!, but has been kept in later AMD CPUs even after the rest of 3Dnow! was dropped.
Available on Intel CPUs since65 nmPentium 4.
Microsoft Windows 95 Setup is known to depend on0F FFbeing invalid[165][166]– it is used as a self check to test that its #UD exception handler is working properly.
Other invalid opcodes that are being relied on by commercial software to produce #UD exceptions includeFF FF(DIF-2,[167]LaserLok[168]) andC4 C4("BOP"[169][170]), however as of January 2022 they are not published as intentionally invalid opcodes.
STOREALL
In some implementations, emulated throughBIOSas ahaltingsequence.[173]
Ina forum post at the Vintage Computing Federation, this instruction (withF1prefix) is explained asSAVEALL. It interacts with ICE mode.
Opcode reused forSYSCALLin AMD K6 and later CPUs.
Opcode reused forSYSRETin AMD K6 and later CPUs.
Opcodes reused for SSE instructions in later CPUs.
The NexGen Nx586 CPU uses "hyper code"[180](x86 code sequences unpacked at boot time and only accessible in a special "hyper mode" operation mode, similar to DEC Alpha'sPALcodeand Intel's XuCode[181]) for many complicated operations that are implemented with microcode in most other x86 CPUs. The Nx586 provides a large number of undocumented instructions to assist hyper mode operation.
Instruction known to be recognized byMASM6.13 and 6.14.
Opcode reused for documentedPSWAPDinstruction from AMD K7 onwards.
64 0F (80..8F) rel16/32
Segment prefixes on conditional branches are accepted but ignored by non-NetBurst CPUs.
On at least AMD K6-2, all of the unassigned 3DNow! opcodes (other than the undocumentedPF2IW,PI2FWandPSWAPWinstructions) are reported to execute as equivalents ofPOR(MMX bitwise-OR instruction).[183]
GP2MEM
Supported by OpenSSL[191]as part of itsVIA PadLocksupport, and listed in a Zhaoxin-supplied Linux kernel patch,[192]but not documented by the VIA PadLock Programming Guide.
Listed in a VIA-supplied patch to add support for VIA Nano-specific PadLock instructions to OpenSSL,[193]but not documented by the VIA PadLock Programming Guide.
FENI8087_NOP
Present on all Intel x87 FPUs from 80287 onwards. For FPUs other than the ones where they were introduced on (8087 forFENI/FDISIand 80287 forFSETPM), they act asNOPs.
These instructions and their operation on modern CPUs are commonly mentioned in later Intel documentation, but with opcodes omitted and opcode table entries left blank (e.g.Intel SDM 325462-077, April 2022mentions them twice without opcodes).
The opcodes are, however, recognized by Intel XED.[199]
FDISI8087_NOP
FSETPM287_NOP
Their actual operation is not known, nor is it known whether their operation is the same on all of these CPUs.
|
https://en.wikipedia.org/wiki/X86_instruction_set
|
Instatistics, astudentized residualis thedimensionless ratioresulting from the division of aresidualby anestimateof itsstandard deviation, both expressed in the sameunits. It is a form of aStudent'st-statistic, with the estimate of error varying between points.
This is an important technique in the detection ofoutliers. It is among several named in honor ofWilliam Sealey Gosset, who wrote under the pseudonym "Student" (e.g.,Student's distribution). Dividing a statistic by asample standard deviationis calledstudentizing, in analogy withstandardizingandnormalizing.
The key reason for studentizing is that, inregression analysisof amultivariate distribution, the variances of theresidualsat different input variable values may differ, even if the variances of theerrorsat these different input variable values are equal. The issue is the difference betweenerrors and residuals in statistics, particularly the behavior of residuals in regressions.
Consider thesimple linear regressionmodel
Given a random sample (Xi,Yi),i= 1, ...,n, each pair (Xi,Yi) satisfies
where theerrorsεi{\displaystyle \varepsilon _{i}}, areindependentand all have the same varianceσ2{\displaystyle \sigma ^{2}}. Theresidualsare not the true errors, butestimates, based on the observable data. When the method of least squares is used to estimateα0{\displaystyle \alpha _{0}}andα1{\displaystyle \alpha _{1}}, then the residualsε^{\displaystyle {\widehat {\varepsilon \,}}}, unlike the errorsε{\displaystyle \varepsilon }, cannot be independent since they satisfy the two constraints
and
(Hereεiis theith error, andε^i{\displaystyle {\widehat {\varepsilon \,}}_{i}}is theith residual.)
The residuals, unlike the errors,do not all have the same variance:the variance decreases as the correspondingx-value gets farther from the averagex-value. This is not a feature of the data itself, but of the regression better fitting values at the ends of the domain. It is also reflected in theinfluence functionsof various data points on theregression coefficients: endpoints have more influence. This can also be seen because the residuals at endpoints depend greatly on the slope of a fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact thatthe variances of the residuals differ,even thoughthe variances of the true errors are all equalto each other, is theprincipal reasonfor the need for studentization.
It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is thatregressionsyielddifferent residual distributionsatdifferent data points,unlikepointestimatorsofunivariate distributions, which share acommon distributionfor residuals.
For this simple model, thedesign matrixis
and thehat matrixHis the matrix of theorthogonal projectiononto the column space of the design matrix:
Theleveragehiiis theith diagonal entry in the hat matrix. The variance of theith residual is
In case the design matrixXhas only two columns (as in the example above), this is equal to
In the case of anarithmetic mean, the design matrixXhas only one column (avector of ones), and this is simply:
Given the definitions above, the studentized residual is then t_i = ε̂_i / (σ̂·√(1 − h_ii)), where h_ii is the leverage and σ̂ is an appropriate estimate of σ (see below).
In the case of a mean, this is equal to:
The usual estimate of σ² uses all of the residuals: σ̂² = (1/(n − m)) Σ_j ε̂_j², where m is the number of parameters in the model (2 in our example).
But if the ith case is suspected of being improbably large, then it would also not be normally distributed. Hence it is prudent to exclude the ith observation from the process of estimating the variance when one is considering whether the ith case may be an outlier, and instead use the estimate σ̂_(i)² = (1/(n − m − 1)) Σ_{j ≠ i} ε̂_j², which is based on all the residuals except the suspect ith residual. The condition j ≠ i emphasizes that, when case i is the suspect, the ε̂_j² entering the sum are computed with the ith case excluded.
If the estimateσ2includestheith case, then it is called theinternally studentizedresidual,ti{\displaystyle t_{i}}(also known as thestandardized residual[1]).
If the estimateσ^(i)2{\displaystyle {\widehat {\sigma }}_{(i)}^{2}}is used instead,excludingtheith case, then it is called theexternally studentized,ti(i){\displaystyle t_{i(i)}}.
If the errors are independent andnormally distributedwithexpected value0 and varianceσ2, then theprobability distributionof theith externally studentized residualti(i){\displaystyle t_{i(i)}}is aStudent's t-distributionwithn−m− 1degrees of freedom, and can range from−∞{\displaystyle \scriptstyle -\infty }to+∞{\displaystyle \scriptstyle +\infty }.
On the other hand, the internally studentized residuals are in the range0±ν{\displaystyle 0\,\pm \,{\sqrt {\nu }}}, whereν=n−mis the number of residual degrees of freedom. Iftirepresents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then:[2]
wheretis a random variable distributed asStudent's t-distributionwithν− 1 degrees of freedom. In fact, this implies thatti2/νfollows thebeta distributionB(1/2,(ν− 1)/2).
The distribution above is sometimes referred to as thetau distribution;[2]it was first derived by Thompson in 1935.[3]
Whenν= 3, the internally studentized residuals areuniformly distributedbetween−3{\displaystyle \scriptstyle -{\sqrt {3}}}and+3{\displaystyle \scriptstyle +{\sqrt {3}}}.
If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals doesn't apply. In this case, thetiare all either +1 or −1, with 50% chance for each.
The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all thetiof a particular experiment is 1.
For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are2,−5/5,−5/5{\displaystyle {\sqrt {2}},\ -{\sqrt {5}}/5,\ -{\sqrt {5}}/5}, and the standard deviation of these is not 1.
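The following NumPy sketch reproduces this example numerically; the helper code is illustrative and not drawn from any particular statistics package.

```python
import numpy as np

# Fit a line through the origin to (1, 4), (2, -1), (2, -1) by least squares
# and compute the internally studentized residuals quoted above.
X = np.array([[1.0], [2.0], [2.0]])      # design matrix: one column, no intercept
y = np.array([4.0, -1.0, -1.0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta                      # raw residuals (here the fitted slope is 0)
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
h = np.diag(H)                            # leverages h_ii

n, m = X.shape
sigma2 = resid @ resid / (n - m)          # internal estimate of sigma^2
t_internal = resid / np.sqrt(sigma2 * (1 - h))
print(t_internal)        # approx [ 1.4142, -0.4472, -0.4472 ] = [ sqrt(2), -sqrt(5)/5, -sqrt(5)/5 ]
print(t_internal.std())  # not equal to 1, as noted above
```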
Note that any pair of studentized residuals t_i and t_j (where i ≠ j) are not i.i.d. They have the same distribution, but are not independent, because the residuals are constrained to sum to 0 and to be orthogonal to the design matrix.
Many programs and statistics packages, such asR,Python, etc., include implementations of Studentized residual.
|
https://en.wikipedia.org/wiki/Studentized_residual
|
MrSID(pronounced Mister Sid) is an acronym that stands formultiresolution seamless image database. It is afile format(filename extension.sid) developed and patented[2][3]by LizardTech (in October 2018 absorbed intoExtensis)[4]for encoding ofgeoreferencedraster graphics, such asorthophotos.
MrSID originated as the result of research efforts atLos Alamos National Laboratory(LANL).[5][6]
MrSID was originally developed forGeographic Information Systems(GIS).[5]With this format, largerasterimage files such asaerial photographsorsatellite imageryarecompressedand can be quickly viewed without having to decompress the entire file.[7]
The MrSID (.sid) format is supported in major GIS applications such asAutodesk,Bentley Systems,CARIS,ENVI,ERDAS,ESRI,Global Mapper,[8]Intergraph,MapInfo,QGIS[citation needed]andMiraMon[citation needed].
According to theOpen Source Geospatial Foundation(which releasesGDAL), MrSID was developed "under the aegis of the U.S. government forstoring fingerprintsfor theFBI."[9]
In a 1996 entry for the R&D 100 Awards, LANL identified other uses for the format: "it can be used as an efficient method for storing and retrievingphotographic archives; it can store and retrieve satellite data for consumer games and educational CD-ROMs; and it is well suited for use invehicle navigation systems. Moreover, MrSID holds promise for being used inimage compressionand editing fordesktop publishingandnonlinear digital video software."[5]
For certain downloadable images (such as maps),American Memoryat theLibrary of Congressbegan using MrSID in 1996; in January 2005 it also began usingJPEG 2000.[6]Depending on image content and color depth, compression of American Memory maps is typically better with MrSID, which on average achieves a compression ratio of approximately 22:1 versus the 20:1 achieved with JPEG 2000.[10]
Extensis offers a software package called GeoExpress to read and write MrSID files. They also provide a freeweb browserplug-in for theMicrosoft Windowsoperating system. (A Macintosh OS version of this viewer, introduced in 2005, was discontinued.) Most commercial GIS software packages can read some versions of MrSID files including those fromGE Smallworld,ESRI,Intergraph,Bentley Systems,MapInfo, Safe Software,Autodesk, withERDAS IMAGINEbeing able to both read and write MrSID files. GeoExpress can also generateJPEG 2000(.jp2) data. When combined with LizardTech's Express Server, .sid and .jp2 data can be served quickly to a variety of GIS applications and other client applications either through direct integrations or viaWMS.
There is noopen sourceimplementation of the MrSID format. Some open source GIS systems can read MrSID files, includingMapWindow GISand those based onGDAL. The Decode Software Development Kit (SDK) is made available as a free download from Extensis. This enables the capability to implement MrSID reading capability in any application.
Some image editing and management software systems can also read MrSID files, includingXnViewandIrfanView.
MrSID technology useslosslesswavelet compressionto create an initial image. Then the encoder divides the image into zoom levels, subbands, subblocks and bitplanes. After the initial encoding, the image creator can apply zero or more optimizations. While 2:1 compression ratios may be achieved losslessly, higher compression rates are lossy much likeJPEG-compresseddata.
MrSID uses selective decoding, meaning that the decoder does not have to decode the entire file to view, for example, a specific zoom level, image quality, or scene.
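As a rough illustration of selective decoding in practice, the hedged sketch below uses GDAL's Python bindings, assuming a GDAL build that includes the MrSID driver (which depends on the proprietary Decode SDK) and a hypothetical file name.

```python
from osgeo import gdal

# gdal.Open returns None if the file cannot be opened or the MrSID driver is absent.
ds = gdal.Open("example.sid")             # hypothetical file name
band = ds.GetRasterBand(1)
print("full resolution:", ds.RasterXSize, "x", ds.RasterYSize)
print("overview (zoom) levels:", band.GetOverviewCount())

# Reading an overview exercises the selective decoding described above: only that
# reduced-resolution zoom level is reconstructed, not the whole image.
overview = band.GetOverview(0)
print("overview shape:", overview.ReadAsArray().shape)
```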
|
https://en.wikipedia.org/wiki/MrSID
|
Acontainer chassis, also calledintermodal chassisorskeletal trailer, is a type of semi-trailer designed to securely carry anintermodal container. Chassis are used bytruckersto deliver containers betweenports, railyards, container depots, and shipper facilities,[1]:2–3and are thus a key part of theintermodal supply chain.
The use of chassis to haul containers over-the-road is known asdrayage trucking, and is a section of intermodal, which also includes rail transport of containers using well or flat cars and overseas transport in ships or barges. Like other intermodal equipment, chassis are equipped withtwistlocksat each corner which allows a container (hoisted onto or off the chassis by acrane), to be locked on for secure transport or unlocked to be lifted off.[2]The length of a chassis corresponds to which container size will fit (i.e., a 40-foot-long chassis fits a 40-foot-long container), but some models are adjustable length.[3]
Semi-tractor truckshook up to chassis via thekingpin. When disconnected from a tractor, the chassis' landing gear can be cranked down to park it.[4]
Portablegenerators, also called gensets, can be mounted (underslung) onto chassis. These gensets are used to power arefrigerated container.[5]
The axle group on some chassis (especially 20-foot and 53-foot units) can be slid backwards or forwards to change the weight distribution of heavy containers, allowing safe operation and compliance withweight restrictions.
An identification number is often stenciled on chassis to track each unit in a fleet. According toISO 6346, a chassis should have the letter "Z" at the end of itsreporting mark.
A variation is the tank container chassis, which is used for ISO tank containers. Tank chassis are characteristically longer and have a lower deck height than standard chassis, making them well suited to transporting constantly shifting payloads. These chassis can also be fitted with additional accessories, including lift kits to facilitate product discharge, hose tubes, and hi/lo kits to carry two empty tanks. They come in tandem axle, spread axle, tri-axle, and hi/lo combo configurations.
Unlike other countries where chassis are mostly owned or long-term leased by trucking companies, in the United States most chassis are currently owned by a few leasing companies (pools) which rent out the equipment to truckers.[1]:1[6]When a trucker leaves or enters a facility with a pool chassis, anelectronic data interchange(EDI) record is generated at the facility gate which identifies the trucking company and the chassis pool, and this allows the pool to invoice the appropriate trucking company for chassis usage. The system is influenced by the steamship lines and by the operation of container terminals. Firstly, containers are commonly stored on chassis as a single mounted unit at rail yards and depots—such terminals are known as "wheeled" facilities. Secondly, steamship lines offer a service called ″carrier haulage″ or ″store door delivery″, whereby they arrange the drayage of a customer’s container. The steamship line hires a local trucking company and pays the pool for the chassis usage.
As a result, steamship lines formed contractual agreements with the pools which require that, when a container is on-terminal, it must be on a chassis from a pool specified by the steamship line.[7]:26[8][9] This means that at wheeled facilities, containers are mounted onto chassis selected by the steamship line before the trucker arrives for pickup.[10] Disadvantages of this system are that it can restrict truckers' choice of which chassis to use[11] and that it can cause "chassis splits", which occur when a container and its required chassis pool are in different locations.
In the United States, container chassis shortages are a chronic problem, especially during peaks in container volume.[12] There are several causes of chassis shortages, but a common one is excessive off-terminal dwell time, the length of time a shipper keeps a chassis/container at their premises. Long dwell times mean fewer chassis available on-site at ports and rail ramps.[13][14]
|
https://en.wikipedia.org/wiki/Container_chassis
|
ThePlatform for Internet Content Selection(PICS) was a specification created byW3Cthat usedmetadatato label webpages to help parents and teachers control what children and students could access on theInternet. The W3CProtocol for Web Description Resourcesproject integrates PICS concepts withRDF. PICS was superseded byPOWDER, which itself is no longer actively developed.[1]PICS often used content labeling from theInternet Content Rating Association, which has also been discontinued by the Family Online Safety Institute's board of directors.[2]An alternative self-rating system, named Voluntary Content Rating,[3]was devised by Solid Oak Software in 2010, in response to the perceived complexity of PICS.[4]
Internet Explorer 3was one of the early web browsers to offer support for PICS, released in 1996.Internet Explorer 5added a feature calledapproved sites, that allowed extra sites to be added to the list in addition to the PICS list when it was being used.[5]
|
https://en.wikipedia.org/wiki/Platform_for_Internet_Content_Selection
|
Apedagogical grammaris a modern approach inlinguisticsintended to aid in teaching an additional language.
This method of teaching is divided into a descriptive component (grammatical analysis) and a prescriptive component (the articulation of a set of rules). Following an analysis of the context in which it is to be used, one grammatical form or arrangement of words is determined to be the most appropriate, and this aids learners in acquiring the grammar of a foreign language. Pedagogical grammars typically require rules that are definite, coherent, non-technical, cumulative and heuristic.[1] As the rules accumulate, an axiomatic system is formed between the two languages that should then enable a native speaker of the first to learn the second.[2]
|
https://en.wikipedia.org/wiki/Pedagogical_grammar
|
Decision lists are a representation for Boolean functions which can be easily learned from examples.[1] Single-term decision lists are more expressive than disjunctions and conjunctions; however, 1-term decision lists are less expressive than the general disjunctive normal form and the conjunctive normal form.
The language specified by a k-length decision list includes as a subset the language specified by a k-depthdecision tree.
Learning decision lists can be used forattribute efficient learning.[2]
A decision list (DL) of lengthris of the form:
where f_i is the ith formula and b_i is the ith boolean for i ∈ {1, ..., r}. The last if-then-else is the default case, which means formula f_r is always equal to true. A k-DL is a decision list in which every formula has at most k terms. Sometimes "decision list" is used to refer to a 1-DL, where every formula is either a variable or its negation.
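A minimal sketch of how such a list is evaluated is shown below; the variable names and the example 1-DL are illustrative only.

```python
from typing import Callable, Dict, List, Tuple

Assignment = Dict[str, bool]              # maps variable name -> truth value
Formula = Callable[[Assignment], bool]

def evaluate(dl: List[Tuple[Formula, bool]], x: Assignment) -> bool:
    """Return b_i for the first formula f_i that is true on x; the last pair is
    expected to be the default case, whose formula is always true."""
    for f, b in dl:
        if f(x):
            return b
    raise ValueError("decision list has no default (always-true) case")

# Example 1-DL: "if x1 then true, else if not x2 then false, else true".
dl = [
    (lambda x: x["x1"], True),
    (lambda x: not x["x2"], False),
    (lambda x: True, True),               # default case f_r = true
]
print(evaluate(dl, {"x1": False, "x2": False}))   # -> False
```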
|
https://en.wikipedia.org/wiki/Decision_list
|
In mathematics, asemigroupis analgebraic structureconsisting of asettogether with anassociativeinternalbinary operationon it.
The binary operation of a semigroup is most often denoted multiplicatively (just notation, not necessarily the elementary arithmeticmultiplication):x⋅y, or simplyxy, denotes the result of applying the semigroup operation to theordered pair(x,y). Associativity is formally expressed as that(x⋅y) ⋅z=x⋅ (y⋅z)for allx,yandzin the semigroup.
Semigroups may be considered a special case ofmagmas, where the operation is associative, or as a generalization ofgroups, without requiring the existence of an identity element or inverses.[a]As in the case of groups or magmas, the semigroup operation need not becommutative, sox⋅yis not necessarily equal toy⋅x; a well-known example of an operation that is associative but non-commutative ismatrix multiplication. If the semigroup operation is commutative, then the semigroup is called acommutative semigroupor (less often than in theanalogous case of groups) it may be called anabelian semigroup.
Amonoidis an algebraic structure intermediate between semigroups and groups, and is a semigroup having anidentity element, thus obeying all but one of the axioms of a group: existence of inverses is not required of a monoid. A natural example isstringswithconcatenationas the binary operation, and the empty string as the identity element. Restricting to non-emptystringsgives an example of a semigroup that is not a monoid. Positiveintegerswith addition form a commutative semigroup that is not a monoid, whereas the non-negativeintegersdo form a monoid. A semigroup without an identity element can be easily turned into a monoid by just adding an identity element. Consequently, monoids are studied in the theory of semigroups rather than in group theory. Semigroups should not be confused withquasigroups, which are generalization of groups in a different direction; the operation in a quasigroup need not be associative but quasigroupspreserve from groupsthe notion ofdivision. Division in semigroups (or in monoids) is not possible in general.
The formal study of semigroups began in the early 20th century. Early results includea Cayley theorem for semigroupsrealizing any semigroup as atransformation semigroup, in which arbitrary functions replace the role of bijections in group theory. A deep result in the classification of finite semigroups isKrohn–Rhodes theory, analogous to theJordan–Hölder decompositionfor finite groups. Some other techniques for studying semigroups, likeGreen's relations, do not resemble anything in group theory.
The theory of finite semigroups has been of particular importance intheoretical computer sciencesince the 1950s because of the natural link between finite semigroups andfinite automatavia thesyntactic monoid. Inprobability theory, semigroups are associated withMarkov processes.[1]In other areas ofapplied mathematics, semigroups are fundamental models forlinear time-invariant systems. Inpartial differential equations, a semigroup is associated to any equation whose spatial evolution is independent of time.
There are numerousspecial classes of semigroups, semigroups with additional properties, which appear in particular applications. Some of these classes are even closer to groups by exhibiting some additional but not all properties of a group. Of these we mention:regular semigroups,orthodox semigroups,semigroups with involution,inverse semigroupsandcancellative semigroups. There are also interesting classes of semigroups that do not contain any groups except thetrivial group; examples of the latter kind arebandsand their commutative subclass –semilattices, which are alsoordered algebraic structures.
A semigroup is asetStogether with abinary operation⋅ (that is, afunction⋅ :S×S→S) that satisfies theassociative property:
More succinctly, a semigroup is an associativemagma.
Aleft identityof a semigroupS(or more generally,magma) is an elementesuch that for allxinS,e⋅x=x. Similarly, aright identityis an elementfsuch that for allxinS,x⋅f=x. Left and right identities are both calledone-sided identities. A semigroup may have one or more left identities but no right identity, and vice versa.
Atwo-sided identity(or justidentity) is an element that is both a left and right identity. Semigroups with a two-sided identity are calledmonoids. A semigroup may have at most one two-sided identity. If a semigroup has a two-sided identity, then the two-sided identity is the only one-sided identity in the semigroup. If a semigroup has both a left identity and a right identity, then it has a two-sided identity (which is therefore the unique one-sided identity).
A semigroupSwithout identity may beembeddedin a monoid formed by adjoining an elemente∉StoSand defininge⋅s=s⋅e=sfor alls∈S∪ {e}.[2][3]The notationS1denotes a monoid obtained fromSby adjoining an identityif necessary(S1=Sfor a monoid).[3]
Similarly, every magma has at most oneabsorbing element, which in semigroup theory is called azero. Analogous to the above construction, for every semigroupS, one can defineS0, a semigroup with 0 that embedsS.
The semigroup operation induces an operation on the collection of its subsets: given subsetsAandBof a semigroupS, their productA·B, written commonly asAB, is the set{ab|a∈Aandb∈B}.(This notion is defined identically asit is for groups.) In terms of this operation, a subsetAis called
IfAis both a left ideal and a right ideal then it is called anideal(or atwo-sided ideal).
IfSis a semigroup, then the intersection of any collection of subsemigroups ofSis also a subsemigroup ofS.
So the subsemigroups ofSform acomplete lattice.
An example of a semigroup with no minimal ideal is the set of positive integers under addition. The minimal ideal of acommutativesemigroup, when it exists, is a group.
Green's relations, a set of fiveequivalence relationsthat characterise the elements in terms of theprincipal idealsthey generate, are important tools for analysing the ideals of a semigroup and related notions of structure.
The subset with the property that every element commutes with any other element of the semigroup is called thecenterof the semigroup.[4]The center of a semigroup is actually a subsemigroup.[5]
Asemigrouphomomorphismis a function that preserves semigroup structure. A functionf:S→Tbetween two semigroups is a homomorphism if the equation
holds for all elementsa,binS, i.e. the result is the same when performing the semigroup operation after or before applying the mapf.
A semigroup homomorphism between monoids preserves identity if it is amonoid homomorphism. But there are semigroup homomorphisms that are not monoid homomorphisms, e.g. the canonical embedding of a semigroupSwithout identity intoS1. Conditions characterizing monoid homomorphisms are discussed further. Letf:S0→S1be a semigroup homomorphism. The image offis also a semigroup. IfS0is a monoid with an identity elemente0, thenf(e0) is the identity element in the image off. IfS1is also a monoid with an identity elemente1ande1belongs to the image off, thenf(e0) =e1, i.e.fis a monoid homomorphism. Particularly, iffissurjective, then it is a monoid homomorphism.
Two semigroupsSandTare said to beisomorphicif there exists abijectivesemigroup homomorphismf:S→T. Isomorphic semigroups have the same structure.
Asemigroup congruence~ is anequivalence relationthat is compatible with the semigroup operation. That is, a subset~ ⊆S×Sthat is an equivalence relation andx~yandu~vimpliesxu~yvfor everyx,y,u,vinS. Like any equivalence relation, a semigroup congruence ~ inducescongruence classes
and the semigroup operation induces a binary operation ∘ on the congruence classes:
Because ~ is a congruence, the set of all congruence classes of ~ forms a semigroup with ∘, called thequotient semigrouporfactor semigroup, and denotedS/ ~. The mappingx↦ [x]~is a semigroup homomorphism, called thequotient map,canonicalsurjectionorprojection; ifSis a monoid then quotient semigroup is a monoid with identity [1]~. Conversely, thekernelof any semigroup homomorphism is a semigroup congruence. These results are nothing more than a particularization of thefirst isomorphism theorem in universal algebra. Congruence classes and factor monoids are the objects of study instring rewriting systems.
Anuclear congruenceonSis one that is the kernel of an endomorphism ofS.[6]
A semigroupSsatisfies themaximal condition on congruencesif any family of congruences onS, ordered by inclusion, has a maximal element. ByZorn's lemma, this is equivalent to saying that theascending chain conditionholds: there is no infinite strictly ascending chain of congruences onS.[7]
Every idealIof a semigroup induces a factor semigroup, theRees factor semigroup, via the congruence ρ defined byxρyif eitherx=y, or bothxandyare inI.
The following notions[8]introduce the idea that a semigroup is contained in another one.
A semigroupTis a quotient of a semigroupSif there is a surjective semigroup morphism fromStoT. For example,(Z/2Z, +)is a quotient of(Z/4Z, +), using the morphism consisting of taking the remainder modulo 2 of an integer.
A semigroup T divides a semigroup S, denoted T ≼ S, if T is a quotient of a subsemigroup of S. In particular, every subsemigroup of S divides S, although it is not necessarily a quotient of S.
Both of those relations are transitive.
For any subsetAofSthere is a smallest subsemigroupTofSthat containsA, and we say thatAgeneratesT. A single elementxofSgenerates the subsemigroup{xn|n∈Z+}. If this is finite, thenxis said to be offinite order, otherwise it is ofinfinite order.
A semigroup is said to beperiodicif all of its elements are of finite order.
A semigroup generated by a single element is said to bemonogenic(orcyclic). If a monogenic semigroup is infinite then it is isomorphic to the semigroup of positiveintegerswith the operation of addition.
If it is finite and nonempty, then it must contain at least oneidempotent.
It follows that every nonempty periodic semigroup has at least one idempotent.
A subsemigroup that is also a group is called asubgroup. There is a close relationship between the subgroups of a semigroup and its idempotents. Each subgroup contains exactly one idempotent, namely the identity element of the subgroup. For each idempotenteof the semigroup there is a unique maximal subgroup containinge. Each maximal subgroup arises in this way, so there is a one-to-one correspondence between idempotents and maximal subgroups. Here the termmaximal subgroupdiffers from its standard use in group theory.
More can often be said when the order is finite. For example, every nonempty finite semigroup is periodic, and has a minimalidealand at least one idempotent. The number of finite semigroups of a given size (greater than 1) is (obviously) larger than the number of groups of the same size. For example, of the sixteen possible "multiplication tables" for a set of two elements{a,b}, eight form semigroups[b]whereas only four of these are monoids and only two form groups. For more on the structure of finite semigroups, seeKrohn–Rhodes theory.
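The count of eight semigroups among the sixteen two-element multiplication tables can be confirmed by brute force; the sketch below (illustrative, not drawn from the cited literature) enumerates every table on {0, 1} and tests associativity.

```python
from itertools import product

elements = (0, 1)
semigroup_tables = 0
for table in product(elements, repeat=4):            # table[2*x + y] encodes x*y
    op = lambda x, y, t=table: t[2 * x + y]
    if all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(elements, repeat=3)):
        semigroup_tables += 1
print(semigroup_tables)   # -> 8, matching the count quoted above
```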
There is a structure theorem for commutative semigroups in terms ofsemilattices.[10]A semilattice (or more precisely a meet-semilattice)(L, ≤)is apartially ordered setwhere every pair of elementsa,b∈Lhas agreatest lower bound, denoteda∧b. The operation ∧ makesLinto a semigroup that satisfies the additionalidempotencelawa∧a=a.
Given a homomorphismf:S→Lfrom an arbitrary semigroup to a semilattice, each inverse imageSa=f−1{a}is a (possibly empty) semigroup. Moreover,SbecomesgradedbyL, in the sense thatSaSb⊆Sa∧b.
Iffis onto, the semilatticeLis isomorphic to thequotientofSby the equivalence relation ~ such thatx~yif and only iff(x) =f(y). This equivalence relation is a semigroup congruence, as defined above.
Whenever we take the quotient of a commutative semigroup by a congruence, we get another commutative semigroup. The structure theorem says that for any commutative semigroupS, there is a finest congruence ~ such that the quotient ofSby this equivalence relation is a semilattice. Denoting this semilattice byL, we get a homomorphismffromSontoL. As mentioned,Sbecomes graded by this semilattice.
Furthermore, the componentsSaare allArchimedean semigroups. An Archimedean semigroup is one where given any pair of elementsx,y, there exists an elementzandn> 0such thatxn=yz.
The Archimedean property follows immediately from the ordering in the semilatticeL, since with this ordering we havef(x) ≤f(y)if and only ifxn=yzfor somezandn> 0.
Thegroup of fractionsorgroup completionof a semigroupSis thegroupG=G(S)generated by the elements ofSas generators and all equationsxy=zthat hold true inSasrelations.[11]There is an obvious semigroup homomorphismj:S→G(S)that sends each element ofSto the corresponding generator. This has auniversal propertyfor morphisms fromSto a group:[12]given any groupHand any semigroup homomorphismk:S→H, there exists a uniquegroup homomorphismf:G→Hwithk=fj. We may think ofGas the "most general" group that contains a homomorphic image ofS.
An important question is to characterize those semigroups for which this map is an embedding. This need not always be the case: for example, takeSto be the semigroup of subsets of some setXwithset-theoretic intersectionas the binary operation (this is an example of a semilattice). SinceA.A=Aholds for all elements ofS, this must be true for all generators ofG(S) as well, which is therefore thetrivial group. It is clearly necessary for embeddability thatShave thecancellation property. WhenSis commutative this condition is also sufficient[13]and theGrothendieck groupof the semigroup provides a construction of the group of fractions. The problem for non-commutative semigroups can be traced to the first substantial paper on semigroups.[14][15]Anatoly Maltsevgave necessary and sufficient conditions for embeddability in 1937.[16]
Semigroup theory can be used to study some problems in the field ofpartial differential equations. Roughly speaking, the semigroup approach is to regard a time-dependent partial differential equation as anordinary differential equationon a function space. For example, consider the following initial/boundary value problem for theheat equationon the spatialinterval(0, 1) ⊂Rand timest≥ 0:
Let X = L²((0, 1); R) be the Lp space of square-integrable real-valued functions on the interval (0, 1), and let A be the second-derivative operator with domain D(A) = { u ∈ H²((0, 1); R) : u(0) = u(1) = 0 },
whereH2{\displaystyle H^{2}}is aSobolev space. Then the above initial/boundary value problem can be interpreted as an initial value problem for an ordinary differential equation on the spaceX:
On an heuristic level, the solution to this problem "ought" to beu(t)=exp(tA)u0.{\displaystyle u(t)=\exp(tA)u_{0}.}However, for a rigorous treatment, a meaning must be given to theexponentialoftA. As a function oft, exp(tA) is a semigroup of operators fromXto itself, taking the initial stateu0at timet= 0to the stateu(t) = exp(tA)u0at timet. The operatorAis said to be theinfinitesimal generatorof the semigroup.
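A hedged numerical sketch of this picture replaces A by the standard second-difference matrix on an interior grid (an assumed discretization, not taken from the article) and applies the matrix exponential, so that exp(tA) acts as the heat semigroup on the discretized state.

```python
import numpy as np
from scipy.linalg import expm

n = 99                               # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# Second-difference approximation of d^2/dx^2 with Dirichlet boundary conditions.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

u0 = np.sin(np.pi * x)               # initial state u_0
t = 0.1
u_t = expm(t * A) @ u0               # u(t) = exp(tA) u_0: the semigroup applied to u_0

# For this u_0 the exact solution is exp(-pi^2 t) sin(pi x); the discrete semigroup
# should match it closely.
print(np.max(np.abs(u_t - np.exp(-np.pi**2 * t) * np.sin(np.pi * x))))
```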
The study of semigroups trailed behind that of other algebraic structures with more complex axioms such asgroupsorrings. A number of sources[17][18]attribute the first use of the term (in French) to J.-A. de Séguier inÉlements de la Théorie des Groupes Abstraits(Elements of the Theory of Abstract Groups) in 1904. The term is used in English in 1908 in Harold Hinton'sTheory of Groups of Finite Order.
Anton Sushkevichobtained the first non-trivial results about semigroups. His 1928 paper "Über die endlichen Gruppen ohne das Gesetz der eindeutigen Umkehrbarkeit" ("On finite groups without the rule of unique invertibility") determined the structure of finitesimple semigroupsand showed that the minimal ideal (orGreen's relationsJ-class) of a finite semigroup is simple.[18]From that point on, the foundations of semigroup theory were further laid byDavid Rees,James Alexander Green,Evgenii Sergeevich Lyapin[fr],Alfred H. CliffordandGordon Preston. The latter two published a two-volume monograph on semigroup theory in 1961 and 1967 respectively. In 1970, a new periodical calledSemigroup Forum(currently published bySpringer Verlag) became one of the few mathematical journals devoted entirely to semigroup theory.
Therepresentation theoryof semigroups was developed in 1963 byBoris Scheinusingbinary relationson a setAandcomposition of relationsfor the semigroup product.[19]At an algebraic conference in 1972 Schein surveyed the literature on BA, the semigroup of relations onA.[20]In 1997 Schein andRalph McKenzieproved that every semigroup is isomorphic to a transitive semigroup of binary relations.[21]
In recent years researchers in the field have become more specialized with dedicated monographs appearing on important classes of semigroups, likeinverse semigroups, as well as monographs focusing on applications inalgebraic automata theory, particularly for finite automata, and also infunctional analysis.
If the associativity axiom of a semigroup is dropped, the result is amagma, which is nothing more than a setMequipped with abinary operationthat is closedM×M→M.
Generalizing in a different direction, ann-ary semigroup(alson-semigroup,polyadic semigroupormultiary semigroup) is a generalization of a semigroup to a setGwith an-ary operationinstead of a binary operation.[22]The associative law is generalized as follows: ternary associativity is(abc)de=a(bcd)e=ab(cde), i.e. the stringabcdewith any three adjacent elements bracketed.n-ary associativity is a string of lengthn+ (n− 1)with anynadjacent elements bracketed. A 2-ary semigroup is just a semigroup. Further axioms lead to ann-ary group.
A third generalization is the semigroupoid, in which the requirement that the binary operation be total is lifted. As categories generalize monoids in the same way, a semigroupoid behaves much like a category but lacks identities.
Infinitary generalizations of commutative semigroups have sometimes been considered by various authors.[c]
|
https://en.wikipedia.org/wiki/Semigroup
|
An HTTPS Bicycle Attack refers to a method of discovering the length of a password carried in packets encrypted with TLS/SSL protocols.[1] In preparation for a bicycle attack, the attacker first loads the target page to compute the sizes of the headers in the request made by a given web browser to the server. Once the attacker intercepts and browser-fingerprints a victim's request, the length of the password can be deduced by subtracting the known header lengths from the total length of the request.[2]
The term was first coined on December 30, 2015 by Guido Vranken, who wrote:
"The name TLS Bicycle Attack was chosen because of the conceptual similarity between how encryption hides content and gift wrapping hides physical objects. My attack relies heavily on the property ofstream-based ciphersin TLS that the size of TLS application data payloads is directly known to the attacker and this inadvertently reveals information about theplaintextsize;similar to how a draped or gift-wrapped bicycle is still identifiable as a bicycle, because cloaking it like that retains the underlying shape.The reason that I've named this attack at all is only to make referring to it easier for everyone."[2][emphasis added]
The bicycle attack makesbrute-forcingof passwords much easier, because only passwords of the known length need to be tested. It demonstrates thatTLS-encryptedHTTPtraffic does not completely obscure the exact size of its content.
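A minimal sketch of the underlying arithmetic, with made-up numbers standing in for the measurements an attacker would actually take, is shown below.

```python
# All values here are illustrative assumptions. With a stream-like cipher the TLS
# record length tracks the plaintext length, so an attacker who has measured the
# size of every fixed part of the request can solve for the password length.
observed_payload_len = 484      # plaintext length inferred from the captured TLS record
known_fixed_overhead = 472      # headers, URL, cookie and field names, measured by
                                # replaying the same login request with a known password
password_len = observed_payload_len - known_fixed_overhead
print(password_len)             # -> 12 characters in this made-up example
```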
|
https://en.wikipedia.org/wiki/Bicycle_attack
|
Acorrelation functionis afunctionthat gives the statisticalcorrelationbetweenrandom variables, contingent on the spatial or temporal distance between those variables.[1]If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as anautocorrelation function, which is made up ofautocorrelations. Correlation functions of different random variables are sometimes calledcross-correlation functionsto emphasize that different variables are being considered and because they are made up ofcross-correlations.
Correlation functions are a useful indicator of dependencies as a function of distance in time or space, and they can be used to assess the distance required between sample points for the values to be effectively uncorrelated. In addition, they can form the basis of rules for interpolating values at points for which there are no observations.
Correlation functions used inastronomy,financial analysis,econometrics, andstatistical mechanicsdiffer only in the particular stochastic processes they are applied to. Inquantum field theorythere arecorrelation functions over quantum distributions.
For possibly distinct random variablesX(s) andY(t) at different pointssandtof some space, the correlation function is
wherecorr{\displaystyle \operatorname {corr} }is described in the article oncorrelation. In this definition, it has been assumed that the stochastic variables are scalar-valued. If they are not, then more complicated correlation functions can be defined. For example, ifX(s) is arandom vectorwithnelements andY(t) is a vector withqelements, then ann×qmatrix of correlation functions is defined withi,j{\displaystyle i,j}element
Whenn=q, sometimes thetraceof this matrix is focused on. If theprobability distributionshave any target space symmetries, i.e. symmetries in the value space of the stochastic variable (also calledinternal symmetries), then the correlation matrix will have induced symmetries. Similarly, if there are symmetries of the space (or time) domain in which the random variables exist (also calledspacetime symmetries), then the correlation function will have corresponding space or time symmetries. Examples of important spacetime symmetries are —
Higher order correlation functions are often defined. A typical correlation function of ordernis (the angle brackets represent theexpectation value)
If the random vector has only one component variable, then the indicesi,j{\displaystyle i,j}are redundant. If there are symmetries, then the correlation function can be broken up intoirreducible representationsof the symmetries — both internal and spacetime.
With these definitions, the study of correlation functions is similar to the study ofprobability distributions. Many stochastic processes can be completely characterized by their correlation functions; the most notable example is the class ofGaussian processes.
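As a small illustration (not from the article), the sketch below estimates the autocorrelation function of a simulated AR(1) process and compares it with the theoretical value corr(X_t, X_{t+k}) = φ^k.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 100_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]       # AR(1): X_t = phi * X_{t-1} + eps_t

def autocorr(series, k):
    """Sample correlation between the series and itself shifted by lag k."""
    return np.corrcoef(series[:-k], series[k:])[0, 1]

for k in (1, 2, 5, 10):
    print(k, round(autocorr(x, k), 3), round(phi**k, 3))   # estimate vs. phi**k
```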
Probability distributions defined on a finite number of points can always be normalized, but when these are defined over continuous spaces, then extra care is called for. The study of such distributions started with the study ofrandom walksand led to the notion of theItō calculus.
The Feynmanpath integralin Euclidean space generalizes this to other problems of interest tostatistical mechanics. Any probability distribution which obeys a condition on correlation functions calledreflection positivityleads to a localquantum field theoryafterWick rotationtoMinkowski spacetime(seeOsterwalder-Schrader axioms). The operation ofrenormalizationis a specified set of mappings from the space of probability distributions to itself. Aquantum field theoryis called renormalizable if this mapping has a fixed point which gives a quantum field theory.
|
https://en.wikipedia.org/wiki/Correlation_function
|
Avirtual assistant(typically abbreviated toVA, also called avirtual office assistant)[1]is generally self-employed and providesprofessionaladministrative, technical, or creative (social) assistance to clients remotely from ahome office.[2]Because virtual assistants are independent contractors rather than employees, clients are not responsible for any employee-related taxes, insurance, or benefits, except in the context that those indirect expenses are included in the VA's fees. Clients also avoid the logistical problem of providing extra office space, equipment, or supplies. Clients pay for 100% productive work and can work with virtual assistants, individually, or in multi-VA firms to meet their exact needs. Virtual assistants usually work for othersmall businesses[3]but can also support busy executives. It is estimated that there are as few as 5,000 to 10,000 or as many as 25,000 virtual assistants worldwide. The profession is growing in centralized economies with "fly-in fly-out" staffing practices.[4][5][6]
In terms of pay, according to Glassdoor, the annual salary for virtual assistants in the US is $35,922.[7]However, worldwide, many virtual assistants work as freelancers on an hourly wage. One recent survey involving 400 virtual assistants on the popular freelancer siteUpworkshows a huge discrepancy in hourly pay commanded by virtual assistants in different countries.[8]
Common modes of communication and data delivery include the internet, e-mail and phone-call conferences,[9]online workspaces, and fax machine. Increasingly, virtual assistants are utilizing technology such asSkypeand Zoom, Slack, as well asGoogle Voice. Professionals in this business work on a contractual basis, and long-lasting cooperation is standard. Typically, office administrative experience is expected in positions such as executive assistant, office manager/supervisor, secretary, legal assistant, paralegal, legal secretary, real estate assistant, and information technology.
In recent years, virtual assistants have also worked their way into many mainstream businesses, and with the advent ofVOIPservices such as Skype and Zoom, it has been possible to have a virtual assistant who can answer phone calls remotely, without the end user's knowledge. This allows businesses to add a personal touch in the form of a receptionist, without the additional cost of hiring someone.[citation needed]
Virtual assistants include both individuals and companies that work remotely as independent professionals, providing a wide range of products and services to businesses as well as consumers. Virtual assistants perform many different roles, including typical secretarial work, website editing, social media marketing, customer service, data entry, accounting (MYOB, QuickBooks), and many other remote tasks.
Virtual assistants come from a variety of business backgrounds, but most have several years' experience earned in the "real" (non-virtual) business world, or several years' experience working online or remotely.
A dedicated virtual assistant is someone working in the office under the management of a company. The facility and internet connection as well as training are provided by the company, though not in all cases. The home-based virtual assistant works either in anoffice sharingenvironment or from home. General VAs are sometimes called an online administrative assistant, online personal assistant, or online sales assistant. A virtual webmaster assistant, virtual marketing assistant, and virtual content writing assistant are specific professionals that are usually experienced employees from corporate environments who have set up their own virtual offices.
Virtual assistants were an integral part of the 2007 bestselling bookThe 4-Hour WorkweekbyTim Ferriss.[10]Ferriss claimed to have hired virtual assistants to check his email, pay his bills, and run parts of his company.[11]
|
https://en.wikipedia.org/wiki/Virtual_assistant_(occupation)
|
Virtual volunteeringrefers tovolunteeractivities completed, in whole or in part, using theInternetand a home, school buildings, telecenter, or work computer or other Internet-connected device, such as asmartphoneor atablet.[1]Virtual volunteering is also known asonline volunteering,remote volunteeringore-volunteering. Contributing tofree and open source softwareprojects or editingWikipediaare examples of virtual volunteering.[2]
In one study,[3]over 70 percent of online volunteers chose assignments requiring one to five hours a week and nearly half chose assignments lasting 12 weeks or less. Some organizations offer online volunteering opportunities which last from ten minutes to an hour. A unique feature of online volunteering is that it can be done from a distance. People with restricted mobility or other special needs participate in ways that might not be possible in traditional face-to-face volunteering. Likewise, online volunteering may allow people to overcome social inhibitions andsocial anxiety, particularly if they would normally experience disability-related labeling or stereotyping. This empowers people who might not otherwise volunteer. It can buildself-confidenceandself-esteemwhile enhancing skills and extending networks and social ties. Online volunteering also allows participants to adapt their program of volunteer work to their unique skills and passions.[4]
People engaged in virtual volunteering undertake a variety of activities from locations remote to the organization or people they are assisting, via a computer or other Internet-connected device, such as:
In the developing world, innovative synergies between volunteerism and technology typically focus on mobile communication technologies rather than the Internet. Around 26 per cent of people worldwide had Internet access in 2009. However, Internet penetration in low-income countries was only 18 per cent, compared to over 64 per cent in developed countries. While the costs of fixed broadband Internet are falling, access still remains unaffordable to many.[8] Despite this, online volunteering is developing rapidly. Online volunteers are "people who commit their time and skills over the Internet, freely and without financial considerations, for the benefit of society."[9][full citation needed] Online volunteering has eliminated the need for volunteerism to be tied to specific times and locations. Thus, it greatly increases the freedom and flexibility of volunteer engagement and complements the outreach and impact of volunteers serving in situ. Most online volunteers engage in operational and managerial activities such as fundraising, technological support, communications, marketing and consulting. Increasingly, they also engage in activities such as research and writing, and in leading e-mail discussion groups.[4]
Online micro-volunteering is also an example of virtual volunteering and crowdsourcing, in which volunteers undertake assignments via their smart devices. Such volunteers either are not required to undergo any screening or training by the nonprofit and make no further commitment once a micro-task is completed, or have already been screened or trained by the nonprofit and are therefore approved to take on micro-tasks as their availability and interests allow. Online micro-volunteering was originally called "byte-sized volunteering" by the Virtual Volunteering Project, and has always been a part of the more than 30-year-old practice of online volunteering.[10] An early example of both micro-volunteering and crowdsourcing is ClickWorkers, a small NASA project begun in 2001 that engaged online volunteers in science-related tasks requiring only a person's perception and common sense, not scientific training, such as identifying craters on Mars in photos the project posted online; volunteers were not trained or screened before participating. The phrase "micro-volunteering" is usually credited to a San Francisco-based nonprofit called The Extraordinaries.[11][12][13]
The practice of virtual volunteering to benefit nonprofit initiatives dates back to at least the early 1970s, whenProject Gutenbergbegan involving online volunteers to provide electronic versions of works in the public domain.[14]
In 1995, a newnonprofit organizationcalled Impact Online (now calledVolunteerMatch), based in Palo Alto, California, began promoting the idea of "virtual volunteers".[15]In 1996, Impact Online received a grant from theJames Irvine Foundationto launch an initiative to research the practice of virtual volunteering and to promote the practice to nonprofit organizations in the US. This new initiative was dubbed theVirtual Volunteering Project, and the web site was launched in early 1997.[16]After one year of operations, the Virtual Volunteering Project moved to the Charles A. Dana Center atThe University of Texas at Austin. In 2002, the Virtual Volunteering Project moved within the university to theLyndon B. Johnson School of Public Affairs. The first two years of the Virtual Volunteer Project were spent reviewing and adaptingremote workmanuals[17]and existing volunteer management guidelines with regards to virtual volunteering, as well as identifying organizations that were involving online volunteers. By April 1999, almost 100 organizations had been identified by the Virtual Volunteering Project as involving online volunteers and were listed on the web site.[18]Due to the growing numbers of nonprofit organizations, schools, government programs and other not-for-profit entities involving online volunteers, the Virtual Volunteering Project stopped listing every such organization involving online volunteers on its web site in 2000, and focused its efforts on promoting the practice, profiling organizations with large or unique online volunteering programs, and creating guidelines for the involvement of online volunteers. Until January 2001, the Virtual Volunteering Project listed all telementoring and teletutoring programs in the USA (programs where online volunteers mentor or tutor others, through a nonprofit organization or school). At that time, 40 were identified.[19]
In August 1999, theNetAid.orginitiative was launched.[20]The initiative included an online volunteering component, today known as theUN Online Volunteering service. It went live in 2000 and has been managed byUnited Nations Volunteerssince its inception. It quickly attracted a high number of people ready to support organizations working for development. In 2003, several thousand people already contributed to the UN's Online Volunteering service – volunteers with very diverse backgrounds, including university graduates, private sector employees, and retirees.[21]While the UN's Online Volunteering service became independent, NetAid continued as a joint project ofUNDPand Cisco Systems. It aimed "to utilize the unique networking capabilities of the Internet to promote development and alleviate extreme poverty across the world".[22]
Online volunteering has been adopted by thousands of nonprofit organizations and other initiatives.[14] No organization currently tracks best practices in online volunteering in the USA or worldwide, how many people engage in online volunteering, or how many organizations utilize online volunteers, and studies of volunteering, such as reports on volunteering trends in the USA, rarely include information about online volunteering (for example, a search for the term virtual volunteering on the Corporation for National Service's "Volunteering in America" site yields no results).[23] IVCO's Forum Discussion Paper 2015[24] recommends that a collective measurement tool developed as part of a global measurement framework should also capture online volunteering.
TheUN's Online Volunteering serviceconnects organizations working in or for the developing world with online volunteers. It does have statistics available regarding numbers of online volunteers and involving organizations (i.e. NGOs, other civil society organizations, a government or other public institutions, United Nations agencies or other intergovernmental institutions) that collaborate online via their platform. In 2013, all 17,370 online volunteering assignments offered by development organizations through the Online Volunteering service attracted applications from numerous qualified volunteers. About 58 percent of the 11,037 online volunteers were women, and 60 percent came from developing countries; on average, they were 30 years of age. More than 94 percent of organizations and online volunteers rated their collaboration as good or excellent in 2013.[25]Forcivil society organizationswith limited resources in particular, the impact of online volunteer engagement is significant: 41% involve UN Online Volunteers for technical expertise that is not available internally. According to the same impact evaluation carried out in 2014, in many instances, organizations without access to online volunteers would have difficulties achieving their own peace and development outcomes.[26]
In July 2016, UNV unveiled a redesigned website and launched two additional services: The 1-click query to allow organizations to reach out to half a million people to provide real-time data for their projects, and its new employee online volunteering solution for global companies. Inclusive multi-stakeholder partnerships emerged as a necessity to achieve theSustainable Development Goals(SDGs), and the first private sector partner of the Online Volunteering service is based in Brazil (Samsung ElectronicsLatin American Office).[27]
Several other matching services, such asVolunteerMatchandIdealist, also offer virtual volunteering positions with nonprofit organizations in addition to traditional, on-site volunteering opportunities. VolunteerMatch currently reports that about 5 percent of its active volunteer listings are virtual in nature. As of June 2010, its directory included more than 2,770 such listings including roles in interactive marketing, fundraising, accounting, social media, and business mentoring. The percentage of virtual listings has dropped since 2006, when it peaked at close to 8 percent of overall volunteer opportunities in the VolunteerMatch system.
Wikipediaand otherWikimedia Foundationendeavors are examples of online volunteering, in the form of crowdsourcing or micro-volunteering; the majority of Wikipedia contributing volunteers are not required to undergo any screening or training by the nonprofit for their role as editors, and do not have to make a specific time commitment to the organization in order to contribute service.
Many organizations involved in virtual volunteering might never mention the term, or the words "online volunteer," on their web sites or in organizational literature. For example, the nonprofit organization Business Council for Peace (Bpeace) recruits business professionals to donate their time mentoring entrepreneurs in conflict-affected countries, includingAfghanistanandRwanda, but the majority of these volunteers interact with Bpeace staff and entrepreneurs online rather than face-to-face; yet, the term virtual volunteering is not mentioned on the web site. Bpeace also engages in online micro-volunteering, asking for information leads from its supporters, such as where to find online communities of particular professionals in the USA, but the organization never mentions the term micro-volunteering on its web site. Another example is theElectronic Emissary, one of the first K-12 online mentoring programs, launched in 1992; the web site does not use the phrase virtual volunteering and prefers to call online volunteers onlinesubject-matter experts.
Rumie, an edtech non-profit organization, also uses subject-matter experts, as well as corporate partners and leading non-profit organizations, to create interactive learning modules centered on life skills and career development, called Bytes. Rumie is an example of how virtual volunteering can offer an experience that is impactful on several levels. Rumie-Build, Rumie's microlearning authoring platform, allows volunteers to work individually or in teams to create these Bytes. With built-in guidance and prompts that support authors in creating quality content, real-time collaboration, and multimedia integration, Rumie-Build facilitates a digital skills-based volunteer opportunity, often helping volunteers develop their own knowledge in the process. The Bytes that volunteers create are used by learners around the world to build their skills.
Evolving forms of volunteerism will enhance opportunities for people to volunteer. The spread of technology connects ever more rural and isolated areas. NGOs and governments are beginning to realise the value of South-to-South international volunteerism, as well as diaspora volunteering, and are dedicating resources to these schemes. Corporations are responding to the "social marketplace" by supportingCSRinitiatives that include volunteerism. New opportunities for engaging in volunteerism are opening up with the result that more people are becoming involved and those already participating can expand their commitment.[4]A phenomenon that is still quite new, but growing rapidly, is the formal integration of online employee volunteering programmes into the infrastructure and business plan of companies.
|
https://en.wikipedia.org/wiki/Virtual_volunteering
|
Fortran(/ˈfɔːrtræn/; formerlyFORTRAN) is athird-generation,compiled,imperativeprogramming languagethat is especially suited tonumeric computationandscientific computing.
Fortran was originally developed byIBMwith a reference manual being released in 1956;[3]however, the first compilers only began to produce accurate code two years later.[4]Fortrancomputer programshave been written to support scientific and engineering applications, such asnumerical weather prediction,finite element analysis,computational fluid dynamics,plasma physics,geophysics,computational physics,crystallographyandcomputational chemistry. It is a popular language forhigh-performance computing[5]and is used for programs that benchmark and rank the world'sfastest supercomputers.[6][7]
Fortran has evolved through numerous versions and dialects. In 1966, theAmerican National Standards Institute(ANSI) developed a standard for Fortran to limit proliferation of compilers using slightly different syntax.[8]Successive versions have added support for a character data type (Fortran 77),structured programming,array programming,modular programming,generic programming(Fortran 90),parallel computing(Fortran 95),object-oriented programming(Fortran 2003), andconcurrent programming(Fortran 2008).
Since April 2024, Fortran has ranked among the top ten languages in theTIOBE index, a measure of the popularity of programming languages.[9]
The first manual for FORTRAN describes it as aFormula Translating System, and printed the name withsmall caps,Fortran.[10]: p.2[11]Other sources suggest the name stands forFormula Translator,[12]orFormula Translation.[13]
Early IBM computers did not supportlowercaseletters, and the names of versions of the language through FORTRAN 77 were usually spelled in all-uppercase.[14]FORTRAN 77 was the last version in which the Fortran character set included only uppercase letters.[15]
The official languagestandardsfor Fortran have referred to the language as "Fortran" with initial caps since Fortran 90.[citation needed]
In late 1953,John W. Backussubmitted a proposal to his superiors atIBMto develop a more practical alternative toassembly languagefor programming theirIBM 704mainframe computer.[11]: 69Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan,Roy Nutt, Robert Nelson, Irving Ziller, Harold Stern,Lois Haibt, andDavid Sayre.[16]Its concepts included easier entry of equations into a computer, an idea developed byJ. Halcombe Laningand demonstrated in theLaning and Zierler systemof 1952.[17]
A draft specification forThe IBM Mathematical Formula Translating Systemwas completed by November 1954.[11]: 71The first manual for FORTRAN appeared in October 1956,[10][11]: 72with the first FORTRANcompilerdelivered in April 1957.[11]: 75Fortran produced efficient enough code forassembly languageprogrammers to accept ahigh-level programming languagereplacement.[18]
John Backus said during a 1979 interview withThink, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, and so, when I was working on theIBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."[19]
The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of acomplex number data typein the language made Fortran especially suited to technical applications such as electrical engineering.[20]
By 1960, versions of FORTRAN were available for theIBM 709,650,1620, and7090computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed.
FORTRAN was provided for the IBM 1401 computer by an innovative 63-phase compiler that ran entirely in its core memory of only 8000 (six-bit) characters. The compiler could be run from tape, or from a 2200-card deck; it used no further tape or disk storage. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines.[21] This article was reprinted, edited, in both editions of Anatomy of a Compiler[22] and in the IBM manual "Fortran Specifications and Operating Procedures, IBM 1401".[23] The executable form was not entirely machine language; rather, floating-point arithmetic, subscripting, input/output, and function references were interpreted, preceding UCSD Pascal P-code by two decades. GOTRAN, a simplified, interpreted version of FORTRAN I (with only 12 of the original 32 statement types) for "load and go" operation, was available, at least for the early IBM 1620 computer.[24] Almost all later versions of Fortran, including modern Fortran, are fully compiled, as is usual for high-performance languages.
The development of Fortran paralleled theearly evolution of compiler technology, and many advances in the theory and design ofcompilerswere specifically motivated by the need to generate efficient code for Fortran programs.
The initial release of FORTRAN for the IBM 704[10]contained 32 types ofstatements, including:
The arithmeticIFstatement was reminiscent of (but not readily implementable by) a three-way comparison instruction (CAS—Compare Accumulator with Storage) available on the 704. The statement provided the only way to compare numbers—by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by "logical" facilities introduced in FORTRAN IV.
The FREQUENCY statement was used originally (and optionally) to give branch probabilities for the three branch cases of the arithmetic IF statement. It could also be used to suggest how many iterations a DO loop might run. The first FORTRAN compiler used this weighting to perform, at compile time, a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory—a very sophisticated optimization for its time. The Monte Carlo technique is documented in Backus et al.'s paper on this original implementation, The FORTRAN Automatic Coding System:
The fundamental unit of program is thebasic block; a basic block is a stretch of program which has one entry point and one exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by running the program once in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided.[16]
The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code on its console. That code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem.[10]: p.19–20[25]Later, an error-handling subroutine to handle user errors such as division by zero, developed by NASA,[26]was incorporated, informing users of which line of code contained the error.
Before the development of disk files, text editors and terminals, programs were most often entered on akeypunchkeyboard onto 80-columnpunched cards, one line to a card. The resulting deck of cards would be fed into a card reader to be compiled. Punched card codes included no lower-case letters or many special characters, and special versions of the IBM 026keypunchwere offered that would correctly print the re-purposed special characters used in FORTRAN.
Reflecting punched card input practice, Fortran programs were originally written in a fixed-column format, with the first 72 columns read into twelve 36-bit words.
A letter "C" in column 1 caused the entire card to be treated as a comment and ignored by the compiler. Otherwise, the columns of the card were divided into four fields:
Columns 73 to 80 could therefore be used for identification information, such as punching a sequence number or text, which could be used to re-order cards if a stack of cards was dropped; though in practice this was reserved for stable, production programs. AnIBM 519could be used to copy a program deck and add sequence numbers. Some early compilers, e.g., the IBM 650's, had additional restrictions due to limitations on their card readers.[28]Keypunchescould be programmed to tab to column 7 and skip out after column 72. Later compilers relaxed most fixed-format restrictions, and the requirement was eliminated in the Fortran 90 standard.
Within the statement field, whitespace characters (blanks) were ignored outside a text literal. This allowed omitting spaces between tokens for brevity or including spaces within identifiers for clarity. For example, AVG OF X was a valid identifier, equivalent to AVGOFX, and the card image 101010DO101I=1,101 was a valid statement, equivalent to 10101 DO 101 I = 1, 101, because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period.
Hollerith strings, originally allowed only in FORMAT and DATA statements, were prefixed by a character count and the letter H (e.g.,26HTHIS IS ALPHANUMERIC DATA.), allowing blanks to be retained within the character string. Miscounts were a problem.
IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values, with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced: SUBROUTINE, FUNCTION, END, CALL, RETURN, and COMMON.[29]
Over the next few years, FORTRAN II added support for theDOUBLE PRECISIONandCOMPLEXdata types.
Early FORTRAN compilers supported no recursion in subroutines. Early computer architectures supported no concept of a stack, and when they did directly support subroutine calls, the return location was often stored in one fixed location adjacent to the subroutine code (e.g. the IBM 1130) or in a specific machine register (IBM 360 et seq.); recursion is then possible only if software maintains a stack, saving the return address before each call and restoring it after the call returns. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and the Burroughs mainframes, designed with recursion built in, did so by default. Recursion became standard in Fortran 90 via the new keyword RECURSIVE.[30]
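A minimal sketch of the Fortran 90 RECURSIVE keyword (an illustrative program, not taken from the cited sources):

program recursion_demo
  implicit none
  print *, factorial(5)          ! prints 120
contains
  ! The RECURSIVE prefix, standardized in Fortran 90, permits the
  ! function to call itself.
  recursive function factorial(n) result(f)
    integer, intent(in) :: n
    integer :: f
    if (n <= 1) then
       f = 1
    else
       f = n * factorial(n - 1)
    end if
  end function factorial
end program recursion_demo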
This program, forHeron's formula, reads data on a tape reel containing three 5-digit integers A, B, and C as input. There are no "type" declarations available: variables whose name starts with I, J, K, L, M, or N are "fixed-point" (i.e. integers), otherwise floating-point. Since integers are to be processed in this example, the names of the variables start with the letter "I". The name of a variable must start with a letter and can continue with both letters and digits, up to a limit of six characters in FORTRAN II. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number occupying ten spaces along the line of output and showing 2 digits after the decimal point, the .2 in F10.2 of the FORMAT statement with label 601.
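A FORTRAN II-style sketch consistent with this description might look as follows. It is a reconstruction rather than the original listing; the unit numbers (5 and 6), the statement labels, and the exact FORMAT text are assumptions, while FLOATF and SQRTF are the FORTRAN II spellings of the integer-to-real conversion and square-root functions.

C     FORTRAN II STYLE RECONSTRUCTION OF THE HERON PROGRAM
C     INPUT - TAPE UNIT 5, THREE 5-DIGIT INTEGERS
C     OUTPUT - TAPE UNIT 6, AREA PRINTED WITH FORMAT 601
      READ INPUT TAPE 5, 501, IA, IB, IC
  501 FORMAT (3I5)
C     SIDES MUST BE POSITIVE AND SATISFY THE TRIANGLE INEQUALITY
      IF (IA) 777, 777, 701
  701 IF (IB) 777, 777, 702
  702 IF (IC) 777, 777, 703
  703 IF (IA+IB-IC) 777, 777, 704
  704 IF (IA+IC-IB) 777, 777, 705
  705 IF (IB+IC-IA) 777, 777, 799
  777 STOP 1
C     HERONS FORMULA, USING THE FORTRAN II FLOATF AND SQRTF FUNCTIONS
  799 S = FLOATF (IA + IB + IC) / 2.0
      AREA = S * (S - FLOATF(IA)) * (S - FLOATF(IB)) * (S - FLOATF(IC))
      AREA = SQRTF (AREA)
      WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
  601 FORMAT (4H A= ,I5,5H  B= ,I5,5H  C= ,I5,8H  AREA= ,F10.2)
      STOP
      END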
IBM also developed aFORTRAN IIIin 1958 that allowed forinline assemblycode among other features; however, this version was never released as a product. Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine, as well as Boolean expression support.[11]: 76Early versions of FORTRAN provided by other vendors suffered from the same disadvantage.
IBM began development ofFORTRAN IVin 1961 as a result of customer demands. FORTRAN IV removed the machine-dependent features of FORTRAN II (such asREAD INPUT TAPE), while adding new features such as aLOGICALdata type, logicalBoolean expressions, and thelogical IF statementas an alternative to thearithmetic IF statement.Type declarations were added, along with anIMPLICITstatement to override earlier conventions that variables areINTEGERif their name begins withI,J,K,L,M, orN; andREALotherwise.[31]: pp.70, 71[32]: p.6-9
FORTRAN IV was eventually released in 1962, first for theIBM 7030("Stretch") computer, followed by versions for theIBM 7090,IBM 7094, and later for theIBM 1401in 1966.[33]
By 1965, FORTRAN IV was supposed to be compliant with thestandardbeing developed by theAmerican Standards AssociationX3.4.3 FORTRAN Working Group.[34]
Between 1966 and 1968, IBM offered several FORTRAN IV compilers for itsSystem/360, each named by letters that indicated the minimum amount of memory the compiler needed to run.[35]The letters (F, G, H) matched the codes used with System/360 model numbers to indicate memory size, each letter increment being a factor of two larger:[36]: p. 5
Digital Equipment Corporationmaintained DECSYSTEM-10 Fortran IV (F40) forPDP-10from 1967 to 1975.[32]Compilers were also available for theUNIVAC 1100 seriesand theControl Data6000 seriesand7000 seriessystems.[37]
At about this time FORTRAN IV had started to become an important educational tool and implementations such as the University of Waterloo's WATFOR andWATFIVwere created to simplify the complex compile and link processes of earlier compilers.
In the FORTRAN IV programming environment of the era, except for that used on Control Data Corporation (CDC) systems, only one instruction was placed per line. The CDC version allowed for multiple instructions per line if separated by a$(dollar) character. The FORTRANsheetwas divided into four fields, as described above.
Two compilers of the time, IBM "G" and UNIVAC, allowed comments to be written on the same line as instructions, separated by a special character: "master space": V (perforations 7 and 8) for UNIVAC and perforations 12/11/0/7/8/9 (hexadecimal FF) for IBM. These comments were not to be inserted in the middle of continuation cards.[32][37]
Perhaps the most significant development in the early history of FORTRAN was the decision by theAmerican Standards Association(nowAmerican National Standards Institute(ANSI)) to form a committee sponsored by theBusiness Equipment Manufacturers Association(BEMA) to develop anAmerican Standard Fortran. The resulting two standards, approved in March 1966, defined two languages,FORTRAN(based on FORTRAN IV, which had served as a de facto standard), andBasic FORTRAN(based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard, officially denoted X3.9-1966, became known asFORTRAN 66(although many continued to term it FORTRAN IV, the language on which the standard was largely based). FORTRAN 66 effectively became the first industry-standard version of FORTRAN. FORTRAN 66 included:
The above Fortran II version of the Heron program needs several modifications to compile as a Fortran 66 program. Modifications include using the more machine independent versions of theREADandWRITEstatements, and removal of the unneededFLOATFtype conversion functions. Though not required, the arithmeticIFstatements can be re-written to use logicalIFstatements and expressions in a more structured fashion.
After the release of the FORTRAN 66 standard, compiler vendors introduced several extensions to Standard Fortran, prompting ANSI committee X3J3 in 1969 to begin work on revising the 1966 standard, under sponsorship of CBEMA, the Computer Business Equipment Manufacturers Association (formerly BEMA). Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, called FORTRAN 77 and officially denoted X3.9-1978, added a number of significant features to address many of the shortcomings of FORTRAN 66, among them the CHARACTER data type, the block IF construct (IF ... THEN, ELSE IF, ELSE, and END IF), the OPEN, CLOSE, and INQUIRE statements for improved file handling, direct-access and list-directed I/O, and the PARAMETER statement for named constants.
In this revision of the standard, a number of features were removed or altered in a manner that might invalidate formerly standard-conforming programs.
(Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.)
While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the prior standard but rarely used, a small number of specific capabilities were deliberately removed, such as:
A Fortran 77 version of the Heron program requires no modifications to the Fortran 66 version. However, this example demonstrates additional cleanup of the I/O statements, including using list-directed I/O, and replacing the Hollerith edit descriptors in the FORMAT statements with quoted strings. It also uses structured IF and END IF statements, rather than GOTO/CONTINUE.
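A Fortran 77-style sketch along those lines, again a reconstruction rather than the original listing (the error handling, labels, and FORMAT text are assumptions):

*     FORTRAN 77 STYLE RECONSTRUCTION OF THE SAME HERON PROGRAM,
*     USING LIST-DIRECTED INPUT, A QUOTED-STRING FORMAT, AND
*     STRUCTURED IF ... END IF BLOCKS.
      PROGRAM HERON
      INTEGER IA, IB, IC
      REAL S, AREA
      READ (*, *) IA, IB, IC
      IF (IA .LE. 0 .OR. IB .LE. 0 .OR. IC .LE. 0) THEN
         STOP 1
      END IF
      IF (IA+IB .LE. IC .OR. IA+IC .LE. IB .OR. IB+IC .LE. IA) THEN
         STOP 1
      END IF
      S = (IA + IB + IC) / 2.0
      AREA = SQRT (S * (S - IA) * (S - IB) * (S - IC))
      WRITE (*, 601) IA, IB, IC, AREA
  601 FORMAT (' A= ', I5, '  B= ', I5, '  C= ', I5,
     &        '  AREA= ', F10.2)
      STOP
      END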
The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect.
An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978.[38]This specification, developed by theU.S. Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
TheIEEE1003.9POSIXStandard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls.[39]Over 100 calls were defined in the document – allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner.
The much-delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as ISO/IEC standard 1539:1991 in 1991 and an ANSI Standard in 1992. In addition to changing the official spelling from FORTRAN to Fortran, this major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard, among them free-form source input, modules, derived (user-defined) data types, dynamic memory allocation (ALLOCATABLE arrays and the ALLOCATE and DEALLOCATE statements), whole-array operations and array sections, recursive procedures, pointers, and operator overloading.
Unlike the prior revision, Fortran 90 removed no features.[40]Any standard-conforming FORTRAN 77 program was also standard-conforming under Fortran 90, and either standard should have been usable to define its behavior.
A small set of features were identified as "obsolescent" and were expected to be removed in a future standard. All of the functionalities of these early-version features can be performed by newer Fortran features. Some are kept to simplify porting of old programs but many were deleted in Fortran 95.
Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification, including the FORALL statement and construct, PURE and ELEMENTAL procedures, and default initialization of derived-type components.
A number of intrinsic functions were extended (for example adimargument was added to themaxlocintrinsic).
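For instance, the following small sketch (the array contents are arbitrary) shows how the dim argument changes what maxloc returns:

program maxloc_dim_demo
  implicit none
  real :: a(2,3)
  a = reshape((/ 1.0, 4.0, 2.0, 5.0, 9.0, 3.0 /), (/ 2, 3 /))
  print *, maxloc(a)           ! location of the overall maximum: 1 3
  print *, maxloc(a, dim=1)    ! row of the maximum in each column: 2 2 1
end program maxloc_dim_demo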
Several features noted in Fortran 90 to be "obsolescent" were removed from Fortran 95:
An important supplement to Fortran 95 was theISO technical reportTR-15581: Enhanced Data Type Facilities, informally known as theAllocatable TR.This specification defined enhanced use ofALLOCATABLEarrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses includeALLOCATABLEarrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLEarrays are preferable toPOINTER-based arrays becauseALLOCATABLEarrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility ofmemory leakage. In addition, elements of allocatable arrays are contiguous, andaliasingis not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.[41])
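A minimal sketch of the kind of usage the Allocatable TR enables (illustrative names; it needs a compiler supporting TR-15581 or Fortran 2003):

program allocatable_demo
  implicit none
  type :: vector
     real, allocatable :: data(:)   ! ALLOCATABLE component of a derived type
  end type vector
  type(vector) :: v
  allocate (v%data(5))
  v%data = 1.0
  print *, sum(v%data)
  ! v%data is deallocated automatically when v goes out of scope,
  ! avoiding the memory-leak risk associated with POINTER components.
end program allocatable_demo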
Another important supplement to Fortran 95 was theISOtechnical reportTR-15580: Floating-point exception handling, informally known as theIEEE TR.This specification defined support forIEEE floating-point arithmeticandfloating-pointexception handling.
In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1:1997), the Fortran 95 language also included two optional modules, varying-length character strings (ISO/IEC 1539-2) and conditional compilation (ISO/IEC 1539-3),
which, together, compose the multi-part International Standard (ISO/IEC 1539).
According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard".
The language defined by the twenty-first century standards, in particular because of its incorporation ofobject-oriented programmingsupport and subsequentlyCoarray Fortran, is often referred to as 'Modern Fortran', and the term is increasingly used in the literature.[42]
Fortran 2003,officially published as ISO/IEC 1539-1:2004, was a major revision introducing many new features.[43]A comprehensive summary of the new features of Fortran 2003 is available at the Fortran Working Group (ISO/IEC JTC1/SC22/WG5) official Web site.[44]
From that article, the major enhancements for this revision include:
An important supplement to Fortran 2003 was theISO technical reportTR-19767: Enhanced module facilities in Fortran.This report providedsub-modules,which make Fortran modules more similar toModula-2modules. They are similar toAdaprivate child sub-units. This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades.
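A minimal sketch of the submodule idea, written in the syntax as later standardized (the names are illustrative): the interface is declared in the module, and the implementation lives in a separate submodule, so the implementation can change without forcing recompilation of the module's users. A main program would simply USE points and call distance.

module points
  implicit none
  type :: point
     real :: x, y
  end type point
  interface
     ! Only the interface of the separate module procedure is visible here.
     module function distance(a, b) result(d)
       type(point), intent(in) :: a, b
       real :: d
     end function distance
  end interface
end module points

submodule (points) points_impl
contains
  ! The implementation is supplied here, out of sight of users of the module.
  module function distance(a, b) result(d)
    type(point), intent(in) :: a, b
    real :: d
    d = sqrt((a%x - b%x)**2 + (a%y - b%y)**2)
  end function distance
end submodule points_impl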
ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010.[45][46] As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing some new capabilities. The new capabilities include submodules, coarray parallel programming, the DO CONCURRENT construct, the BLOCK construct, and the CONTIGUOUS attribute, among others.
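As one hedged illustration of these additions (an assumed example, not drawn from the standard text), a DO CONCURRENT loop asserts that its iterations have no data dependences on one another, so they may be executed in any order or in parallel:

program axpy_demo
  implicit none
  integer :: i
  real :: a, x(1000), y(1000)
  a = 2.0
  x = 1.0
  y = 3.0
  ! Each iteration updates a distinct y(i), so the loop is safe to reorder.
  do concurrent (i = 1:1000)
     y(i) = y(i) + a * x(i)
  end do
  print *, y(1)       ! prints 5.0
end program axpy_demo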
The Final Draft international Standard (FDIS) is available as document N1830.[47]
A supplement to Fortran 2008 is theInternational Organization for Standardization(ISO) Technical Specification (TS) 29113 onFurther Interoperability of Fortran with C,[48][49]which has been submitted to ISO in May 2012 for approval. The specification adds support for accessing the array descriptor from C and allows ignoring the type and rank of arguments.
The Fortran 2018 revision of the language was earlier referred to as Fortran 2015.[50]It was a significant revision and was released on November 28, 2018.[51]
Fortran 2018 incorporates two previously published Technical Specifications: ISO/IEC TS 29113:2012, Further Interoperability of Fortran with C, and ISO/IEC TS 18508:2015, Additional Parallel Features in Fortran.
Additional changes and new features include support for ISO/IEC/IEEE 60559:2011 (the version of theIEEE floating-point standardbefore the latest minor revision IEEE 754–2019), hexadecimal input/output, IMPLICIT NONE enhancements and other changes.[54][55][56][57]
Fortran 2018 deleted the arithmetic IF statement. It also deleted non-block DO constructs - loops which do not end with an END DO or CONTINUE statement. These had been an obsolescent part of the language since Fortran 90.
New obsolescences are: COMMON and EQUIVALENCE statements and the BLOCK DATA program unit, labelled DO loops, specific names for intrinsic functions, and the FORALL statement and construct.
Fortran 2023 (ISO/IEC 1539-1:2023) was published in November 2023, and can be purchased from the ISO.[58] Fortran 2023 is a minor extension of Fortran 2018 that focuses on correcting errors and omissions in Fortran 2018. It also adds some small features, including an enumerated type capability.
A full description of the Fortran language features brought by Fortran 95 is covered in the related article,Fortran 95 language features. The language versions defined by later standards are often referred to collectively as 'Modern Fortran' and are described in the literature.
Although a 1968 journal article by the authors ofBASICalready described FORTRAN as "old-fashioned",[59]programs have been written in Fortran for many decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities.[60]Jay Pasachoffwrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change toPascal,Modula-2, or whatever."[61]In 1993,Cecil E. Leithcalled FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope".[62]
It is the primary language for some of the most intensivesuper-computingtasks, such as inastronomy,climate modeling,computational chemistry,computational economics,computational fluid dynamics,computational physics, data analysis,[63]hydrological modeling, numerical linear algebra and numerical libraries (LAPACK,IMSLandNAG),optimization, satellite simulation,structural engineering, andweather prediction.[64]Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of theSPECbenchmarks (e.g.,CFP2006,CFP2017) are written in Fortran. Math algorithms are well documented inNumerical Recipes.
Apart from this, more modern codes in computational science generally use large program libraries, such asMETISfor graph partitioning,PETScorTrilinosfor linear algebra capabilities,deal.IIorFEniCSfor mesh and finite element support, and other generic libraries. Since the early 2000s, many of the widely used support libraries have also been implemented inCand more recently, inC++. On the other hand, high-level languages such as theWolfram Language,MATLAB,Python, andRhave become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs are also written in such higher-level scripting languages. For this reason,facilities for inter-operation with Cwere added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages.
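As a hedged sketch of what these interoperation facilities look like in practice (the choice of the C library function cosf is illustrative, not taken from the cited specifications), a Fortran program can declare and call a C function through the intrinsic ISO_C_BINDING module:

program c_interop_demo
  use, intrinsic :: iso_c_binding, only: c_float
  implicit none
  interface
     ! Interface to the C library function:  float cosf(float x);
     function cosf(x) bind(c, name="cosf") result(y)
       import :: c_float
       real(c_float), value :: x
       real(c_float) :: y
     end function cosf
  end interface
  print *, cosf(0.0_c_float)    ! prints 1.0
end program c_interop_demo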
Portabilitywas a problem in the early days because there was no agreed upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. The 1966 standard provided a referencesyntaxand semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such asThe PFORT Verifier,[65][66]it was not until after the 1977 standard, when the National Bureau of Standards (nowNIST) publishedFIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions.[67][68]
Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by thePORTlibrary.[66]The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of theIEEE 754standard for binary floating-point arithmetic has essentially removed this problem.
Access to the computing environment (e.g., the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard.
Large collections of library software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by incorporation of C interoperability into the 2003 standard.
It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to apreprocessor.
Until the Fortran 66 standard was developed, each compiler supported its own variant of Fortran. Some were more divergent from the mainstream than others.
The first Fortran compiler set a high standard of efficiency for compiled code. This goal made it difficult to create a compiler so it was usually done by the computer manufacturers to support hardware sales. This left an important niche: compilers that were fast and provided good diagnostics for the programmer (often a student). Examples include Watfor, Watfiv, PUFFT, and on a smaller scale, FORGO, Wits Fortran, and Kingston Fortran 2.
Fortran 5was marketed byData GeneralCorp from the early 1970s to the early 1980s, for theNova,Eclipse, andMVline of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66.
FORTRAN Vwas distributed byControl Data Corporationin 1968 for theCDC 6600series. The language was based upon FORTRAN IV.[69]
Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac Fortran V was Athena FORTRAN.
Specific variantsproduced by the vendors of high-performance scientific computers (e.g.,Burroughs,Control Data Corporation(CDC),Cray,Honeywell,IBM,Texas Instruments, andUNIVAC) added extensions to Fortran to take advantage of special hardware features such asinstruction cache, CPUpipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered themachine codeinstructionsto keep multiple internal arithmetic units busy simultaneously. Another example isCFD, a special variant of FORTRAN designed specifically for theILLIAC IVsupercomputer, running atNASA'sAmes Research Center.
IBM Research Labs also developed an extended FORTRAN-based language calledVECTRANfor processing vectors and matrices.
Object-Oriented Fortranwas an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel. It was available forSolaris,IRIX,NeXTSTEP,iPSC, and nCUBE, but is no longer supported.
Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards. The major remaining extension isOpenMP, which is a cross-platform extension for shared memory programming. One new extension, Coarray Fortran, is intended to support parallel programming.
FOR TRANSITwas the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie in the late 1950s.[70]The following comment appears in the IBM Reference Manual (FOR TRANSIT Automatic Coding SystemC28-4038, Copyright 1957, 1959 by IBM):
The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704.
The permissible statements were:
Up to ten subroutines could be used in one program.
FOR TRANSIT statements were limited to columns 7 through 56, only. Punched cards were used for input and output on the IBM 650. Three passes were required to translate source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards).
Two versions existed for the 650s with a 2000 word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for theIBM 533card reader/punchcontrol panel.
Prior to FORTRAN 77, manypreprocessorswere commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler.[71]These preprocessors would typically supportstructured programming, variable names longer than six characters, additional data types,conditional compilation, and evenmacrocapabilities. Popular preprocessors includedEFL,FLECS,iftran,MORTRAN,SFtran,S-Fortran,Ratfor, andRatfiv. EFL, Ratfor and Ratfiv, for example, implementedC-like languages, outputting preprocessed code in standard FORTRAN 66. ThePFORTpreprocessor was often used to verify that code conformed to a portable subset of the language. Despite advances in the Fortran language, preprocessors continue to be used for conditional compilation and macro substitution.
One of the earliest versions of FORTRAN, introduced in the '60s, was popularly used in colleges and universities. Developed, supported, and distributed by theUniversity of Waterloo,WATFORwas based largely on FORTRAN IV. A student using WATFOR could submit their batch FORTRAN job and, if there were no syntax errors, the program would move straight to execution. This simplification allowed students to concentrate on their program's syntax and semantics, or execution logic flow, rather than dealing with submissionJob Control Language(JCL), the compile/link-edit/execution successive process(es), or other complexities of the mainframe/minicomputer environment. A down side to this simplified environment was that WATFOR was not a good choice for programmers needing the expanded abilities of their host processor(s), e.g., WATFOR typically had very limited access to I/O devices. WATFOR was succeeded byWATFIVand its later versions.
LRLTRANwas developed at theLawrence Radiation Laboratoryto provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming. The distribution included theLivermore Time Sharing System(LTSS) operating system.
The Fortran-95 Standard includes an optionalPart 3which defines an optionalconditional compilationcapability. This capability is often referred to as "CoCo".
Many Fortran compilers have integrated subsets of theC preprocessorinto their systems.
SIMSCRIPTis an application specific Fortran preprocessor for modeling and simulating large discrete systems.
TheF programming languagewas designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as theEQUIVALENCEstatement. F retains the array features added in Fortran 90, and removes control statements that were made obsolete by structured programming constructs added to both FORTRAN 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing".[72]Essential Lahey Fortran 90 (ELF90) was a similar subset.
Lahey and Fujitsu teamed up to create Fortran for the Microsoft.NET Framework.[73]Silverfrost FTN95 is also capable of creating .NET code.[74]
The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence ofDOloops andIF/THENstatements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively.
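A sketch of such a program, reconstructed in the spirit of that description (the variable names are illustrative):

program average
  implicit none
  real, allocatable :: points(:)
  integer :: number_of_points
  real :: mean

  write (*, *) 'Enter the number of points:'
  read (*, *) number_of_points
  allocate (points(number_of_points))   ! dynamic memory allocation

  write (*, *) 'Enter the points:'
  read (*, *) points                    ! the whole array is read at once

  ! Whole-array operations: no DO loop or IF/THEN is needed here.
  mean = sum(points) / max(1, size(points))
  write (*, '(a, f10.4)') 'Average = ', mean

  deallocate (points)
end program average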
During the same FORTRAN standards committee meeting at which the name "FORTRAN 77" was chosen, a satirical technical proposal was incorporated into the official distribution bearing the title "Letter OConsidered Harmful". This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero, by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notoriousGO TOstatement as before. (TroublesomeFORMATstatements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway".[75][unreliable source?][76]
When X3J3 debated whether the minimum trip count for a DO loop should be zero or one in Fortran 77, Loren Meissner suggested a minimum trip count of two—reasoning(tongue-in-cheek)that if it were less than two, then there would be no reason for a loop.
When assumed-length arrays were being added, there was a dispute as to the appropriate character to separate upper and lower bounds. In a comment examining these arguments, Walt Brainerd penned an article entitled "Astronomy vs. Gastroenterology" because some proponents had suggested using the star or asterisk ("*"), while others favored the colon (":").[citation needed]
Variable names beginning with the letters I–N have a default type of integer, while variables starting with any other letters defaulted to real, although programmers could override the defaults with an explicit declaration.[77]This led to the joke: "In FORTRAN, GOD is REAL (unless declared INTEGER)."
|
https://en.wikipedia.org/wiki/Fortran
|
Indistributed computing, asingle system image(SSI) cluster is aclusterof machines that appears to be one single system.[1][2][3]The concept is often considered synonymous with that of adistributed operating system,[4][5]but a single image may be presented for more limited purposes, justjob schedulingfor instance, which may be achieved by means of an additional layer of software over conventionaloperating system imagesrunning on eachnode.[6]The interest in SSI clusters is based on the perception that they may be simpler to use and administer than more specialized clusters.
Different SSI systems may provide a more or less completeillusionof a single system.
Different SSI systems may, depending on their intended usage, provide some subset of these features.
Many SSI systems provideprocess migration.[7]Processes may start on onenodeand be moved to another node, possibly forresource balancingor administrative reasons.[note 1]As processes are moved from one node to another, other associated resources (for exampleIPCresources) may be moved with them.
Some SSI systems allowcheckpointingof running processes, allowing their current state to be saved and reloaded at a later date.[note 2]Checkpointing can be seen as related to migration, as migrating a process from one node to another can be implemented by first checkpointing the process, then restarting it on another node. Alternatively checkpointing can be considered asmigration to disk.
Some SSI systems provide the illusion that all processes are running on the same machine - the process management tools (e.g. "ps", "kill" onUnixlike systems) operate on all processes in the cluster.
Most SSI systems provide a single view of the file system. This may be achieved by a simpleNFSserver, shared disk devices or even file replication.
The advantage of a single root view is that processes may be run on any available node and access needed files with no special precautions. If the cluster implements process migration a single root view enables direct accesses to the files from the node where the process is currently running.
Some SSI systems provide a way of "breaking the illusion", having some node-specific files even in a single root.HPTruClusterprovides a "context dependent symbolic link" (CDSL) which points to different files depending on the node that accesses it.HPVMSclusterprovides a search list logical name with node specific files occluding cluster shared files where necessary. This capability may be necessary to deal withheterogeneousclusters, where not all nodes have the same configuration. In more complex configurations such as multiple nodes of multiple architectures over multiple sites, several local disks may combine to form the logical single root.
Some SSI systems allow all nodes to access the I/O devices (e.g. tapes, disks, serial lines and so on) of other nodes. There may be some restrictions on the kinds of accesses allowed (for example,OpenSSIcannot mount disk devices from one node on another node).
Some SSI systems allow processes on different nodes to communicate usinginter-process communicationsmechanisms as if they were running on the same machine. On some SSI systems this can even includeshared memory(can be emulated in software withdistributed shared memory).
In most cases inter-node IPC will be slower than IPC on the same machine, possibly drastically slower for shared memory. Some SSI clusters include special hardware to reduce this slowdown.
Some SSI systems provide a "cluster IPaddress", a single address visible from outside the cluster that can be used to contact the cluster as if it were one machine. This can be used for load balancing inbound calls to the cluster, directing them to lightly loaded nodes, or for redundancy, moving the cluster address from one machine to another as nodes join or leave the cluster.[note 3]
Examples here vary from commercial platforms with scaling capabilities, to packages/frameworks for creating distributed systems, as well as those that actually implement a single system image.
|
https://en.wikipedia.org/wiki/Single_system_image
|
Hyper-threading(officially calledHyper-Threading TechnologyorHT Technologyand abbreviated asHTTorHT) isIntel'sproprietarysimultaneous multithreading(SMT) implementation used to improveparallelization of computations(doing multiple tasks at once) performed onx86microprocessors. It was introduced onXeonserverprocessorsin February 2002 and onPentium 4desktop processors in November 2002.[4]Since then, Intel has included this technology inItanium,Atom, andCore 'i' SeriesCPUs, among others.[5]
For eachprocessor corethat is physically present, theoperating systemaddresses two virtual (logical) cores and shares the workload between them when possible. The main function of hyper-threading is to increase the number of independent instructions in the pipeline; it takes advantage ofsuperscalararchitecture, in which multiple instructions operate on separate datain parallel. With HTT, one physical core appears as two processors to the operating system, allowingconcurrentscheduling of two processes per core. In addition, two or more processes can use the same resources: If resources for one process are not available, then another process can continue if its resources are available.
In addition to requiring simultaneous multithreading support in the operating system, hyper-threading can be properly utilized only with an operating system specifically optimized for it.[6]
Hyper-Threading Technology is a form of simultaneousmultithreadingtechnology introduced by Intel, while the concept behind the technology has been patented bySun Microsystems. Architecturally, a processor with Hyper-Threading Technology consists of two logical processors per core, each of which has its own processor architectural state. Each logical processor can be individually halted, interrupted or directed to execute a specified thread, independently from the other logical processor sharing the same physical core.[8]
Unlike a traditional dual-processor configuration that uses two separate physical processors, the logical processors in a hyper-threaded core share the execution resources. These resources include the execution engine, caches, and system bus interface; the sharing of resources allows two logical processors to work with each other more efficiently, and allows a logical processor to borrow resources from a stalled logical core (assuming both logical cores are associated with the same physical core). A processor stalls when it must wait for data it has requested, in order to finish processing the present thread. The degree of benefit seen when using a hyper-threaded, or multi-core, processor depends on the needs of the software, and how well it and the operating system are written to manage the processor efficiently.[8]
Hyper-threading works by duplicating certain sections of the processor—those that store thearchitectural state—but not duplicating the mainexecution resources. This allows a hyper-threading processor to appear as the usual "physical" processor plus an extra "logical" processor to the host operating system (HTT-unaware operating systems see two "physical" processors), allowing the operating system to schedule two threads or processes simultaneously and appropriately. When execution resources in a hyper-threaded processor are not in use by the current task, and especially when the processor is stalled, those execution resources can be used to execute another scheduled task. (The processor may stall due to acache miss,branch misprediction, ordata dependency.)[9]
This technology is transparent to operating systems and programs. The minimum that is required to take advantage of hyper-threading issymmetric multiprocessing(SMP) support in the operating system, since the logical processors appear no different to the operating system than physical processors.
It is possible to optimize operating system behavior on multi-processor, hyper-threading capable systems. For example, consider an SMP system with two physical processors that are both hyper-threaded (for a total of four logical processors). If the operating system's threadscheduleris unaware of hyper-threading, it will treat all four logical processors the same. If only two threads are eligible to run, it might choose to schedule those threads on the two logical processors that happen to belong to the same physical processor. That processor would be extremely busy, and would share execution resources, while the other processor would remain idle, leading to poorer performance than if the threads were scheduled on different physical processors. This problem can be avoided by improving the scheduler to treat logical processors differently from physical processors, which is, in a sense, a limited form of the scheduler changes required forNUMAsystems.
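As a rough illustration of the topology awareness described above, the following Python sketch (Linux-specific and illustrative only; it assumes the standard sysfs topology files and uses only standard-library calls) groups logical CPUs by physical core and pins the current process to one logical CPU per core, so that two runnable threads are not forced onto hyper-threaded siblings of the same physical core.

```python
import os
from collections import defaultdict
from pathlib import Path

def logical_cpus_by_core():
    """Group logical CPU ids by (package, core) using Linux sysfs topology files."""
    groups = defaultdict(list)
    for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        topo = cpu_dir / "topology"
        if not topo.is_dir():
            continue
        package = (topo / "physical_package_id").read_text().strip()
        core = (topo / "core_id").read_text().strip()
        groups[(package, core)].append(int(cpu_dir.name[3:]))
    return dict(groups)

cores = logical_cpus_by_core()
print("logical CPUs per physical core:", cores)

# Keep this process on one logical CPU per physical core, so its threads do not
# end up as hyper-threaded siblings competing for the same execution resources.
one_per_core = {siblings[0] for siblings in cores.values()}
if one_per_core:
    os.sched_setaffinity(0, one_per_core)
```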
The first published paper describing what is now known as hyper-threading in a general purpose computer was written by Edward S. Davidson and Leonard. E. Shar in 1973.[10]
Denelcor, Inc.introducedmulti-threadingwith theHeterogeneous Element Processor(HEP) in 1982. The HEP pipeline could not hold multiple instructions from the same process. Only one instruction from a given process was allowed to be present in the pipeline at any point in time. Should an instruction from a given process block the pipe, instructions from other processes would continue after the pipeline drained.
US patent for the technology behind hyper-threading was granted to Kenneth Okin atSun Microsystemsin November 1994. At that time,CMOSprocess technology was not advanced enough to allow for a cost-effective implementation.[11]
Intel implemented hyper-threading on an x86 architecture processor in 2002 with the Foster MP-basedXeon. It was also included on the 3.06 GHz Northwood-based Pentium 4 in the same year, and then remained as a feature in every Pentium 4 HT, Pentium 4 Extreme Edition and Pentium Extreme Edition processor since. The Intel Core & Core 2 processor lines (2006) that succeeded the Pentium 4 model line did not use hyper-threading. The processors based on theCore microarchitecturedid not have hyper-threading because the Core microarchitecture was a descendant of the olderP6 microarchitecture. The P6 microarchitecture was used in earlier iterations of Pentium processors, namely, thePentium Pro,Pentium IIandPentium III(plus theirCeleron&Xeonderivatives at the time).Windows 2000SP3 andWindows XP SP1added support for hyper-threading.
Intel released theNehalem microarchitecture(Core i7) in November 2008, in which hyper-threading made a return. The first generation Nehalem processors contained four physical cores and effectively scaled to eight threads. Since then, both two- and six-core models have been released, scaling to four and twelve threads respectively.[12]EarlierIntel Atomcores were in-order processors, sometimes with hyper-threading ability, for low power mobile PCs and low-price desktop PCs.[13]TheItanium9300 launched with eight threads per processor (two threads per core) through enhanced hyper-threading technology. The next model, the Itanium 9500 (Poulson), features a 12-wide issue architecture, with eight CPU cores with support for eight more virtual cores via hyper-threading.[14]The Intel Xeon 5500 server chips also utilize two-way hyper-threading.[15][16]
According to Intel, the first hyper-threading implementation used only 5% moredie areathan the comparable non-hyperthreaded processor, but the performance was 15–30% better.[17][18]Intel claims up to a 30% performance improvement compared with an otherwise identical, non-simultaneous multithreading Pentium 4.Tom's Hardwarestates: "In some cases a P4 running at 3.0 GHz with HT on can even beat a P4 running at 3.6 GHz with HT turned off."[19]Intel also claims significant performance improvements with a hyper-threading-enabled Pentium 4 processor in some artificial-intelligence algorithms.
Overall, the early performance history of hyper-threading was mixed. As one commentary on high-performance computing from November 2002 notes:[20]
Hyper-Threading can improve the performance of someMPIapplications, but not all. Depending on the cluster configuration and, most importantly, the nature of the application running on the cluster, performance gains can vary or even be negative. The next step is to use performance tools to understand what areas contribute to performance gains and what areas contribute to performance degradation.
As a result, performance improvements are very application-dependent;[21]however, when running two programs that require full attention of the processor, it can actually seem like one or both of the programs slows down slightly when Hyper-Threading Technology is turned on.[22]This is due to thereplay systemof the Pentium 4 tying up valuable execution resources, equalizing the processor resources between the two programs, which adds a varying amount of execution time. The Pentium 4 "Prescott" and the Xeon "Nocona" processors received a replay queue that reduces execution time needed for the replay system and completely overcomes the performance penalty.[23]
According to a November 2009 analysis by Intel, hyper-threading increases overall latency when the execution of the threads does not yield significant overall throughput gains, which vary[21]by application. In other words, overall processing latency is significantly increased by hyper-threading, with the negative effects becoming smaller as more simultaneous threads can effectively use the additional hardware resources provided by hyper-threading.[24]A similar performance analysis is available for the effects of hyper-threading when used to handle tasks related to managing network traffic, such as for processinginterrupt requestsgenerated bynetwork interface controllers(NICs).[25]Another paper claims no performance improvements when hyper-threading is used for interrupt handling.[26]
When the first HT processors were released, many operating systems were not optimized for hyper-threading technology (e.g. Windows 2000 and Linux older than 2.4).[27]
In 2006, hyper-threading was criticised for energy inefficiency.[28]For example,ARM(a company specializing in low-power CPU design) stated that simultaneous multithreading can use up to 46% more power than ordinary dual-core designs. Furthermore, they claimed that SMT increasescache thrashingby 42%, whereasdual coreresults in a 37% decrease.[29]
In 2010, ARM said it might include simultaneous multithreading in its future chips;[30]however, this was rejected in favor of their 2012 64-bit design.[31]ARM produced SMT cores in 2018.[32]
In 2013, Intel dropped SMT in favor ofout-of-order executionfor itsSilvermontprocessor cores, as they found this gave better performance with better power efficiency than a lower number of cores with SMT.[33]
In 2017, it was revealed that Intel'sSkylakeandKaby Lakeprocessors had a bug in their implementation of hyper-threading that could cause data loss.[34]Microcodeupdates were later released to address the issue.[35]
In 2019, withCoffee Lake, Intel temporarily moved away from including hyper-threading in mainstream Core i7 desktop processors, reserving it for the highest-end Core i9 parts and for Pentium Gold CPUs.[36]It also began to recommend disabling hyper-threading, asnew CPU vulnerabilityattacks were revealed which could be mitigated by disabling HT.[37]
In May 2005,Colin Percivaldemonstrated that a malicious thread on a Pentium 4 can use a timing-basedside-channel attackto monitor thememory access patternsof another thread with which it shares a cache, allowing the theft of cryptographic information. This is not actually atiming attack, as the malicious thread measures the time of only its own execution. Potential solutions to this include the processor changing its cache eviction strategy or the operating system preventing the simultaneous execution, on the same physical core, of threads with different privileges.[38]In 2018 theOpenBSDoperating system disabled hyper-threading "in order to avoid data potentially leaking from applications to other software" caused by theForeshadow/L1TFvulnerabilities.[39][40]In 2019 aset of vulnerabilitiesled to security experts recommending the disabling of hyper-threading on all devices.[41]
|
https://en.wikipedia.org/wiki/Hyper-threading
|
Afunctional differential equationis adifferential equationwith deviating argument. That is, a functional differential equation is an equation that contains a function and some of its derivatives evaluated at different argument values.[1]
Functional differential equations find use in mathematical models that assume a specified behavior or phenomenon depends on the present as well as the past state of a system.[2]In other words, past events explicitly influence future results. For this reason, functional differential equations are more applicable thanordinary differential equations (ODE), in which future behavior only implicitly depends on the past.
Unlike ordinary differential equations, which contain a function of one variable and its derivatives evaluated with the same input, functional differential equations contain a function and its derivatives evaluated with different input values.
The simplest type of functional differential equation, called theretarded functional differential equationorretarded differential difference equation, is of the form[3]
The simplest, fundamental functional differential equation is the linear first-order delay differential equation[4][unreliable source?]which is given by
whereα1,α2,τ{\displaystyle \alpha _{1},\alpha _{2},\tau }are constants,f(t){\displaystyle f(t)}is some continuous function, andx{\displaystyle x}is a scalar. Below is a table with a comparison of several ordinary and functional differential equations.
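As a numerical illustration, the sketch below integrates a delay equation of the linear form x′(t) = α₁x(t) + α₂x(t − τ) + f(t) with a forward Euler step and a stored history for the delayed term. (This specific linear form, the step size, and the example parameters are assumptions made for the sketch, not a restatement of the equation referenced above.)

```python
def solve_linear_dde(alpha1, alpha2, tau, f, history, t_end, dt=1e-3):
    """Forward-Euler integration of x'(t) = alpha1*x(t) + alpha2*x(t - tau) + f(t).

    `history` supplies x(t) for t <= 0: a DDE needs an initial *function* on
    [-tau, 0] rather than a single initial value.
    """
    delay_steps = int(round(tau / dt))
    ts = [i * dt - tau for i in range(delay_steps + 1)]   # grid on [-tau, 0]
    xs = [history(t) for t in ts]
    while ts[-1] < t_end:
        t, x = ts[-1], xs[-1]
        x_delayed = xs[-1 - delay_steps]                  # x(t - tau), read from history
        xs.append(x + dt * (alpha1 * x + alpha2 * x_delayed + f(t)))
        ts.append(t + dt)
    return ts, xs

# Example: x'(t) = -x(t - 1) with x(t) = 1 for t <= 0, an oscillatory decay.
ts, xs = solve_linear_dde(0.0, -1.0, 1.0, lambda t: 0.0, lambda t: 1.0, t_end=10.0)
print(ts[-1], xs[-1])
```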
"Functional differential equation" is the general name for a number of more specific types of differential equations that are used in numerous applications. There are delay differential equations, integro-differential equations, and so on.
Differential difference equations are functional differential equations in which the argument values are discrete.[1]The general form for functional differential equations of finitely many discrete deviating arguments is
wherex(t)∈Rm,n1,n2,…,ni≥0,{\displaystyle x(t)\in \mathbb {R} ^{m},\,n_{1},n_{2},\ldots ,n_{i}\geq 0,}andτ1(t),τ2(t),…,τi(t)≥0{\displaystyle \tau _{1}(t),\tau _{2}(t),\ldots ,\tau _{i}(t)\geq 0}
Differential difference equations are also referred to asretarded,neutral,advanced, andmixedfunctional differential equations. This classification depends on whether the rate of change of the current state of the system depends on past values, future values, or both.[5]
Functional differential equations of retarded type occur whenmax{n1,n2,…,nk}<n{\displaystyle \max\{n_{1},n_{2},\ldots ,n_{k}\ \}<n}for the equation given above. In other words, this class of functional differential equations depends on the past and present values of the function with delays.
A simple example of a retarded functional differential equation is
whereas a more general form for discrete deviating arguments can be written as
Functional differential equations of neutral type, or neutral differential equations occur when
Neutral differential equations depend on past and present values of the function, similarly to retarded differential equations, except it also depends on derivatives with delays. In other words, retarded differential equations do not involve the given function's derivative with delays while neutral differential equations do.
Integro-differential equations of Volterra type are functional differential equations with continuous argument values.[1]Integro-differential equations involve both the integrals and derivatives of some function with respect to its argument.
The continuous integro-differential equation for retarded functional differential equations,x′(t)=f(t,x(t−τ1(t)),x(t−τ2(t)),…,x(t−τk(t))){\displaystyle x'(t)=f{\bigl (}t,x(t-\tau _{1}(t)),x(t-\tau _{2}(t)),\ldots ,x(t-\tau _{k}(t)){\bigr )}}, can be written as
Functional differential equations have been used in models that determine future behavior of a certain phenomenon determined by the present and the past. Future behavior of phenomena, described by the solutions of ODEs, assumes that behavior is independent of the past.[2]However, there can be many situations that depend on past behavior.
FDEs are applicable for models in multiple fields, such as medicine, mechanics, biology, and economics. FDEs have been used in research for heat-transfer, signal processing, evolution of a species, traffic flow and study of epidemics.[1][4]
Alogistic equationforpopulation growthis given bydxdt=ρx(t)(1−x(t)k),{\displaystyle {\mathrm {d} x \over \mathrm {d} t}=\rho \,x(t)\left(1-{\frac {x(t)}{k}}\right),}whereρis the reproduction rate andkis thecarrying capacity.x(t){\displaystyle x(t)}represents the population size at timet, andρ(1−x(t)k){\textstyle \rho \left(1-{\frac {x(t)}{k}}\right)}is the density-dependent reproduction rate.[7]
If we were to now apply this to an earlier timet−τ{\displaystyle t-\tau }, we getdxdt=ρx(t)(1−x(t−τ)k){\displaystyle {\mathrm {d} x \over \mathrm {d} t}=\rho \,x(t)\left(1-{\frac {x(t-\tau )}{k}}\right)}
Upon exposure to applications of ordinary differential equations, many come across the mixing model of some chemical solution.
Suppose there is a container holdingV{\displaystyle V}liters of salt water. Salt water is flowing in and out of the container at the same rater{\displaystyle r}of liters per second. In other words, the rate of water flowing in is equal to the rate of the salt water solution flowing out. LetV{\displaystyle V}be the amount in liters of salt water in the container andx(t){\displaystyle x(t)}be the uniform concentration in grams per liter of salt water at timet{\displaystyle t}. Then, we have the differential equation[8]x′(t)=−rVx(t),rV>0{\displaystyle x'(t)=-{\frac {r}{V}}x(t),{\frac {r}{V}}>0}
The problem with this equation is that it makes the assumption that every drop of water that enters the container is instantaneously mixed into the solution. This can be eliminated by using an FDE instead of an ODE.
Letx(t){\displaystyle x(t)}be the average concentration at timet{\displaystyle t}, rather than uniform. Then, let's assume the solution leaving the container at timet{\displaystyle t}is equal tox(t−τ),τ>0{\displaystyle x(t-\tau ),\tau >0}, the average concentration at some earlier time. Then, the equation is a delay-differential equation of the form[8]x′(t)=−rVx(t−τ){\displaystyle x'(t)=-{\frac {r}{V}}x(t-\tau )}
The Lotka–Volterra predator-prey model was originally developed to observe the population of sharks and fish in the Adriatic Sea; however, this model has been used in many other fields for different uses, such as describing chemical reactions. The modelling of predator-prey populations has been widely researched, and as a result, there have been many different forms of the original equation.
One example, as shown by Xu, Wu (2013),[9]of the Lotka–Volterra model with time-delay is given below:p′(t)=p(t)[r1(t)−a11(t)p(t−τ11(t))−a12(t)P1(t−τ12(t))−a13(t)P2(t−τ13(t))]{\displaystyle p'(t)=p(t){\Biggl [}r_{1}(t)-a_{11}(t)p{\biggl (}t-\tau _{11}(t){\biggr )}-a_{12}(t)P_{1}{\biggl (}t-\tau _{12}(t){\biggr )}-a_{13}(t)P_{2}{\biggl (}t-\tau _{13}(t){\biggr )}{\Biggr ]}}P1′(t)=P1(t)[−r2(t)+a21(t)p(t−τ21(t))−a22(t)P1(t−τ22(t))−a23(t)P2(t−τ23(t))]{\displaystyle P_{1}'(t)=P_{1}(t){\Biggl [}-r_{2}(t)+a_{21}(t)p{\biggl (}t-\tau _{21}(t){\biggr )}-a_{22}(t)P_{1}{\biggl (}t-\tau _{22}(t){\biggr )}-a_{23}(t)P_{2}{\biggl (}t-\tau _{23}(t){\biggr )}{\Biggr ]}}P2′(t)=P2(t)[−r2(t)+a31(t)p(t−τ31(t))−a32(t)P1(t−τ32(t))−a33(t)P2(t−τ33(t))]{\displaystyle P_{2}'(t)=P_{2}(t){\Biggl [}-r_{2}(t)+a_{31}(t)p{\biggl (}t-\tau _{31}(t){\biggr )}-a_{32}(t)P_{1}{\biggl (}t-\tau _{32}(t){\biggr )}-a_{33}(t)P_{2}{\biggl (}t-\tau _{33}(t){\biggr )}{\Biggr ]}}wherep(t){\displaystyle p(t)}denotes the prey population density at time t,P1(t){\displaystyle P_{1}(t)}andP2(t){\displaystyle P_{2}(t)}denote the density of the predator population at timet,ri,aij∈C(R,[0,∞)){\displaystyle t,r_{i},a_{ij}\in C(\mathbb {R} ,[0,\infty ))}andτij∈C(R,R){\displaystyle \tau _{ij}\in C(\mathbb {R} ,\mathbb {R} )}
Examples of other models that have used FDEs, namely RFDEs, are given below:
|
https://en.wikipedia.org/wiki/Functional_differential_equation
|
In cryptography,subliminal channelsarecovert channelsthat can be used to communicate secretly in normal looking communication over aninsecure channel.[1]Subliminal channels indigital signaturecrypto systems were found in 1984 byGustavus Simmons.
Simmons describes how the "Prisoners' Problem" can be solved through parameter substitution indigital signaturealgorithms.[2][a]
Signature algorithms likeElGamalandDSAhave parameters which must be set with random information. He shows how one can make use of these parameters to send a message subliminally. Because the algorithm's signature creation procedure is unchanged, the signature remains verifiable and indistinguishable from a normal signature. Therefore, it is hard to detect if the subliminal channel is used.
The broadband and the narrow-band channels can use different algorithm parameters. A narrow-band channel cannot transport maximal information, but it can be used to send the authentication key or datastream.
Research is ongoing: further developments can enhance the subliminal channel, e.g., by allowing a broadband channel to be established without the need to agree on an authentication key in advance. Other developments try to suppress the subliminal channel entirely.
An easy example of a narrowband subliminal channel for normal human-language text would be to define that an even word count in a sentence is associated with the bit "0" and an odd word count with the bit "1". The question "Hello, how do you do?" would therefore send the subliminal message "1".
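A toy Python sketch of this word-count-parity channel (the filler word used to flip a sentence's parity is arbitrary and chosen only for illustration):

```python
def embed_bits(sentences, bits):
    """Hide one bit per sentence: even word count -> 0, odd word count -> 1."""
    out = []
    for sentence, bit in zip(sentences, bits):
        body, mark = sentence[:-1], sentence[-1]          # keep the final punctuation
        words = body.split()
        if len(words) % 2 != bit:
            words.append("indeed")                        # any parity-flipping edit works
        out.append(" ".join(words) + mark)
    return out

def extract_bits(sentences):
    return [len(s[:-1].split()) % 2 for s in sentences]

cover = ["Hello, how do you do?", "The weather is quite nice today."]
stego = embed_bits(cover, [1, 0])
print(stego)                # cover text, possibly with a filler word added
print(extract_bits(stego))  # -> [1, 0]
```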
TheDigital Signature Algorithmhas one subliminal broadband[3]and three subliminal narrow-band channels[4]
At signing time, the parameterk{\displaystyle k}has to be set to a random value. For the broadband channel, this parameter is instead set to a subliminal messagem′{\displaystyle m'}.
The formula for message extraction is derived by rearranging the calculation formula for the signature values{\displaystyle s}.
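A minimal sketch of the broadband channel with toy, insecure parameters chosen only for illustration: the signer uses the subliminal message itself as the per-signature nonce k, and a receiver who shares the signer's private key x recovers it by solving the DSA signing equation s = k⁻¹(H + x·r) mod q for k.

```python
import hashlib

# Toy DSA-style parameters, far too small for real use: q divides p - 1 and
# g = 2**((p - 1)//q) mod p has order q modulo p.
p, q, g = 607, 101, 64
x = 37                                   # private key, shared with the receiver

def H(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

def sign_with_subliminal(message: bytes, m_sub: int):
    k = m_sub                            # subliminal message replaces the random nonce
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(message) + x * r) % q
    assert r != 0 and s != 0             # a real signer would retry with another k
    return r, s

def extract_subliminal(message: bytes, signature) -> int:
    r, s = signature
    return pow(s, -1, q) * (H(message) + x * r) % q    # k = s^-1 (H + x*r) mod q

msg = b"innocent looking contract"
signature = sign_with_subliminal(msg, m_sub=42)        # hide 42 (must lie in 1..q-1)
print(extract_subliminal(msg, signature))              # -> 42
```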
In this example, an RSA modulus purporting to be of the form n = pq is actually of the form n = pqr, for primes p, q, and r. Calculation shows that exactly one extra bit can be hidden in the digitally signed message. The cure for this was found by cryptologists at theCentrum Wiskunde & InformaticainAmsterdam, who developed aZero-knowledge proofthat n is of the form n = pq.[citation needed]This example was motivated in part byThe Empty Silo Proposal.
Here is a (real, working) PGP public key (using the RSA algorithm), which was generated to include two subliminal channels - the first is the "key ID", which should normally be random hex, but below is "covertly" modified to read "C0DED00D". The second is the base64 representation of the public key - again, supposed to be all random gibberish, but the English-readable message "//This+is+Christopher+Drakes+PGP+public+key//Who/What+is+watcHIng+you//" has been inserted. Adding both these subliminal messages was accomplished by tampering with the random number generation during the RSA key generation phase.
A modification to theBrickell and DeLaurentis signature schemeprovides a broadband channel without the necessity to share the authentication key.[5]TheNewton channelis not a subliminal channel, but it can be viewed as an enhancement.[6]
With the help of thezero-knowledge proofand thecommitment schemeit is possible to prevent the usage of the subliminal channel.[7][8]
This countermeasure still leaves a 1-bit subliminal channel, because a proof can be made either to succeed or to purposely fail.[9]
Another countermeasure can detect, but not prevent, the subliminal usage of the randomness.[10]
|
https://en.wikipedia.org/wiki/Subliminal_channel
|
Abiordered set(otherwise known asboset) is amathematical objectthat occurs in the description of thestructureof the set ofidempotentsin asemigroup.
The set of idempotents in a semigroup is a biordered set and every biordered set is the set of idempotents of some semigroup.[1][2]A regular biordered set is a biordered set with an additional property. The set of idempotents in aregular semigroupis a regular biordered set, and every regular biordered set is the set of idempotents of some regular semigroup.[1]
The concept and the terminology were developed byK S S Nambooripadin the early 1970s.[3][4][1]In 2002, Patrick Jordan introduced the term boset as an abbreviation of biordered set.[5]The defining properties of a biordered set are expressed in terms of twoquasiordersdefined on the set and hence the name biordered set.
According to Mohan S. Putcha, "The axioms defining a biordered set are quite complicated. However, considering the general nature of semigroups, it is rather surprising that such a finite axiomatization is even possible."[6]Since the publication of the original definition of the biordered set by Nambooripad, several variations in the definition have been proposed. David Easdown simplified the definition and formulated the axioms in a special arrow notation invented by him.[7]
IfXandYaresetsandρ⊆X×Y, letρ(y) = {x∈X:xρy}.
LetEbe asetin which apartialbinary operation, indicated by juxtaposition, is defined. IfDEis thedomainof the partial binary operation onEthenDEis arelationonEand (e,f) is inDEif and only if the productefexists inE. The following relations can be defined inE:
IfTis anystatementaboutEinvolving the partial binary operation and the above relations inE, one can define the left-rightdualofTdenoted byT*. IfDEissymmetricthenT* is meaningful wheneverTis.
The setEis called a biordered set if the followingaxiomsand their duals hold for arbitrary elementse,f,g, etc. inE.
InM(e,f) =ωl(e) ∩ ωr(f)(theM-setofeandfin that order), define a relation≺{\displaystyle \prec }by
Then the set
is called thesandwich setofeandfin that order.
We say that a biordered setEis anM-biordered setifM(e,f) ≠ ∅ for alleandfinE.
Also,Eis called aregular biordered setifS(e,f) ≠ ∅ for alleandfinE.
In 2012 Roman S. Gigoń gave a simple proof thatM-biordered sets arise fromE-inversive semigroups.[8][clarification needed]
A subsetFof a biordered setEis a biordered subset (subboset) ofEifFis a biordered set under the partial binary operation inherited fromE.
For anyeinEthe setsωr(e),ωl(e)andω(e)are biordered subsets ofE.[1]
A mappingφ:E→Fbetween two biordered setsEandFis a biordered set homomorphism (also called a bimorphism) if for all (e,f) inDEwe have (eφ) (fφ) = (ef)φ.
LetVbe avector spaceand
whereV=A⊕Bmeans thatAandBaresubspacesofVandVis theinternal direct sumofAandB.
The partial binary operation ⋆ on E defined by
makesEa biordered set. The quasiorders inEare characterised as follows:
The setEof idempotents in a semigroupSbecomes a biordered set if a partial binary operation is defined inEas follows:efis defined inEif and only ifef=eoref=forfe=eorfe=fholds inS. IfSis a regular semigroup thenEis a regular biordered set.
As a concrete example, letSbe the semigroup of all mappings ofX= { 1, 2, 3 }into itself. Let the symbol (abc) denote the map for which1 →a, 2 →b,and3 →c. The setEof idempotents inScontains the following elements:
The following table (taking composition of mappings in the diagram order) describes the partial binary operation inE. AnXin a cell indicates that the corresponding multiplication is not defined.
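The idempotents and the partial table can be enumerated directly; the short Python sketch below (using dictionaries for maps and composing in the left-to-right "diagram" order mentioned above, both of which are just one convenient encoding) counts them and the defined products.

```python
from itertools import product

X = (1, 2, 3)
all_maps = [dict(zip(X, image)) for image in product(X, repeat=3)]
E = [f for f in all_maps if all(f[f[i]] == f[i] for i in X)]     # f composed with f equals f

def compose(e, f):                       # diagram order: apply e first, then f
    return {i: f[e[i]] for i in X}

def partial_product(e, f):
    ef, fe = compose(e, f), compose(f, e)
    if ef == e or ef == f or fe == e or fe == f:
        return ef                        # the product ef, taken in S, belongs to the boset
    return None                          # an "X" cell: the product is undefined in E

print(len(E), "idempotent mappings of {1, 2, 3}")                # 10 of them
defined = sum(partial_product(e, f) is not None for e in E for f in E)
print(defined, "of", len(E) ** 2, "entries of the table are defined")
```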
|
https://en.wikipedia.org/wiki/Biordered_set
|
Inconstraint programmingandSAT solving,backjumping(also known asnon-chronological backtracking[1]orintelligent backtracking[2]) is an enhancement forbacktrackingalgorithmswhich reduces thesearch space. While backtracking always goes up one level in thesearch treewhen all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of variablesx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}is used, but the same considerations apply to a dynamic order of evaluation.
Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or further backtracking if no other values are to be tried. Ifx1=a1,…,xk=ak{\displaystyle x_{1}=a_{1},\ldots ,x_{k}=a_{k}}is the current partial assignment and all values forxk+1{\displaystyle x_{k+1}}have been tried without finding a solution, backtracking concludes that no solution extendingx1=a1,…,xk=ak{\displaystyle x_{1}=a_{1},\ldots ,x_{k}=a_{k}}exists. The algorithm then "goes up" toxk{\displaystyle x_{k}}, changingxk{\displaystyle x_{k}}'s value if possible, backtracking again otherwise.
The partial assignment is not always necessary in full to prove that no value ofxk+1{\displaystyle x_{k+1}}leads to a solution. In particular, a prefix of the partial assignment may have the same property, that is, there exists an indexj<k{\displaystyle j<k}such thatx1,…,xj=a1,…,aj{\displaystyle x_{1},\ldots ,x_{j}=a_{1},\ldots ,a_{j}}cannot be extended to form a solution with whatever value forxk+1{\displaystyle x_{k+1}}. If the algorithm can prove this fact, it can directly consider a different value forxj{\displaystyle x_{j}}instead of reconsideringxk{\displaystyle x_{k}}as it would normally do.
The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump fromxk+1{\displaystyle x_{k+1}}to whichever variablexj{\displaystyle x_{j}}is such that the current assignment tox1,…,xj{\displaystyle x_{1},\ldots ,x_{j}}cannot be extended to form a solution with any value ofxk+1{\displaystyle x_{k+1}}. If this is the case,j{\displaystyle j}is called asafe jump.
Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different costs, but a higher cost of finding a higher safe jump may be traded off against the reduced amount of search due to skipping parts of the search tree.
The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. Inconstraint satisfaction, a partial evaluation isconsistentif and only if it satisfies all constraints involving the assigned variables, andinconsistentotherwise. It might be the case that a consistent partial solution cannot be extended to a consistent complete solution because some of the unassigned variables may not be assigned without violating other constraints.
The condition in which all values of a given variablexk+1{\displaystyle x_{k+1}}are inconsistent with the current partial solutionx1,…,xk=a1,…,ak{\displaystyle x_{1},\ldots ,x_{k}=a_{1},\ldots ,a_{k}}is called aleaf dead end. This happens exactly when the variablexk+1{\displaystyle x_{k+1}}is a leaf of the search tree (which corresponds to nodes having only leaves as children in the figures of this article).
The backjumping algorithm by John Gaschnig does a backjump only in leaf dead ends.[3]In other words, it works differently from backtracking only when every possible value ofxk+1{\displaystyle x_{k+1}}has been tested and found inconsistent without the need of branching over another variable.
A safe jump can be found by simply evaluating, for every valueak+1{\displaystyle a_{k+1}}, the shortest prefix ofx1,…,xk=a1,…,ak{\displaystyle x_{1},\ldots ,x_{k}=a_{1},\ldots ,a_{k}}inconsistent withxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}. In other words, ifak+1{\displaystyle a_{k+1}}is a possible value forxk+1{\displaystyle x_{k+1}}, the algorithm checks the consistency of the following evaluations:
The smallest index (lowest in the listing) for which the evaluations are inconsistent would be a safe jump ifxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}were the only possible value forxk+1{\displaystyle x_{k+1}}. Since every variable can usually take more than one value, the maximal index that comes out of this check over all values is a safe jump, and is the point where John Gaschnig's algorithm jumps.
In practice, the algorithm can check the evaluations above at the same time it is checking the consistency ofxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}.
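A sketch of this leaf-dead-end computation (the `consistent` predicate, the prefix of assigned values, and the candidate domain are placeholders supplied by the surrounding solver; they are not part of any standard library):

```python
def gaschnig_safe_jump(prefix_values, next_domain, consistent):
    """Index to backjump to from a leaf dead end at x_{k+1}.

    prefix_values -- the values a_1..a_k currently assigned to x_1..x_k
    next_domain   -- the candidate values for x_{k+1}
    consistent    -- consistent(prefix, value): is x_{k+1} = value compatible
                     with the given prefix of the current partial assignment?

    For each candidate value, find the length of the shortest inconsistent
    prefix; the safe jump is the largest of these lengths over all values.
    """
    deepest = 0
    for value in next_domain:
        for j in range(1, len(prefix_values) + 1):
            if not consistent(prefix_values[:j], value):
                deepest = max(deepest, j)
                break
        else:
            # Some value is consistent with the whole prefix, so this is not a
            # leaf dead end; ordinary chronological backtracking applies.
            return len(prefix_values)
    return deepest
```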
The previous algorithm only backjumps when the values of a variable can be shown inconsistent with the current partial solution without further branching. In other words, it allows for a backjump only at leaf nodes in the search tree.
An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones. If no solution extends this assignment, the previous algorithm always backtracks: no backjump is done in this case.
Backjumping at internal nodes cannot be done as for leaf nodes. Indeed, if some evaluations ofxk+1{\displaystyle x_{k+1}}required branching, it is because they are consistent with the current assignment. As a result, searching for a prefix that is inconsistent with these values of the last variable does not succeed.
In such cases, what proved an evaluationxk+1=ak+1{\displaystyle x_{k+1}=a_{k+1}}not to be part of a solution with the current partial evaluationx1,…,xk{\displaystyle x_{1},\ldots ,x_{k}}is therecursivesearch. In particular, the algorithm "knows" that no solution exists from this point on because it comes back to this node instead of stopping after having found a solution.
This return is due to a number ofdead ends, points where the algorithm has proved a partial solution inconsistent. In order to further backjump, the algorithm has to take into account that the impossibility of finding solutions is due to these dead ends. In particular, the safe jumps are indexes of prefixes that still make these dead ends to be inconsistent partial solutions.
In other words, when all values ofxk+1{\displaystyle x_{k+1}}have been tried, the algorithm can backjump to a previous variablexi{\displaystyle x_{i}}provided that the current truth evaluation ofx1,…,xi{\displaystyle x_{1},\ldots ,x_{i}}is inconsistent with all the truth evaluations ofxk+1,xk+2,...{\displaystyle x_{k+1},x_{k+2},...}in the leaf nodes that are descendants of the nodexk+1{\displaystyle x_{k+1}}.
Due to the potentially high number of nodes that are in the subtree ofxk+1{\displaystyle x_{k+1}}, the information that is necessary to safely backjump fromxk+1{\displaystyle x_{k+1}}is collected during the visit of its subtree. Finding a safe jump can be simplified by two considerations. The first is that the algorithm needs a safe jump, but still works with a jump that is not the highest possible safe jump.
The second simplification is that nodes in the subtree ofxl{\displaystyle x_{l}}that have been skipped by a backjump can be ignored while looking for a backjump forxl{\displaystyle x_{l}}. More precisely, all nodes skipped by a backjump from nodexm{\displaystyle x_{m}}up to nodexl{\displaystyle x_{l}}are irrelevant to the subtree rooted atxm{\displaystyle x_{m}}, and also irrelevant are their other subtrees.
Indeed, if an algorithm went down from nodexl{\displaystyle x_{l}}toxm{\displaystyle x_{m}}via a path but backjumps in its way back, then it could have gone directly fromxl{\displaystyle x_{l}}toxm{\displaystyle x_{m}}instead. Indeed, the backjump indicates that the nodes betweenxl{\displaystyle x_{l}}andxm{\displaystyle x_{m}}are irrelevant to the subtree rooted atxm{\displaystyle x_{m}}. In other words, a backjump indicates that the visit of a region of the search tree had been a mistake. This part of the search tree can therefore be ignored when considering a possible backjump fromxl{\displaystyle x_{l}}or from one of its ancestors.
This fact can be exploited by collecting, in each node, a set of previously assigned variables whose evaluation suffices to prove that no solution exists in the subtree rooted at the node. This set is built during the execution of the algorithm. When retracting from a node, the variable of the node is removed from this set and the set is merged into the set of the destination of backtracking or backjumping. Since nodes that are skipped from backjumping are never retracted from, their sets are automatically ignored.
The rationale of graph-based backjumping is that a safe jump can be found by checking which of the variablesx1,…,xk{\displaystyle x_{1},\ldots ,x_{k}}are in a constraint with the variablesxk+1,xk+2,...{\displaystyle x_{k+1},x_{k+2},...}that are instantiated in leaf nodes. For every leaf node and every variablexi{\displaystyle x_{i}}of indexi>k{\displaystyle i>k}that is instantiated there, the indexes less than or equal tok{\displaystyle k}whose variable is in a constraint withxi{\displaystyle x_{i}}can be used to find safe jumps. In particular, when all values forxk+1{\displaystyle x_{k+1}}have been tried, this set contains the indexes of the variables whose evaluations allow proving that no solution can be found by visiting the subtree rooted atxk+1{\displaystyle x_{k+1}}. As a result, the algorithm can backjump to the highest index in this set.
The fact that nodes skipped by backjumping can be ignored when considering a further backjump can be exploited by the following algorithm. When retracting from a leaf node, the set of variables that are in constraint with it is created and "sent back" to its parent, or ancestor in case of backjumping. At every internal node, a set of variables is maintained. Every time a set of variables is received from one of its children or descendants, their variables are added to the maintained set. When further backtracking or backjumping from the node, the variable of the node is removed from this set, and the set is sent to the node that is the destination of backtracking or backjumping. This algorithm works because the set maintained in a node collects all variables that are relevant to prove unsatisfiability in the leaves that are descendants of this node. Since sets of variables are only sent when retracing from nodes, the sets collected at nodes skipped by backjumping are automatically ignored.
Conflict-based backjumping (a.k.a.conflict-directed backjumping) is a more refined algorithm and sometimes able to achieve larger backjumps. It is based on checking not only the common presence of two variables in the same constraint but also on whether the constraint actually caused any inconsistency. In particular, this algorithm collects one of the violated constraints in every leaf. At every node, the highest index of a variable that is in one of the constraints collected at the leaves is a safe jump.
While the violated constraint chosen in each leaf does not affect the safety of the resulting jump, choosing constraints of highest possible indices increases the height of the jump. For this reason, conflict-based backjumping orders constraints in such a way that constraints over lower-index variables are preferred to constraints over higher-index variables.
Formally, a constraintC{\displaystyle C}is preferred over another oneD{\displaystyle D}if the highest index of a variable inC{\displaystyle C}but not inD{\displaystyle D}is lower than the highest index of a variable inD{\displaystyle D}but not inC{\displaystyle C}. In other words, excluding common variables, the constraint whose variables have the lower indices is preferred.
In a leaf node, the algorithm chooses the lowest indexi{\displaystyle i}such thatx1,…,xi{\displaystyle x_{1},\ldots ,x_{i}}is inconsistent with the last variable evaluated in the leaf. Among the constraints that are violated in this evaluation, it chooses the most preferred one, and collects all its indices less thank+1{\displaystyle k+1}. This way, when the algorithm comes back to the variablexk+1{\displaystyle x_{k+1}}, the lowest collected index identifies a safe jump.
In practice, this algorithm is simplified by collecting all indices in a single set, instead of creating a set for every value ofk{\displaystyle k}. In particular, the algorithm collects, in each node, all sets coming from its descendants that have not been skipped by backjumping. When retracting from this node, the variable of the node is removed from this set and the set is merged into the set of the destination of backtracking or backjumping.
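A compact recursive sketch of conflict-directed backjumping for binary constraints (Prosser's formulation is iterative with an explicit conflict set per level; here the conflict set is simply returned up the recursion, and the `conflicts` predicate, variable order, and domains are placeholders made up for the example):

```python
def cbj_search(variables, domains, conflicts):
    """conflicts(u, a, v, b) -> True when the assignments u = a and v = b clash."""

    def search(level, assignment):
        # Returns (True, solution) on success, or (False, conflict_levels) on failure.
        if level == len(variables):
            return True, dict(assignment)
        var = variables[level]
        conflict_set = set()
        for value in domains[var]:
            culprits = {h for h in range(level)
                        if conflicts(variables[h], assignment[variables[h]], var, value)}
            if culprits:
                conflict_set |= culprits           # remember which levels ruled this value out
                continue
            assignment[var] = value
            ok, result = search(level + 1, assignment)
            del assignment[var]
            if ok:
                return True, result
            if level not in result:                # this level is irrelevant to the failure:
                return False, result               # backjump straight past it
            conflict_set |= result - {level}       # absorb the deeper conflict set
        return False, conflict_set

    ok, result = search(0, {})
    return result if ok else None

# Tiny example: 3-colour a triangle with a pendant vertex attached to it.
variables = ["a", "b", "c", "d"]
domains = {v: [0, 1, 2] for v in variables}
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
def conflicts(u, x, v, y):
    return x == y and ((u, v) in edges or (v, u) in edges)
print(cbj_search(variables, domains, conflicts))   # e.g. {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```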
Conflict-directed backjumping was proposed forConstraint Satisfaction ProblemsbyPatrick Prosserin his seminal 1993 paper.[4]
|
https://en.wikipedia.org/wiki/Backjumping
|
Thesecond continuum hypothesis, also calledLuzin's hypothesisorLuzin's second continuum hypothesis, is the hypothesis that2ℵ0=2ℵ1{\displaystyle 2^{\aleph _{0}}=2^{\aleph _{1}}}. It is the negation ofa weakened form,2ℵ0<2ℵ1{\displaystyle 2^{\aleph _{0}}<2^{\aleph _{1}}}, of theContinuum Hypothesis(CH). It was discussed byNikolai Luzinin 1935, although he did not claim to be the first to postulate it.[note 1][2][3]: 157, 171[4]: §3[1]: 130–131The statement2ℵ0<2ℵ1{\displaystyle 2^{\aleph _{0}}<2^{\aleph _{1}}}may also be called Luzin's hypothesis.[2]
The second continuum hypothesis is independent ofZermelo–Fraenkel set theorywith theAxiom of Choice(ZFC): its truth is consistent with ZFC since it is true inCohen's model of ZFC with the negation of the Continuum Hypothesis;[5][6]: 109–110its falsity is also consistent since it is contradicted by theContinuum Hypothesis, which follows fromV=L. It is implied byMartin's Axiomtogether with the negation of the CH.[2]
|
https://en.wikipedia.org/wiki/Second_continuum_hypothesis
|
Inmathematical analysis, themaximumandminimum[a]of afunctionare, respectively, the greatest and least value taken by the function. Known generically asextremum,[b]they may be defined either within a givenrange(thelocalorrelativeextrema) or on the entiredomain(theglobalorabsoluteextrema) of a function.[1][2][3]Pierre de Fermatwas one of the first mathematicians to propose a general technique,adequality, for finding the maxima and minima of functions.
As defined inset theory, the maximum and minimum of asetare thegreatest and least elementsin the set, respectively. Unboundedinfinite sets, such as the set ofreal numbers, have no minimum or maximum.
Instatistics, the corresponding concept is thesample maximum and minimum.
A real-valuedfunctionfdefined on adomainXhas aglobal(orabsolute)maximum pointatx∗, iff(x∗) ≥f(x)for allxinX. Similarly, the function has aglobal(orabsolute)minimum pointatx∗, iff(x∗) ≤f(x)for allxinX. The value of the function at a maximum point is called themaximum valueof the function, denotedmax(f(x)){\displaystyle \max(f(x))}, and the value of the function at a minimum point is called theminimum valueof the function (denotedmin(f(x)){\displaystyle \min(f(x))}for clarity). Symbolically, this can be written as follows:x∗∈X{\displaystyle x^{*}\in X}is a global maximum point of the functionf:X→R{\displaystyle f\colon X\to \mathbb {R} }if(∀x∈X)f(x∗)≥f(x){\displaystyle (\forall x\in X)\ f(x^{*})\geq f(x)}.
The definition of global minimum point also proceeds similarly.
If the domainXis ametric space, thenfis said to have alocal(orrelative)maximum pointat the pointx∗, if there exists someε> 0 such thatf(x∗) ≥f(x)for allxinXwithin distanceεofx∗. Similarly, the function has alocal minimum pointatx∗, iff(x∗) ≤f(x) for allxinXwithin distanceεofx∗. A similar definition can be used whenXis atopological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows:x∗{\displaystyle x^{*}}is a local maximum point if there exists someε>0{\displaystyle \varepsilon >0}such thatf(x∗)≥f(x){\displaystyle f(x^{*})\geq f(x)}for allx∈X{\displaystyle x\in X}withd(x,x∗)<ε{\displaystyle d(x,x^{*})<\varepsilon }.
The definition of local minimum point can also proceed similarly.
In both the global and local cases, the concept of astrict extremumcan be defined. For example,x∗is astrict global maximum pointif for allxinXwithx≠x∗, we havef(x∗) >f(x), andx∗is astrict local maximum pointif there exists someε> 0such that, for allxinXwithin distanceεofx∗withx≠x∗, we havef(x∗) >f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.
Acontinuousreal-valued function with acompactdomain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and boundedintervalofreal numbers(see the graph above).
Finding global maxima and minima is the goal ofmathematical optimization. If a function is continuous on a closed interval, then by theextreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one.
Fordifferentiable functions,Fermat's theoremstates that local extrema in the interior of a domain must occur atcritical points(or points where the derivative equals zero).[4]However, not all critical points are extrema. One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using thefirst derivative test,second derivative test, orhigher-order derivative test, given sufficient differentiability.[5]
For any function that is definedpiecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least).
For a practical example,[6]assume a situation where someone has200{\displaystyle 200}feet of fencing and is trying to maximize the square footage of a rectangular enclosure, wherex{\displaystyle x}is the length,y{\displaystyle y}is the width, andxy{\displaystyle xy}is the area. Since2x+2y=200{\displaystyle 2x+2y=200}, the width isy=100−x{\displaystyle y=100-x}and the area to maximize isxy=x(100−x)=100x−x2{\displaystyle xy=x(100-x)=100x-x^{2}}.
The derivative with respect tox{\displaystyle x}is:ddx(100x−x2)=100−2x{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\left(100x-x^{2}\right)=100-2x}
Setting this equal to0{\displaystyle 0},100−2x=0,{\displaystyle 100-2x=0,}
reveals thatx=50{\displaystyle x=50}is our onlycritical point.
Now retrieve theendpointsby determining the interval to whichx{\displaystyle x}is restricted. Since width is positive, thenx>0{\displaystyle x>0}, and sincex=100−y{\displaystyle x=100-y},that implies thatx<100{\displaystyle x<100}.Plug in critical point50{\displaystyle 50},as well as endpoints0{\displaystyle 0}and100{\displaystyle 100},intoxy=x(100−x){\displaystyle xy=x(100-x)},and the results are2500,0,{\displaystyle 2500,0,}and0{\displaystyle 0}respectively.
Therefore, the greatest area attainable with a rectangle of200{\displaystyle 200}feet of fencing is50×50=2500{\displaystyle 50\times 50=2500}.[6]
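The same comparison of the critical point against the endpoints can be checked with a few lines of Python (a numerical check of the worked example, nothing more):

```python
def area(x):
    return x * (100 - x)        # 200 feet of fencing: y = 100 - x, area = x*y

candidates = [0, 50, 100]       # the endpoints and the critical point
best = max(candidates, key=area)
print(best, area(best))         # -> 50 2500
```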
For functions of more than one variable, similar conditions apply. For example, in the (enlargeable) figure on the right, the necessary conditions for alocalmaximum are similar to those of a function with only one variable. The firstpartial derivativesas toz(the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of asaddle point. For use of these conditions to solve for a maximum, the functionzmust also bedifferentiablethroughout. Thesecond partial derivative testcan help classify the point as a relative maximum or relative minimum.
In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable functionfdefined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use theintermediate value theoremandRolle's theoremto prove this bycontradiction). In two and more dimensions, this argument fails. This is illustrated by the function
whose only critical point is at (0,0), which is a local minimum withf(0,0) = 0. However, it cannot be a global one, becausef(2,3) = −5.
If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of afunctional), then the extremum is found using thecalculus of variations.
Maxima and minima can also be defined for sets. In general, if anordered setShas agreatest elementm, thenmis amaximal elementof the set, also denoted asmax(S){\displaystyle \max(S)}. Furthermore, ifSis a subset of an ordered setTandmis the greatest element ofS(with respect to the order induced byT), thenmis aleast upper boundofSinT. Similar results hold forleast element,minimal elementandgreatest lower bound. The maximum and minimum function for sets are used indatabases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions.
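For example, self-decomposability means the maximum of a set can be computed from the maxima of any partition of it, which is what lets databases aggregate partitioned data; a short Python check (the data and partition are arbitrary):

```python
data = [3, 41, 7, 12, 9, 25, 6]
parts = [data[:3], data[3:5], data[5:]]                # any partition of the data
assert max(data) == max(max(part) for part in parts)   # max of the partial maxima
```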
In the case of a generalpartial order, aleast element(i.e., one that is less than all others) should not be confused with theminimal element(nothing is lesser). Likewise, agreatest elementof apartially ordered set(poset) is anupper boundof the set which is contained within the set, whereas themaximal elementmof a posetAis an element ofAsuch that ifm≤b(for anybinA), thenm=b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable.
In atotally orderedset, orchain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the termsminimumandmaximum.
If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set ofnatural numbershas no maximum, though it has a minimum. If an infinite chainSis bounded, then theclosureCl(S) of the set occasionally has a minimum and a maximum, in which case they are called thegreatest lower boundand theleast upper boundof the setS, respectively.
|
https://en.wikipedia.org/wiki/Maximum_and_minimum
|
Incryptography, apreimage attackoncryptographic hash functionstries to find amessagethat has a specific hash value. A cryptographic hash function should resist attacks on itspreimage(set of possible inputs).
In the context of attack, there are two types of preimage resistance:
These can be compared with acollision resistance, in which it is computationally infeasible to find any two distinct inputsx,x′that hash to the same output; i.e., such thath(x) =h(x′).[1]
Collision resistance implies second-preimage resistance. Second-preimage resistance implies preimage resistance only if the size of the hash function's inputs can be substantially (e.g., factor 2) larger than the size of the hash function's outputs.[1]Conversely, a second-preimage attack implies a collision attack (trivially, since, in addition tox′,xis already known right from the start).
By definition, an ideal hash function is such that the fastest way to compute a first or second preimage is through abrute-force attack. For ann-bit hash, this attack has atime complexity2n, which is considered too high for a typical output size ofn= 128 bits. If such complexity is the best that can be achieved by an adversary, then the hash function is considered preimage-resistant. However, there is a general result thatquantum computersperform a structured preimage attack in2n=2n2{\displaystyle {\sqrt {2^{n}}}=2^{\frac {n}{2}}}, which also implies second preimage[2]and thus a collision attack.
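To make the 2^n figure concrete, the sketch below brute-forces a first preimage of a deliberately truncated hash: only the first 16 bits of SHA-256 are kept, so a match is expected after roughly 2^16 trials. (The truncation and the counter-style candidate inputs are choices made for the demonstration, not part of any attack on full SHA-256.)

```python
import hashlib
from itertools import count

def truncated_hash(data: bytes, bits: int = 16) -> int:
    """Keep only the top `bits` bits of SHA-256, to make brute force feasible."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

target = truncated_hash(b"some unknown original input")

for i in count():                          # enumerate candidate inputs
    candidate = str(i).encode()
    if truncated_hash(candidate) == target:
        print(f"preimage found after {i + 1} tries: {candidate!r}")
        break
```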
Faster preimage attacks can be found bycryptanalysingcertain hash functions, and are specific to that function. Some significant preimage attacks have already been discovered, but they are not yet practical. If a practical preimage attack is discovered, it would drastically affect many Internet protocols. In this case, "practical" means that it could be executed by an attacker with a reasonable amount of resources. For example, a preimage attack that costs trillions of dollars and takes decades to preimage one desired hash value or one message is not practical; one that costs a few thousand dollars and takes a few weeks might be very practical.
All currently known practical or almost-practical attacks[3][4]onMD5andSHA-1arecollision attacks.[5]In general, a collision attack is easier to mount than a preimage attack, as it is not restricted by any set value (any two values can be used to collide). The time complexity of a brute-force collision attack, in contrast to the preimage attack, is only2n2{\displaystyle 2^{\frac {n}{2}}}.
The computational infeasibility of a first preimage attack on an ideal hash function assumes that the set of possible hash inputs is too large for a brute force search. However if a given hash value is known to have been produced from a set of inputs that is relatively small or is ordered by likelihood in some way, then a brute force search may be effective. Practicality depends on the input set size and the speed or cost of computing the hash function.
A common example is the use of hashes to storepasswordvalidation data for authentication. Rather than store the plaintext of user passwords, an access control system stores a hash of the password. When a user requests access, the password they submit is hashed and compared with the stored value. If the stored validation data is stolen, the thief will only have the hash values, not the passwords. However, most users choose passwords in predictable ways and many passwords are short enough that all possible combinations can be tested if fast hashes are used, even if the hash is rated secure against preimage attacks.[6]Special hashes calledkey derivation functionshave been created to slow searches.SeePassword cracking. For a method to prevent the testing of short passwords seesalt (cryptography).
|
https://en.wikipedia.org/wiki/Second_preimage_attack
|
TheMarkov condition, sometimes called theMarkov assumption, is an assumption made inBayesian probability theory, that every node in aBayesian networkisconditionally independentof its nondescendants, given its parents. Stated loosely, it is assumed that a node has no bearing on nodes which do not descend from it. In aDAG, this local Markov condition is equivalent to the global Markov condition, which states thatd-separationsin the graph also correspond to conditional independence relations.[1][2]This also means that a node is conditionally independent of the entire network, given itsMarkov blanket.
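As a small numerical illustration, take the chain A → B → C with made-up probability tables; the local Markov condition says C is independent of its nondescendant A once its parent B is given, which can be verified directly from the joint distribution:

```python
from itertools import product

# A -> B -> C with arbitrary (made-up) conditional probability tables.
p_a = {0: 0.3, 1: 0.7}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
p_c_given_b = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.5, 1: 0.5}}

joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in product((0, 1), repeat=3)}

def p_c1(b, a=None):
    """P(C = 1 | B = b), or P(C = 1 | A = a, B = b) when a is given."""
    rows = [k for k in joint if k[1] == b and (a is None or k[0] == a)]
    return sum(joint[k] for k in rows if k[2] == 1) / sum(joint[k] for k in rows)

for a, b in product((0, 1), repeat=2):
    assert abs(p_c1(b) - p_c1(b, a)) < 1e-12    # C is independent of A given its parent B
```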
The relatedCausal Markov (CM) conditionstates that, conditional on the set of all its direct causes, a node is independent of all variables which are not effects or direct causes of that node.[3]In the event that the structure of a Bayesian network accurately depictscausality, the two conditions are equivalent. However, a network may accurately embody the Markov condition without depicting causality, in which case it should not be assumed to embody the causal Markov condition.
Statisticians are enormously interested in the ways in which certain events and variables are connected. The precise notion of what constitutes a cause and effect is necessary to understand the connections between them. The central idea behind the philosophical study of probabilistic causation is that causes raise the probabilities of their effects,all else being equal.
Adeterministicinterpretation of causation means that ifAcausesB, thenAmustalwaysbe followed byB. In this sense, smoking does not cause cancer because some smokers never develop cancer.
On the other hand, aprobabilisticinterpretation simply means that causes raise the probability of their effects. In this sense, changes in meteorological readings associated with a storm do cause that storm, since they raise its probability. (However, simply looking at a barometer does not change the probability of the storm, for a more detailed analysis, see:[4]).
It follows from the definition that ifXandYare inVand are probabilistically dependent, then eitherXcausesY,YcausesX, orXandYare both effects of some common causeZinV.[3]This definition was seminally introduced by Hans Reichenbach as the Common Cause Principle (CCP)[5]
It once again follows from the definition that the parents ofXscreenXfrom other "indirect causes" ofX(parents of Parents(X)) and other effects of Parents(X) which are not also effects ofX.[3]
In a simple view, releasing one's hand from a hammer causes the hammer to fall. However, doing so in outer space does not produce the same outcome, calling into question if releasing one's fingers from a hammeralwayscauses it to fall.
A causal graph could be created to acknowledge that both the presence of gravity and the release of the hammer contribute to its falling. However, it would be very surprising if the surface underneath the hammer affected its falling. This essentially states the Causal Markov Condition: given the existence of gravity and the release of the hammer, it will fall regardless of what is beneath it.
|
https://en.wikipedia.org/wiki/Causal_Markov_condition
|
In mathematics, the values of thetrigonometric functionscan be expressed approximately, as incos(π/4)≈0.707{\displaystyle \cos(\pi /4)\approx 0.707}, or exactly, as incos(π/4)=2/2{\displaystyle \cos(\pi /4)={\sqrt {2}}/2}. Whiletrigonometric tablescontain many approximate values, the exact values for certain angles can be expressed by a combination of arithmetic operations andsquare roots. The angles with trigonometric values that are expressible in this way are exactly those that can be constructed with acompass and straight edge, and the values are calledconstructible numbers.
The trigonometric functions of angles that are multiples of 15°, 18°, or 22.5° have simple algebraic values. These values are listed in the following table for angles from 0° to 45°[1][2](seebelowfor proofs). In the table below, the label "Undefined" represents a ratio1:0.{\displaystyle 1:0.}If the codomain of the trigonometric functions is taken to be thereal numbersthese entries areundefined, whereas if the codomain is taken to be theprojectively extended real numbers, these entries take the value∞{\displaystyle \infty }(seedivision by zero).
For angles outside of this range, trigonometric values can be found by applyingreflection and shift identitiessuch as
Atrigonometric numberis a number that can be expressed as thesine or cosineof arationalmultiple ofπradians.[3]Sincesin(x)=cos(x−π/2),{\displaystyle \sin(x)=\cos(x-\pi /2),}the case of a sine can be omitted from this definition. Therefore any trigonometric number can be written ascos(2πk/n){\displaystyle \cos(2\pi k/n)}, wherekandnare integers. This number can be thought of as the real part of thecomplex numbercos(2πk/n)+isin(2πk/n){\displaystyle \cos(2\pi k/n)+i\sin(2\pi k/n)}.De Moivre's formulashows that numbers of this form areroots of unity:
Since the root of unity is arootof the polynomialxn− 1, it isalgebraic. Since the trigonometric number is the average of the root of unity and itscomplex conjugate, and algebraic numbers are closed under arithmetic operations, every trigonometric number is algebraic.[3]The minimal polynomials of trigonometric numbers can beexplicitly enumerated.[4]In contrast, by theLindemann–Weierstrass theorem, the sine or cosine of any non-zero algebraic number is always transcendental.[5]
The real part of any root of unity is a trigonometric number. ByNiven's theorem, the only rational trigonometric numbers are 0, 1, −1, 1/2, and −1/2.[6]
An angle can be constructed with a compass and straightedge if and only if its sine (or equivalently cosine) can be expressed by a combination of arithmetic operations and square roots applied to integers.[7]Additionally, an angle that is a rational multiple ofπ{\displaystyle \pi }radians is constructible if and only if, when it is expressed asaπ/b{\displaystyle a\pi /b}radians, whereaandbarerelatively primeintegers, theprime factorizationof the denominator,b, is the product of somepower of twoand any number of distinctFermat primes(a Fermat prime is a prime number one greater than a power of two).[8]
Thus, for example,2π/15=24∘{\displaystyle 2\pi /15=24^{\circ }}is a constructible angle because 15 is the product of the Fermat primes 3 and 5. Similarlyπ/12=15∘{\displaystyle \pi /12=15^{\circ }}is a constructible angle because 12 is a power of two (4) times a Fermat prime (3). Butπ/9=20∘{\displaystyle \pi /9=20^{\circ }}is not a constructible angle, since9=3⋅3{\displaystyle 9=3\cdot 3}is not the product ofdistinctFermat primes as it contains 3 as a factor twice, and neither isπ/7≈25.714∘{\displaystyle \pi /7\approx 25.714^{\circ }}, since 7 is not a Fermat prime.[9]
It results from the above characterisation that an angle of an integer number of degrees is constructible if and only if this number of degrees is a multiple of3.
From areflection identity,cos(45∘)=sin(90∘−45∘)=sin(45∘){\displaystyle \cos(45^{\circ })=\sin(90^{\circ }-45^{\circ })=\sin(45^{\circ })}. Substituting into thePythagorean trigonometric identitysin(45∘)2+cos(45∘)2=1{\displaystyle \sin(45^{\circ })^{2}+\cos(45^{\circ })^{2}=1}, one obtains theminimal polynomial2sin(45∘)2−1=0{\displaystyle 2\sin(45^{\circ })^{2}-1=0}. Taking the positive root, one findssin(45∘)=cos(45∘)=1/2=2/2{\displaystyle \sin(45^{\circ })=\cos(45^{\circ })=1/{\sqrt {2}}={\sqrt {2}}/2}.
A geometric way of deriving the sine or cosine of 45° is by considering an isosceles right triangle with leg length 1. Since two of the angles in an isosceles triangle are equal, if the remaining angle is 90° for a right triangle, then the two equal angles are each 45°. Then by the Pythagorean theorem, the length of the hypotenuse of such a triangle is2{\displaystyle {\sqrt {2}}}. Scaling the triangle so that its hypotenuse has a length of 1 divides the lengths by2{\displaystyle {\sqrt {2}}}, giving the same value for the sine or cosine of 45° given above.
The values of sine and cosine of 30 and 60 degrees are derived by analysis of theequilateral triangle. In an equilateral triangle, the 3 angles are equal and sum to 180°, therefore each corner angle is 60°. Bisecting one corner, thespecial right trianglewith angles 30-60-90 is obtained. By symmetry, the bisected side is half of the side of the equilateral triangle, so one concludessin(30∘)=1/2{\displaystyle \sin(30^{\circ })=1/2}. The Pythagorean and reflection identities then givesin(60∘)=cos(30∘)=1−(1/2)2=3/2{\displaystyle \sin(60^{\circ })=\cos(30^{\circ })={\sqrt {1-(1/2)^{2}}}={\sqrt {3}}/2}.
The value ofsin(18∘){\displaystyle \sin(18^{\circ })}may be derived using themultiple angle formulasfor sine and cosine.[10]By the double angle formula for sine:
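{\displaystyle \sin(36^{\circ })=2\sin(18^{\circ })\cos(18^{\circ })}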
By the triple angle formula for cosine:
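{\displaystyle \cos(54^{\circ })=4\cos ^{3}(18^{\circ })-3\cos(18^{\circ })}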
Since sin(36°) = cos(54°), we equate these two expressions and cancel a factor of cos(18°):
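{\displaystyle 2\sin(18^{\circ })=4\cos ^{2}(18^{\circ })-3=1-4\sin ^{2}(18^{\circ }),\qquad {\text{that is,}}\qquad 4\sin ^{2}(18^{\circ })+2\sin(18^{\circ })-1=0}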
This quadratic equation has only one positive root:
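{\displaystyle \sin(18^{\circ })={\frac {{\sqrt {5}}-1}{4}}}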
The Pythagorean identity then givescos(18∘){\displaystyle \cos(18^{\circ })}, and the double and triple angle formulas give sine and cosine of 36°, 54°, and 72°. Thencos(36∘)=(5+1)/4=φ/2{\displaystyle \cos(36^{\circ })=({\sqrt {5}}+1)/4=\varphi /2}, whereφ{\displaystyle \varphi }is thegolden ratio.
The sines and cosines of all other angles between 0 and 90° that are multiples of 3° can be derived from the angles described above and thesum and difference formulas. Specifically,[11]
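{\displaystyle \sin(\alpha \pm \beta )=\sin \alpha \cos \beta \pm \cos \alpha \sin \beta ,\qquad \cos(\alpha \pm \beta )=\cos \alpha \cos \beta \mp \sin \alpha \sin \beta }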
For example, since24∘=60∘−36∘{\displaystyle 24^{\circ }=60^{\circ }-36^{\circ }}, its cosine can be derived by the cosine difference formula:
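{\displaystyle \cos(24^{\circ })=\cos(60^{\circ })\cos(36^{\circ })+\sin(60^{\circ })\sin(36^{\circ })={\frac {1}{2}}\cdot {\frac {{\sqrt {5}}+1}{4}}+{\frac {\sqrt {3}}{2}}\cdot {\frac {\sqrt {10-2{\sqrt {5}}}}{4}}={\frac {1+{\sqrt {5}}+{\sqrt {30-6{\sqrt {5}}}}}{8}}}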
If the denominator,b, is multiplied by additional factors of 2, the sine and cosine can be derived with thehalf-angle formulas. For example, 22.5° (π/8 rad) is half of 45°, so its sine and cosine are:[12]
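{\displaystyle \sin(22.5^{\circ })={\sqrt {\frac {1-\cos(45^{\circ })}{2}}}={\frac {\sqrt {2-{\sqrt {2}}}}{2}},\qquad \cos(22.5^{\circ })={\sqrt {\frac {1+\cos(45^{\circ })}{2}}}={\frac {\sqrt {2+{\sqrt {2}}}}{2}}}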
Repeated application of the half-angle formulas leads tonested radicals, specifically nestedsquare roots of 2of the form2±⋯{\displaystyle {\sqrt {2\pm \cdots }}}. In general, the sine and cosine of most angles of the formβ/2n{\displaystyle \beta /2^{n}}can be expressed using nested square roots of 2 in terms ofβ{\displaystyle \beta }. Specifically, if one can write an angle asα=π(12−∑i=1k∏j=1ibj2i+1)=π(12−b14−b1b28−b1b2b316−…−b1b2…bk2k+1){\displaystyle \alpha =\pi \left({\frac {1}{2}}-\sum _{i=1}^{k}{\frac {\prod _{j=1}^{i}b_{j}}{2^{i+1}}}\right)=\pi \left({\frac {1}{2}}-{\frac {b_{1}}{4}}-{\frac {b_{1}b_{2}}{8}}-{\frac {b_{1}b_{2}b_{3}}{16}}-\ldots -{\frac {b_{1}b_{2}\ldots b_{k}}{2^{k+1}}}\right)}wherebk∈[−2,2]{\displaystyle b_{k}\in [-2,2]}andbi{\displaystyle b_{i}}is -1, 0, or 1 fori<k{\displaystyle i<k}, then[13]cos(α)=b122+b22+b32+…+bk−12+2sin(πbk4){\displaystyle \cos(\alpha )={\frac {b_{1}}{2}}{\sqrt {2+b_{2}{\sqrt {2+b_{3}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}and ifb1≠0{\displaystyle b_{1}\neq 0}then[13]sin(α)=122−b22+b32+b42+…+bk−12+2sin(πbk4){\displaystyle \sin(\alpha )={\frac {1}{2}}{\sqrt {2-b_{2}{\sqrt {2+b_{3}{\sqrt {2+b_{4}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}}}For example,13π32=π(12−14+18+116−132){\displaystyle {\frac {13\pi }{32}}=\pi \left({\frac {1}{2}}-{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}-{\frac {1}{32}}\right)}, so one has(b1,b2,b3,b4)=(1,−1,1,−1){\displaystyle (b_{1},b_{2},b_{3},b_{4})=(1,-1,1,-1)}and obtains:cos(13π32)=122−2+2+2sin(−π4)=122−2+2−2{\displaystyle \cos \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2+2\sin \left({\frac {-\pi }{4}}\right)}}}}}}={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}}sin(13π32)=122+2+2−2{\displaystyle \sin \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2+{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}}
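A quick numerical check of the last two expressions in Python (purely illustrative):

```python
# Verify the nested-radical expressions for cos(13*pi/32) and sin(13*pi/32).
import math

r = math.sqrt(2 - math.sqrt(2))                  # innermost radical sqrt(2 - sqrt(2))
cos_expr = 0.5 * math.sqrt(2 - math.sqrt(2 + r))
sin_expr = 0.5 * math.sqrt(2 + math.sqrt(2 + r))

print(abs(cos_expr - math.cos(13 * math.pi / 32)) < 1e-12)   # True
print(abs(sin_expr - math.sin(13 * math.pi / 32)) < 1e-12)   # True
```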
Since 17 is a Fermat prime, a regular17-gonis constructible, which means that the sines and cosines of angles such as2π/17{\displaystyle 2\pi /17}radians can be expressed in terms of square roots. In particular, in 1796,Carl Friedrich Gaussshowed that:[14][15]
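{\displaystyle 16\cos \left({\frac {2\pi }{17}}\right)=-1+{\sqrt {17}}+{\sqrt {34-2{\sqrt {17}}}}+2{\sqrt {17+3{\sqrt {17}}-{\sqrt {34-2{\sqrt {17}}}}-2{\sqrt {34+2{\sqrt {17}}}}}}}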
The sines and cosines of other constructible angles of the formk2nπ17{\displaystyle {\frac {k2^{n}\pi }{17}}}(for integersk,n{\displaystyle k,n}) can be derived from this one.
As discussed in§ Constructibility, only certain angles that are rational multiples ofπ{\displaystyle \pi }radians have trigonometric values that can be expressed with square roots. The angle 1°, beingπ/180=π/(22⋅32⋅5){\displaystyle \pi /180=\pi /(2^{2}\cdot 3^{2}\cdot 5)}radians, has a repeated factor of 3 in the denominator and thereforesin(1∘){\displaystyle \sin(1^{\circ })}cannot be expressed using only square roots. A related question is whether it can be expressed using cube roots. The following two approaches can be used, but both result in an expression that involves thecube root of a complex number.
Using the triple-angle identity, we can identifysin(1∘){\displaystyle \sin(1^{\circ })}as a root of a cubic polynomial:sin(3∘)=−4x3+3x{\displaystyle \sin(3^{\circ })=-4x^{3}+3x}, wherex=sin(1∘){\displaystyle x=\sin(1^{\circ })}. The three roots of this polynomial aresin(1∘){\displaystyle \sin(1^{\circ })},sin(59∘){\displaystyle \sin(59^{\circ })}, and−sin(61∘){\displaystyle -\sin(61^{\circ })}. Sincesin(3∘){\displaystyle \sin(3^{\circ })}is constructible, an expression for it could be plugged intoCardano's formulato yield an expression forsin(1∘){\displaystyle \sin(1^{\circ })}. However, since all three roots of the cubic are real, this is an instance ofcasus irreducibilis, and the expression would require taking the cube root of a complex number.[16][17]
Alternatively, byDe Moivre's formula:
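{\displaystyle (\cos(1^{\circ })+i\sin(1^{\circ }))^{3}=\cos(3^{\circ })+i\sin(3^{\circ }),\qquad (\cos(1^{\circ })-i\sin(1^{\circ }))^{3}=\cos(3^{\circ })-i\sin(3^{\circ })}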
Taking cube roots and adding or subtracting the equations, we have:[17]
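{\displaystyle \cos(1^{\circ })={\tfrac {1}{2}}\left({\sqrt[{3}]{\cos(3^{\circ })+i\sin(3^{\circ })}}+{\sqrt[{3}]{\cos(3^{\circ })-i\sin(3^{\circ })}}\right),\qquad \sin(1^{\circ })={\tfrac {1}{2i}}\left({\sqrt[{3}]{\cos(3^{\circ })+i\sin(3^{\circ })}}-{\sqrt[{3}]{\cos(3^{\circ })-i\sin(3^{\circ })}}\right)}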
|
https://en.wikipedia.org/wiki/Trigonometric_number
|
InWindows NToperating systems, aWindows serviceis acomputer programthatoperates in the background.[1]It is similar in concept to aUnixdaemon.[1]A Windows service must conform to the interface rules and protocols of theService Control Manager, the component responsible for managing Windows services. It is the Services and Controller app, services.exe, that launches all the services and manages their actions, such as start, end, etc.[2]
Windows services can be configured to start when the operating system is started and run in the background as long as Windows is running. Alternatively, they can be started manually or by an event. Windows NT operating systemsinclude numerous serviceswhich run in context of threeuser accounts: System, Network Service and Local Service. These Windows components are often associated withHost Process for Windows Services. Because Windows services operate in the context of their own dedicated user accounts, they can operate when a user is not logged on.
Prior toWindows Vista, services installed as an "interactive service" could interact with Windowsdesktopand show agraphical user interface. In Windows Vista, however, interactive services are deprecated and may not operate properly, as a result ofWindows Service hardening.[3][4]
Windows administrators can manage services via:
The Services snap-in, built uponMicrosoft Management Console, can connect to the local computer or a remote computer on the network, enabling users to start, stop, pause, resume and restart services, change their startup type (automatic, manual or disabled), and configure the account under which each service logs on.[1]
Thecommand-linetool to manage Windows services is sc.exe. It is available for all versions ofWindows NT.[7]This utility is included withWindows XP[8]and later[9]and also inReactOS.
Thesccommand's scope of management is restricted to the local computer. However, starting withWindows Server 2003, not only canscdo all that the Services snap-in does, but it can also install and uninstall services.[9]
Thesccommand duplicates some features of thenetcommand.[10]
The ReactOS version was developed by Ged Murphy and is licensed under theGPL.[11]
The following example enumerates the status for active services & drivers.[12]
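A typical command for this is sc query type= all; sc query with no arguments lists only active services, while the type= all argument (the sc syntax requires a space after the equals sign) includes drivers as well.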
The following example displays the status for theWindows Event logservice.[12]
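The service's internal name is eventlog, so the command is sc query eventlog.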
The Microsoft.PowerShell.Management PowerShell module (included with Windows) has several cmdlets which can be used to manage Windows services:
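These include Get-Service, New-Service, Restart-Service, Resume-Service, Set-Service, Start-Service, Stop-Service and Suspend-Service; later, cross-platform versions of PowerShell also add Remove-Service.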
Windows also includes components that can do a subset of what the snap-in, Sc.exe and PowerShell do. Thenetcommand can start, stop, pause or resume a Windows service.[21]In Windows Vista and later,Windows Task Managercan show a list of installed services and start or stop them.MSConfigcan enable or disable (see startup type description above) Windows services.
Windows services are installed and removed via *.INF setup scripts bySetupAPI; an installed service can be started immediately following its installation, and a running service can be stopped before its deinstallation.[22][23][24]
For a program to run as a Windows service, the program needs to be written to handle service start, stop, and pause messages from theService Control Manager(SCM) through theSystem Services API. SCM is the Windows component responsible for managing service processes.
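The following is a minimal sketch of such a program in Python, assuming the third-party pywin32 package; the service name and behaviour are illustrative only, and a production service would report richer status and handle pause and continue messages as well.

```python
# Minimal sketch of a Windows service built on the SCM interface via pywin32.
import win32event
import win32service
import win32serviceutil
import servicemanager

class HelloService(win32serviceutil.ServiceFramework):
    _svc_name_ = "HelloService"              # hypothetical internal service name
    _svc_display_name_ = "Hello Service"     # name shown in the Services snap-in

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # Event that the stop handler signals to end the main loop.
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        # Called when the SCM sends a stop request.
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # Called when the SCM starts the service; block until told to stop.
        servicemanager.LogInfoMsg("HelloService is running")
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

if __name__ == "__main__":
    # Handles install/start/stop/remove command-line verbs and SCM dispatch.
    win32serviceutil.HandleCommandLine(HelloService)
```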
TheWindows Resource KitforWindows NT 3.51,Windows NT 4.0andWindows 2000provides tools to control the use and registration of services:SrvAny.exeacts as aservice wrapperto handle the interface expected of a service (e.g. handle service_start and respond sometime later with service_started or service_failed) and allow any executable or script to be configured as a service.Sc.exeallows new services to be installed, started, stopped and uninstalled.[25]
|
https://en.wikipedia.org/wiki/Windows_service
|
Dataflow architectureis adataflow-basedcomputer architecturethat directly contrasts the traditionalvon Neumann architectureorcontrol flowarchitecture. In concept, dataflow architectures have noprogram counter: whether an instruction is executable, and when it executes, is determined solely by the availability of its input arguments,[1]so the order of instruction execution may be hard to predict.
Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as indigital signal processing,network routing,graphics processing,telemetry, and more recently in data warehousing, andartificial intelligence(as: polymorphic dataflow[2]Convolution Engine,[3]structure-driven,[4]dataflowscheduling[5]). It is also very relevant in many software architectures today includingdatabaseengine designs andparallel computingframeworks.[citation needed]
Synchronous dataflow architectures tune to match the workload presented by real-time data path applications such as wire speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processorload balancing, synchronization and accesses to common resources.[6]
Meanwhile, there is a clash of terminology, since the termdataflowis used for a subarea of parallel programming: fordataflow programming.
Hardware architectures for dataflow was a major topic incomputer architectureresearch in the 1970s and early 1980s.Jack DennisofMITpioneered the field of static dataflow architectures while the Manchester Dataflow Machine[7]and MIT Tagged Token architecture were major projects in dynamic dataflow.
The research, however, never overcame the problems related to:
Instructions and their data dependencies proved to be too fine-grained to be effectively distributed in a large network. That is, the time for the instructions and tagged results to travel through a large connection network was longer than the time to do many computations.
Maurice Wilkeswrote in 1995 that "Data flow stands apart as being the most radical of all approaches to parallelism and the one that has been least successful. ... If any practical machine based on data flow ideas and offering real power ever emerges, it will be very different from what the originators of the concept had in mind."[8]
Out-of-order execution(OOE) has become the dominant computing paradigm since the 1990s. It is a form of restricted dataflow. This paradigm introduced the idea of anexecution window. Theexecution windowfollows the sequential order of the von Neumann architecture, however within the window, instructions are allowed to be completed in data dependency order. This is accomplished in CPUs that dynamically tag the data dependencies of the code in the execution window. The logical complexity of dynamically keeping track of the data dependencies, restrictsOOECPUsto a small number of execution units (2-6) and limits the execution window sizes to the range of 32 to 200 instructions, much smaller than envisioned for full dataflow machines.[citation needed]
Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously because the simple tags could not differentiate between them.
Designs that usecontent-addressable memory(CAM) are called dynamic dataflow machines. They use tags in memory to facilitate parallelism.
Normally, in the control flow architecture,compilersanalyze programsource codefor data dependencies between instructions in order to better organize the instruction sequences in the binary output files. The instructions are organized sequentially but the dependency information itself is not recorded in the binaries. Binaries compiled for a dataflow machine contain this dependency information.
A dataflow compiler records these dependencies by creating unique tags for each dependency instead of using variable names. By giving each dependency a unique tag, it allows the non-dependent code segments in the binary to be executedout of orderand in parallel. The compiler also detects loops, break statements and other control constructs in the source so that they can be expressed in dataflow form.
Programs are loaded into the CAM of a dynamic dataflow computer. When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by anexecution unit.
This is known asactivatingorfiringthe instruction. Once an instruction is completed by an execution unit, its output data is sent (with its tag) to the CAM. Any instructions that are dependent upon this particular datum (identified by its tag value) are then marked as ready for execution. In this way, subsequent instructions are executed in proper order, avoidingrace conditions. This order may differ from the sequential order envisioned by the human programmer, the programmed order.
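As a purely conceptual illustration of this firing rule (not a model of any particular machine), the following Python sketch keeps a store of tagged data tokens and fires any instruction whose operands have all arrived, regardless of the order in which the instructions were written.

```python
# Toy simulation of dynamic dataflow firing: an instruction becomes ready as
# soon as all of its tagged operands exist, independent of program order.
from typing import Callable

# Each instruction: (output tag, operation, list of input tags).
program = [
    ("t3", lambda a, b: a + b, ["t1", "t2"]),   # t3 = t1 + t2
    ("t5", lambda a, b: a * b, ["t3", "t4"]),   # t5 = t3 * t4
    ("t4", lambda a: a - 1,    ["t1"]),         # t4 = t1 - 1  (written last, fires early)
]

tokens = {"t1": 6, "t2": 7}     # data tokens already produced (e.g. user input)
waiting = list(program)

while waiting:
    # An instruction "fires" when every one of its tagged operands is present.
    ready = [ins for ins in waiting if all(t in tokens for t in ins[2])]
    if not ready:
        raise RuntimeError("deadlock: no instruction can fire")
    for out_tag, op, in_tags in ready:
        tokens[out_tag] = op(*(tokens[t] for t in in_tags))   # send result token back
        waiting.remove((out_tag, op, in_tags))

print(tokens["t5"])   # 65 == (6 + 7) * (6 - 1)
```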
An instruction, along with its required data operands, is transmitted to an execution unit as a packet, also called aninstruction token. Similarly, output data is transmitted back to the CAM as adata token. The packetization of instructions and results allows for parallel execution of ready instructions on a large scale.
Dataflow networks deliver the instruction tokens to the execution units and return the data tokens to the CAM. In contrast to the conventionalvon Neumann architecture, data tokens are not permanently stored in memory, rather they are transient messages that only exist when in transit to the instruction storage.
|
https://en.wikipedia.org/wiki/Dataflow_architecture
|
TheUnited States 700 MHz FCC wirelessspectrum auction, officially known asAuction 73,[1]was started by theFederal Communications Commission(FCC) on January 24, 2008 for the rights to operate the 700 MHzradio frequencybandin theUnited States. The details of process were the subject of debate among severaltelecommunicationscompanies, includingVerizon Wireless,AT&T Mobility, as well as the Internet companyGoogle. Much of the debate swirled around the open access requirements set down by the Second Report and Order released by the FCC determining the process and rules for the auction. All bidding was required by law to commence by January 28.[2]
Full-powerTV stationswere forcedto transition to digital broadcastingin order to free 108 MHz ofradio spectrumfor newerwirelessservices. Most analog broadcasts ceased on June 12, 2009.
The 700 MHz spectrum was previously used for analogtelevision broadcasting, specificallyUHF channels 52 through 69. The FCC ruled that the 700 MHz spectrum would no longer be necessary for TV because of the improvedspectral efficiencyof digital broadcasts. Digital broadcasts allow TV channels to be broadcast onadjacent channelswithout having to leaveemptyTV channelsasguard bandsbetween them.[3]All broadcasters were required to move to the frequencies occupied by channels 2 through 51 as part of the digital TV transition.
A similar reallocation was employed in 1989 to expandanalog cellphone service, having previously eliminated TV channels 70-83 at the uppermost UHF frequencies. This created an unusual situation where old TV tuning equipment was able tolisten tocellularphone calls, although such activity was made illegal and the FCC prohibited the sale of future devices with that capability.
Some of the 700 MHzspectrum licenseswere already auctioned in Auctions 44 and 49. Paired channels 54/59 (lower-700 MHz block C) and unpaired channel 55 (block D) were sold and in some areas were already being used for broadcasting and Internet access. For example, QualcommMediaFLOin 2007 started using channel 55 for broadcastingmobile TVto cell phones in some markets.[4]Qualcomm later ended the service and sold (at a large profit) channel 55 nationwide to AT&T Mobility, along with channel 56 in theNortheast Corridorand much ofCalifornia.Dish Networkbought channel 56 (block E) licenses in the remainder of the nation'smedia markets, so far using it only for testingATSC-M/H. As of 2015[update], AT&T does not appear to be using block D or E (band class 29) yet, but plans to uselink aggregationfor increaseddownloadspeeds and capacity.[5]
For the 700-MHz auction, the FCC designed a new multi-round process that limited the number of package bids each bidder could submit (12 items and 12 package bids) and the prices at which they could be submitted, and that provided computationally intensive feedback prices to guide bidding.[6]This package bidding process (often referred to ascombinatorial auctions) was the first of its kind to be used by the FCC in an actual auction. Bidders were allowed to bid on individual licenses, or to place all-or-nothing bids on up to twelve packages of licenses, which the bidder could define at any point in the auction. Structuring the auction this way allowed bidders to avoid the exposure problem that arises when licenses are complements. The provisional winning bids are the set of consistent bids that maximize total revenues. The 700 MHz auction represented a good test case for package bidding for two reasons. First, the 700 MHz auction only involved 12 licenses: 2 bands (one 10 MHz and one 20 MHz) in each of the 6 regions.[7]Secondly, prospective bidders had expressed interest in alternative packaging because some Internet service providers had different needs and the flexibility would benefit them. The FCC issued Public Notice DA00-1486, which adopted and described the package bidding rules for the 700 MHz auction.
The FCC's original proposal allowed only nine package bids: the six 30 MHz regional bids and three nationwide bids (10, 20, or 30 MHz). Although these nine packages were consistent with the expressed desires of many prospective bidders, others felt that the nine packages were too restrictive. The activity rule was unchanged, aside from a new definition of activity and a lower activity requirement of 50%: a bidder had to be active on 50% of its current eligibility, or its eligibility in the next round would be reduced to two times its activity. Bids made in different rounds were treated as mutually exclusive, and a bidder wishing to add a license or package to its provisional winnings had to renew the provisional winning bids in the current round.
The FCC placed rules onpublic safetyfor the auction. 20 MHz of the valuable 700 MHz spectrum were set aside for the creation of a public/private partnership that will eventually roll out to a new nationwide broadband network tailored to the requirements of public safety. The FCC offered the commercial licensee extra spectrum adjacent to the public safety block that the licensee can use as it wants. The licensee is allowed to use whatever bandwidth that is available on the public safety side of the network to offer data services of their own.[8]
In an effort to encouragenetwork neutrality, groups such asPublic Knowledge,MoveOn.org,Media Access Project, along with individuals such asCraigslistfounderCraig Newmark, and Harvard Law professorLawrence Lessigappealed to theFederal Communications Commissionto make the newly freed airways open access to the public.[9]
Prior to the bidding process, Google asked that the spectrum be free to lease wholesale and the devices operating under the spectrum be open. At the time, many providers such as Verizon and AT&T used technological measures to block external applications. In return, Google guaranteed a minimum bid of $4.6 billion. Google's specific requests were the adoption of these policies:
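Broadly, these were four openness conditions: open applications (consumers may download and use any software application or content), open devices (consumers may use any compliant device on the network), open services (third parties may acquire wireless service on a wholesale basis), and open networks (third parties may interconnect with the network).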
The result of the auction was that Google was outbid by others, triggering the open platform restrictions it had asked for without having to actually purchase any licenses.[11]Google was actively involved in the bidding process although it had no intention of actually winning any licenses.[12]Its aim was to push the bidding up to the US$4.6B reserve price and thereby trigger the open platform restrictions listed above. Had Google not been actively involved in the bidding process, it would have made sense for businesses to suppress their bidding strategies in order to trigger a new auction without the restrictions imposed by Google and the FCC.[11]Google's upfront payment of $287 million to participate in the bidding process was largely recovered after the auction since it had not actually purchased any licenses. Despite this, Google ended up paying interest costs, resulting in an estimated loss of $13 million.[11]
The FCC ruled in favor of Google's requests.[13]Only two of the four requirements were put in place on the upper C-Block, open applications and open devices.[14]Google had wanted the purchaser to allow 'rental' of the blocks to different providers.
In retaliation, on September 13, 2007, Verizon filed a lawsuit against the Federal Communications Commission to remove the provisions Google had asked for. Verizon called the rules "arbitrary and capricious, unsupported by substantial evidence and otherwise contrary to law."[15][16][17][18]
On October 23, Verizon chose to drop the lawsuit after losing its appeal for a speedy resolution on October 3. However,CTIA - The Wireless Associationchallenged the same regulations in a lawsuit filed the same day.[19]On November 13, 2008, CTIA dropped its lawsuit against the FCC.[20]
The auction divided UHF spectrum into five blocks:[21]
The FCC placed very detailed rules about the process of this auction of the 698–806 MHz part of the wireless spectrum. Bids were anonymous and designed to promote competition. The aggregatereserve pricefor all block C licenses was approximately $4.6 billion.[22]The total reserve price for all five blocks being auctioned in Auction 73 was just over $10 billion.[22]
Auction 73 generally went as planned by telecommunications analysts. In total, Auction 73 raised $19.592 billion.[23]Verizon WirelessandAT&T Mobilitytogether accounted for $16.3 billion of the total revenue.[24]Of the 214 approved applicants, 101 successfully purchased at least one license. Despite their heavy involvement with the auction,Googledid not purchase any licenses. However, Google did place the minimum bid on Block C licenses in order to ensure that the license would be required to be open-access.[25][26][27]
The results for each of the five blocks:
After the end of Auction 73, there remained some licenses that either went unsold or were defaulted on by the winning bidder from Blocks A and B. A new auction, Auction 92, was held on July 19, 2011 to sell the 700 MHz band licenses that were still available. The auction closed on July 28, 2011, with 7 bidders having won 16 licenses worth $19.8 million.[30]
Six years after the end of the auction of 700 MHz spectrum, block A remained largely unused, althoughT-Mobile USAbegan to deploy its extended-range LTE in 2015 on licenses purchased from Verizon Wireless and cleared ofRF interferencein several areas by TV stations changing off of channel 51. This delay was caused by technical issues which wereregulatoryand possiblyanticompetitivein nature.
After the March 2008 conclusion of Auction 73, Motorola initiated steps to have3GPPestablish a new industry standard (later designated as band class 17) that would be limited to the lower 700 MHz B and C blocks. In proposing band class 17, Motorola cited the need to address concerns about high-power transmissions of TV stations still broadcasting on channel 51 and the lower-700 MHz D and E blocks. As envisioned and ultimately adopted, the band class 17 standard allowsLTEoperations in only the lower-700 MHz B and C blocks using a specific signaling protocol that would filter out all other frequencies. Although band class 17 operates on two of the three blocks common to band class 12, band class 17 devices use more narrowelectronic filters, which have the effect of permitting a smaller range of frequencies topass throughthe filter. In addition, band class 12 and 17signalingprotocolsare not compatible.[31]
The creation of two non-interoperable band classes has had numerous effects. Customers are unable to switch between a licensee deploying its service using band class 17 and a licensee that provides its service using band class 12 without purchasing a new device (even when the two operators use the same 2G and 3G technologies and bands), and band class 12 and 17 devices cannotroamon each other'scellular networks.[31]
When deploying its LTE network,C Spire Wirelessdecided not to use A block because of the lack of band-12 support inmobile devices, issues with roaming, and the increased cost ofbase stationsdue to lack of supply.[32]US Cellular deployed a band class 12 LTE network, however not all of US Cellular's devices were able to access it. In particular, theiPhone 5SandiPhone 5Ccould not.[33]Other wireless telecommunication providers launched LTE band class 12 networks, but have not been able to offersmartphonesthat access them, instead resorting tofixedormobilewireless broadband modems.[34]As of April 2015, only three telecom providers were offering smartphones that use band 12: US Cellular, T-Mobile USA, and Nex-Tech Wireless.
While smaller US telecommunication providers were upset at the lack of interoperability,AT&Tdefended the creation of band 17 and told the other carriers to seek interoperability withSprintandT-Mobileinstead.[35]However, in September 2013, AT&T changed its stance and committed to support and sell band-12 devices.[36]
Following AT&T's commitment the Federal Communications Commission ruled:[31]
Consistent with these commitments, AT&T anticipates that its focus and advocacy within the 3GPP standards setting process will shift to band-12-related projects and work streams. AT&T must place priority within the 3GPP RAN committee on the development of various band-12 carrier-aggregation scenarios. Upon completing implementation of the MFBI feature, AT&T anticipates that its focus on new standards related to the paired lower-700 MHz spectrum will be almost exclusively on band 12 configurations, features and capabilities.[31]
Additionally,Dish Networkagreed to lower its maximumeffective radiated powerlevels on block E, which is on the loweradjacent channelto the downlink (tower-to-user transmissions) for block A. It did this in exchange for the FCC allowing it to operate the block as a one-way service, effectively making it a broadcast, although it could still be interactive through other means. Since Dish has already been experimentally operating it as asingle-frequency network, this should not have a significant effect on whatever service it might offer in the future.
|
https://en.wikipedia.org/wiki/United_States_2008_wireless_spectrum_auction
|
Rugged individualism, derived fromindividualism, is a term that indicates that an individual is self-reliant and independent from outside (usually government or some other form of collective) assistance or support. While the term is often associated with the notion oflaissez-faireand associated adherents, it was actuallycoinedby United States presidentHerbert Hoover.[1][2]
American rugged individualism has its origins in the American frontier experience. Throughout its evolution, theAmerican frontierwas generally sparsely populated and had very little infrastructure in place. Under such conditions, individuals had to provide for themselves to survive. This kind of environment forced people to work in isolation from the larger community and may have altered attitudes at the frontier in favor of individualistic thought over collectivism.[3]
Through the mid-twentieth century, the concept was championed by Hoover's former Secretary of the Interior and long-time president ofStanford University,Ray Lyman Wilbur, who wrote: "It is common talk that every individual is entitled to economic security. The only animals and birds I know that have economic security are those that have been domesticated—and the economic security they have is controlled by the barbed-wire fence, the butcher's knife and the desire of others. They are milked, skinned, egged or eaten up by their protectors."[4]
Martin Luther King Jr.notably remarked on the term in his speech "The Other America" on March 10, 1968: "This country hassocialism for the rich, rugged individualism for the poor."[5]Bernie Sandersreferenced King's quote in a 2019 speech.[6]
The ideal of rugged individualism continues to be a part of American thought. In 2016, a poll byPew Researchfound that 57% of Americans did not believe that success in life was determined by forces outside of their control. Additionally, the same poll found that 58% of Americans valued a non-interventionist government over one that actively worked to further the needs of society.[7]
Academics interviewed in the 2020 bookRugged Individualism and the Misunderstanding of American Inequality, co-written byNoam Chomsky, largely found that the continued belief in this brand of individualism is a strong factor in American policies surrounding social spending and welfare. Americans who more strongly believe in the values espoused by rugged individualism tend to view those who seek government assistance as being responsible for their[who?]position, leading to decreased support for welfare programs and increased support for stricter criteria for receiving government help.[8]The influence of American individualistic thought extends to government regulation as well. Areas of the country which were part of the American frontier for longer, and were therefore more influenced by the frontier experience, were found to be more likely to be supportive ofRepublicancandidates, who often vote against regulations such asgun control, minimum wage increases, and environmental regulation.[3]
A 2021 research article posits that American “rugged individualism” hampered social distancing and mask use during theCOVID-19 pandemic.[9]
|
https://en.wikipedia.org/wiki/Rugged_individualism
|
hCardis amicroformatfor publishing the contact details (which might be no more than the name) of people, companies, organizations, and places, inHTML,Atom,RSS, or arbitraryXML.[1]The hCard microformat does this using a 1:1 representation ofvCard(RFC 2426) properties and values, identified using HTML classes andrelattributes.
It allows parsing tools (for example other websites, orFirefox'sOperator extension) to extract the details, and display them, using some other websites ormappingtools, index or search them, or to load them into an address-book program.
In May 2009,Googleannounced that they would be parsing the hCard andhReviewandhProductmicroformats, and using them to populate search-result pages.[2]In September 2010Googleannounced their intention to surface hCard,hReviewinformation in their local search results.[3]In February 2011,Facebookbegan using hCard to mark up event venues.[4]
Consider the HTML:
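(The details below are placeholders: example.com is the reserved example domain and the phone number is fictitious.)

```html
<div>
  <div>Joe Bloggs (JB)</div>
  <div>The Example Company</div>
  <div>+1-555-0123</div>
  <a href="http://example.com/">example.com</a>
</div>
```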
With microformat markup, that becomes:
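```html
<div class="vcard">
  <div><span class="fn">Joe Bloggs</span> (<span class="nickname">JB</span>)</div>
  <div class="org">The Example Company</div>
  <div class="tel">+1-555-0123</div>
  <a class="url" href="http://example.com/">example.com</a>
</div>
```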
A profile may optionally be included in the page header:
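One profile URI that has been used for hCard (shown here as an illustrative choice) is:

```html
<head profile="http://www.w3.org/2006/03/hcard">
```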
Here the propertiesfn,[5]nickname,org(organization),tel(telephone number) andurl(web address) have been identified using specific class names; and the whole thing is wrapped inclass="vcard"which indicates that the other classes form an hcard, and are not just coincidentally named. If the hCard is for an organization or venue, thefnandorgclasses are used on the same element, as in<span class="fn org">Wikipedia</span>or<span class="fn org">Wembley Stadium</span>. Other, optional hCard classes also exist.
It is now possible for software, for example browser plug-ins, to extract the information, and transfer it to other applications, such as an address book.
TheGeo microformatis a part of the hCard specification, and is often used to include the coordinates of a location within an hCard.
Theadrpart of hCard can also be used as a stand-alone microformat.
Here are theWikimedia Foundation's contact details as of February 2023[update], as a live hCard:
The mark-up (wrapped for clarity) used is:
In this example, thefnandorgproperties are combined on one element, indicating that this is the hCard for an organization, not a person.
Other commonly used hCard properties include, for example, adr (address), email, bday (birth date), photo, title, role and geo (geographical coordinates).
|
https://en.wikipedia.org/wiki/HCard
|
This is a list of theshellcommandsof the most recent version of the Portable Operating System Interface (POSIX) –IEEEStd 1003.1-2024 which is part of theSingle UNIX Specification(SUS). These commands are implemented in many shells on modernUnix,Unix-likeand otheroperating systems. This list does not cover commands for all versions of Unix and Unix-like shells nor other versions of POSIX.
|
https://en.wikipedia.org/wiki/List_of_Unix_commands
|
Irony, in its broadest sense, is thejuxtapositionof what appears to be the case on the surface and what is actually the case or to be expected. It typically figures as arhetorical deviceandliterary technique. In some philosophical contexts, however, it takes on a larger significance as an entire way of life.
Irony has been defined in many different ways, and there is no general agreement about the best way to organize its various types. Because its definition is so nebulous and difficult to pin down, some English speakers contend, the wordironyhas been subjected to frequent abuse.
'Irony' comes from the Greekeironeia(εἰρωνεία) and dates back to the 5th century BCE. This term itself was coined in reference to a stock-character fromOld Comedy(such as that ofAristophanes) known as theeiron, who dissimulates and affects less intelligence than he has—and so ultimately triumphs over his opposite, thealazon, a vain-glorious braggart.[1][2][3]
Although initially synonymous with lying, inPlato's dialogueseironeiacame to acquire a new sense of "an intended simulation which the audience or hearer was meant to recognise".[4]More simply put, it came to acquire the general definition, "the expression of one's meaning by using language that normally signifies the opposite, typically for humorous or emphatic effect".[5]
Until theRenaissance, the Latinironiawas considered a part of rhetoric, usually a species ofallegory, along the lines established byCiceroandQuintiliannear the beginning of the 1st century CE.[6]"Irony" entered the English language as afigure of speechin the 16th century with a meaning similar to the Frenchironie, itself derived from the Latin.[7]
Around the end of the 18th century, "irony" takes on another sense, primarily credited toFriedrich Schlegeland other participants in what came to be known asearly German Romanticism. They advance a concept of irony that is not a mere "artistic playfulness", but a "conscious form of literary creation", typically involving the "consistent alternation of affirmation and negation".[8]No longer just a rhetorical device, on their conception, it refers to an entire metaphysical stance on the world.[9]
It is commonplace to begin a study of irony with the acknowledgement that the term quite simply eludes any single definition.[10][11][12]PhilosopherRichard J. Bernsteinopens hisIronic Lifewith the observation that a survey of the literature on irony leaves the reader with the "dominantimpression" that the authors are simply "talking about different subjects".[13]Indeed,Geoffrey Nunberg, alexical semantician, observes a trend ofsarcasmreplacing the linguistic role of verbal irony as a result of all this confusion.[14]
In the 1906The King's English,Henry Watson Fowlerwrites, "any definition of irony—though hundreds might be given, and very few of them would be accepted—must include this, that the surface meaning and the underlying meaning of what is said are not the same." A consequence of this, he observes, is that an analysis of irony requires the concept of adouble audience"consisting of one party that hearing shall hear & shall not understand, & another party that, when more is meant than meets the ear, is aware both of that more & of the outsiders' incomprehension".[15]
From this basic feature, literary theorist Douglas C. Muecke identifies three basic characteristics of all irony:
According toWayne Booth, this uneven double-character of irony makes it a rhetorically complex phenomenon. Admired by some and feared by others, it has the power to tighten social bonds, but also to exacerbate divisions.[18]
How best to organize irony into distinct types is almost as controversial as how best to define it. There have been many proposals, generally relying on the same cluster of types; still, there is little agreement as to how to organize the types and what if any hierarchical arrangements might exist. Nevertheless, academic reference volumes standardly include at least all four ofverbal irony,dramatic irony,cosmic irony, andRomantic ironyas major types.[19][20][21][22]The latter three types are sometimes contrasted with verbal irony as forms ofsituational irony, that is, irony in which there is no ironist; so, instead of "he is being ironical" one would instead say "it is ironical that".[23][9]
Verbal ironyis "a statement in which the meaning that a speaker employs is sharply different from the meaning that is ostensibly expressed".[1]Moreover, it is producedintentionallyby the speaker, rather than being a literary construct, for instance, or the result of forces outside of their control.[19]Samuel Johnsongives as an example the sentence, "Bolingbrokewas a holy man" (he was anything but).[24][25]Verbal irony is sometimes also considered to encompass various other literary devices such ashyperboleand its opposite,litotes, conscious naïveté, and others.[26][27]
Dramatic ironyprovides the audience with information of which characters are unaware, thereby placing the audience in a position of advantage to recognize their words and actions as counter-productive or opposed to what their situation actually requires.[28]Three stages may be distinguished — installation, exploitation, and resolution (often also called preparation, suspension, and resolution) — producing dramatic conflict in what one character relies or appears to rely upon, thecontraryof which is known by observers (especially the audience, sometimes to other characters within the drama) to be true.[29]Tragic ironyis a specific type of dramatic irony.[30]
Cosmic irony, sometimes also called "the irony of fate", presents agents as always ultimately thwarted by forces beyond human control. It is strongly associated with the works ofThomas Hardy.[28][30]This form of irony is also given metaphysical significance in the work ofSøren Kierkegaard, among other philosophers.[8]
Romantic ironyis closely related to cosmic irony, and sometimes the two terms are treated interchangeably.[9]Romantic irony is distinct, however, in that it is the author who assumes the role of the cosmic force. The narrator inTristram Shandyis one early example.[31]The term is closely associated withFriedrich Schlegeland theearly German Romantics, and in their hands it assumed a metaphysical significance similar to cosmic irony in the hands of Kierkegaard.[9]It was also of central importance to the literary theory advanced byNew Criticismin mid-20th century.[31][27]
Building upon the double-level structure of irony, self-described "ironologist" D. C. Muecke proposes another, complementary way in which we may typify, and so better understand, ironic phenomena. What he proposes is a dual distinction between and among threegradesand fourmodesof ironic utterance.
Grades of irony are distinguished "according to the degree to which the real meaning is concealed". Muecke names themovert,covert, andprivate:[32]
Muecke's typology of modes are distinguished "according to the kind of relationship between the ironist and the irony". He calls theseimpersonal irony,self-disparaging irony,ingénue irony, anddramatized irony:[32]
To consider irony from a rhetorical perspective means to consider it as an act of communication.[40]InA Rhetoric of Irony,Wayne C. Boothseeks to answer the question of "how we manage to share ironies and why we so often do not".[18]
Because irony involves expressing something in a way contrary to literal meaning, it always involves a kind of "translation" on the part of the audience.[41]Booth identifies three principal kinds of agreement upon which the successful translation of irony depends: common mastery of language, shared cultural values, and (for artistic ironies) a common experience of genre.[42]
A consequence of this element of in-group membership is that there is more at stake in whether one grasps an ironic utterance than there is in whether one grasps an utterance presented straight. As he puts it, the use of irony is
An aggressively intellectual exercise that fuses fact and value, requiring us to construct alternative hierarchies and choose among them; [it] demands that we look down on other men's follies or sins; floods us with emotion-charged value judgments which claim to be backed by the mind; accuses other men not only of wrong beliefs but of being wrong at their very foundations and blind to what these foundations imply[.][43]
This is why, when we misunderstand an intended ironic utterance, we often feel more embarrassed about our failure to recognize the incongruity than we typically do when we simply misunderstand a statement of fact.[44]When one's deepest beliefs are at issue, so too, often, is one's pride.[43]Nevertheless, even as it excludes its victims, irony also has the power to build and strengthen the community of those who do understand and appreciate.[45]
Typically "irony" is used, as described above, with respect to some specific act or situation. In more philosophical contexts, however, the term is sometimes assigned a more general significance, in which it is used to describe an entire way of life or a universal truth about the human situation. Even Booth, whose interest is expressly rhetorical, notes that the word "irony" tends to attach to "a type of character — Aristophanes' foxyeirons, Plato's disconcerting Socrates — rather than to any one device".[46]In these contexts, what is expressed rhetorically by cosmic irony is ascribed existential or metaphysical significance. As Muecke puts it, such irony is that of "life itself or any general aspect of life seen as fundamentally and inescapably an ironic state of affairs. No longer is it a case of isolated victims .... we are all victims of impossible situations".[47][48]
This usage has its origins primarily in the work ofFriedrich Schlegeland otherearly 19th-century German Romanticsand inSøren Kierkegaard's analysis ofSocratesinThe Concept of Irony.[49][48]
Friedrich Schlegel was at the forefront of the intellectual movement that has come to be known asFrühromantik, or early German Romanticism, situated narrowly between 1797 and 1801.[50]For Schlegel, the "romantic imperative" (a rejoinder toImmanuel Kant's "categorical imperative") is to break down the distinction between art and life with the creation of a "new mythology" for the modern age.[51]In particular, Schlegel was responding to what he took to be the failure of thefoundationalistenterprise, exemplified for him by the philosophy ofJohann Gottlieb Fichte.[52]
Irony is a response to the apparent epistemic uncertainties of anti-foundationalism. In the words of scholarFrederick C. Beiser, Schlegel presents irony as consisting in "the recognition that, even though we cannot attain truth, we still must forever strive toward it, because only then do we approach it." His model is Socrates, who "knew that he knew nothing", yet never ceased in his pursuit of truth and virtue.[53][54]According to Schlegel, instead of resting upon a single foundation, "the individual parts of a successful synthesis formation support and negate each other reciprocally".[55]
Although Schlegel frequently does describe the Romantic project with a literary vocabulary, his use of the term "poetry" (Poesie) is non-standard. Instead, he goes back to the broader sense of the original Greekpoiētikós, which refers to any kind of making.[56]As Beiser puts it, "Schlegel intentionally explodes the narrow literary meaning ofPoesieby explicitly identifying the poetic with thecreative powerin human beings, and indeed with theproductive principlein nature itself." Poetry in the restricted literary sense is its highest form, but in no way its only form.[57]
According to Schlegel, irony captures the human situation of always striving towards, but never completely possessing, what is infinite or true.[58]
This presentation of Schlegel's account of irony is at odds with many 20th-century interpretations, which, neglecting the larger historical context, have been predominatelypostmodern.[59][60]These readings overstate the irrational dimension of early Romantic thought at the expense of its rational commitments—precisely the dilemma irony is introduced to resolve.[61]
Already in Schlegel's own day,G. W. F. Hegelwas unfavorably contrasting Romantic irony with that of Socrates. On Hegel's reading, Socratic irony partially anticipates his owndialecticalapproach to philosophy. Romantic irony, by contrast, Hegel alleges to be fundamentally trivializing and opposed to all seriousness about what is of substantial interest.[62]According toRüdiger Bubner, however, Hegel's "misunderstanding" of Schlegel's concept of irony is "total" in its denunciation of a figure actually intended to preserve "our openness to a systematic philosophy".[63]
Yet, it is Hegel's interpretation that would be taken up and amplified byKierkegaard, who further extends the critique to Socrates himself.[64]
Thesis VIII of the Danish philosopher Søren Kierkegaard's dissertation,The Concept of Irony with Continual Reference to Socrates, states that "irony as infinite and absolute negativity is the lightest and the weakest form of subjectivity".[65]Although this terminology is Hegelian in origin, Kierkegaard employs it with a somewhat different meaning. Richard J. Bernstein elaborates:
It isinfinitebecause it is directed not against this or that particular existing entity, but against the entire given actuality at a certain time. It is thoroughlynegativebecause it is incapable of offering any positive alternative. Nothing positive emerges out of this negativity. And it isabsolutebecause Socrates refuses to cheat.[66]
In this way, contrary to traditional accounts, Kierkegaard portrays Socrates asgenuinelyignorant. According to Kierkegaard, Socrates is the embodiment of an ironic negativity that dismantles others' illusory knowledge without offering any positive replacement.[67]
Almost all of Kierkegaard's post-dissertation publications were written under a variety of pseudonyms. Scholar K. Brian Söderquist argues that these fictive authors should be viewed as explorations of the existential challenges posed by such an ironic, poetic self-consciousness. Their awareness of their own unlimited powers of self-interpretation prevents them from fully committing to any single self-narrative, and this leaves them trapped in an entirely negative mode of uncertainty.[68]
Nevertheless, seemingly against this, Thesis XV of the dissertation states that "Just as philosophy begins with doubt, so also a life that may be called human begins with irony".[65]Bernstein writes that the emphasis here must be onbegins.[66]Irony is not itself an authentic mode of life, but it is a precondition for attaining such a life. Although pure irony is self-destructive, it generates a space in which it becomes possible to reengage with the world in a genuine mode ofethical passion.[69]For Kierkegaard himself, this took the form of religious inwardness. What is crucial, however, is just to in some way move beyond the purely (or merely) ironic. Irony is what creates the space in which we can learn and meaningfully choose how to live a life worthy (vita digna[70]) of being called human.[71][72]
Referring to earlier self-conscious works such asDon QuixoteandTristram Shandy, D. C. Muecke points particularly toPeter Weiss's 1964 play,Marat/Sade. This work is a play within a play set in a lunatic asylum, in which it is difficult to tell whether the players are speaking only to other players or also directly to the audience. When The Herald says, "The regrettable incident you've just seen was unavoidable indeed foreseen by our playwright", there is confusion as to who is being addressed, the "audience" on the stage or the audience in the theatre. Also, since the play within the play is performed by the inmates of a lunatic asylum, the theatre audience cannot tell whether the paranoia displayed before them is that of the players, or the people they are portraying. Muecke notes that, "in America, Romantic irony has had a bad press", while "in England...[it] is almost unknown."[73]
In a book entitledEnglish Romantic Irony, Anne Mellor writes, referring toByron,Keats,Carlyle,Coleridge, andLewis Carroll:[74]
Romantic irony is both a philosophical conception of the universe and an artistic program. Ontologically, it sees the world as fundamentally chaotic. No order, no far goal of time, ordained by God or right reason, determines the progression of human or natural events […] Of course, romantic irony itself has more than one mode. The style of romantic irony varies from writer to writer […] But however distinctive the voice, a writer is a romantic ironist if and when his or her work commits itself enthusiastically both in content and form to a hovering or unresolved debate between a world of merely man-made being and a world of ontological becoming.
Similarly, metafiction is: "Fiction in which the author self-consciously alludes to the artificiality or literariness of a work by parodying or departing from novelistic conventions (esp. naturalism) and narrative techniques."[75] It is a type of fiction that self-consciously addresses the devices of fiction, thereby exposing the fictional illusion.
Gesa Giesing writes that "the most common form of metafiction is particularly frequent in Romantic literature. The phenomenon is then referred to as Romantic Irony." Giesing notes that "There has obviously been an increased interest in metafiction again after World War II."[76]
For example, Patricia Waugh quotes from several works at the top of her chapter headed "What is metafiction?". These include:
The thing is this. That of all the several ways of beginning a book […] I am confident my own way of doing it is best
Since I've started this story, I've gotten boils […]
Additionally, The Cambridge Introduction to Postmodern Fiction says of John Fowles's The French Lieutenant's Woman: "For the first twelve chapters ... the reader has been able to immerse him or herself in the story, enjoying the kind of 'suspension of disbelief' required of realist novels ... what follows is a remarkable act of metafictional 'frame-breaking'". As evidence, chapter 13 "notoriously" begins: "I do not know. This story I am telling is all imagination. These characters I create never existed outside my own mind. […] if this is a novel, it cannot be a novel in the modern sense".[78]
A fair amount of confusion has surrounded the issue of the relationship between verbal irony and sarcasm. For instance, various reference sources assert the following:
The psychologist Rod A. Martin, in The Psychology of Humour (2007), is quite clear that irony is where "the literal meaning is opposite to the intended" and sarcasm is "aggressive humor that pokes fun".[84] He gives the following examples: for irony, the statement "What a nice day" when it is raining; for sarcasm, he cites Winston Churchill, who is supposed to have said, when told by Bessie Braddock that he was drunk, "But I shall be sober in the morning, and you will still be ugly", a remark that is sarcastic while not saying the opposite of what is intended.
Psychology researchers Lee and Katz have addressed the issue directly. They found that ridicule is an important aspect of sarcasm, but not of verbal irony in general. By this account, sarcasm is a particular kind of personal criticism levelled against a person or group of persons that incorporates verbal irony. For example, a woman reports to her friend that rather than going to a medical doctor to treat her cancer, she has decided to see a spiritual healer instead. In response her friend says sarcastically, "Oh, brilliant, what an ingenious idea, that's really going to cure you." The friend could have also replied with any number of ironic expressions that should not be labeled as sarcasm exactly, but still have many shared elements with sarcasm.[85]
Most instances of verbal irony are labeled by research subjects as sarcastic, suggesting that the term sarcasm is more widely used than its technical definition suggests it should be.[86] Some psycholinguistic theorists[87] suggest that sarcasm, hyperbole, understatement, rhetorical questions, double entendre, and jocularity should all be considered forms of verbal irony. The differences between these rhetorical devices (tropes) can be quite subtle and relate to the typical emotional reactions of listeners and the goals of the speakers. Regardless of the various ways theorists categorize figurative language types, people in conversation who are attempting to interpret speaker intentions and discourse goals do not generally identify the kinds of tropes used.[88]
The more general casual usage of irony, meaning "a contradiction between circumstance and expectation", originated in the 1640s.[89][example needed] It has always been applicable in situations where there is no double audience, something required of only certain types of irony.
Some speakers of English complain that the words irony and ironic are often misused;[90] the term is sometimes used as a synonym for incongruous and applied to "every trivial oddity".[91] Alanis Morissette's 1996 song "Ironic" is an often-cited example. Meanwhile, Tim Conley cites the following:[92]
Philip Howard assembled a list of seven implied meanings for the word "ironically", as it opens a sentence:
|
https://en.wikipedia.org/wiki/Irony
|
In mathematics, more specifically topology, a local homeomorphism is a function between topological spaces that, intuitively, preserves local (though not necessarily global) structure.
If f : X → Y is a local homeomorphism, X is said to be an étale space over Y. Local homeomorphisms are used in the study of sheaves. Typical examples of local homeomorphisms are covering maps.
A topological space X is locally homeomorphic to Y if every point of X has a neighborhood that is homeomorphic to an open subset of Y. For example, a manifold of dimension n is locally homeomorphic to ℝⁿ.
If there is a local homeomorphism from X to Y, then X is locally homeomorphic to Y, but the converse is not always true.
For example, the two-dimensional sphere, being a manifold, is locally homeomorphic to the plane ℝ², but there is no local homeomorphism S² → ℝ².
A function f : X → Y between two topological spaces is called a local homeomorphism[1] if every point x ∈ X has an open neighborhood U whose image f(U) is open in Y and the restriction f|U : U → f(U) is a homeomorphism (where the respective subspace topologies are used on U and on f(U)).
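As a compact restatement of the definition just given (notation only, nothing beyond the text above):

```latex
f \colon X \to Y \text{ is a local homeomorphism}
\iff
\forall x \in X \;\; \exists\, U \ni x \text{ open}:\;
f(U) \text{ open in } Y \ \text{ and }\ f\big|_{U} \colon U \to f(U) \text{ is a homeomorphism.}
```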
Local homeomorphisms versus homeomorphisms
Every homeomorphism is a local homeomorphism. But a local homeomorphism is a homeomorphism if and only if it is bijective.
A local homeomorphism need not be a homeomorphism. For example, the function ℝ → S¹ defined by t ↦ e^{it} (so that geometrically, this map wraps the real line around the circle) is a local homeomorphism but not a homeomorphism.
The map f : S¹ → S¹ defined by f(z) = zⁿ, which wraps the circle around itself n times (that is, has winding number n), is a local homeomorphism for all non-zero n, but it is a homeomorphism only when it is bijective (that is, only when n = 1 or n = −1).
Generalizing the previous two examples, every covering map is a local homeomorphism; in particular, the universal cover p : C → Y of a space Y is a local homeomorphism.
In certain situations the converse is true. For example: if p : X → Y is a proper local homeomorphism between two Hausdorff spaces and if Y is also locally compact, then p is a covering map.
Local homeomorphisms and composition of functions
The composition of two local homeomorphisms is a local homeomorphism; explicitly, if f : X → Y and g : Y → Z are local homeomorphisms, then the composition g ∘ f : X → Z is also a local homeomorphism.
The restriction of a local homeomorphism to any open subset of the domain is again a local homeomorphism; explicitly, if f : X → Y is a local homeomorphism, then its restriction f|U : U → Y to any open subset U of X is also a local homeomorphism.
If f : X → Y is continuous while both g : Y → Z and g ∘ f : X → Z are local homeomorphisms, then f is also a local homeomorphism.
Inclusion maps
If U ⊆ X is any subspace (where, as usual, U is equipped with the subspace topology induced by X), then the inclusion map i : U → X is always a topological embedding. But it is a local homeomorphism if and only if U is open in X. The subset U being open in X is essential: the inclusion map of a non-open subset of X never yields a local homeomorphism (since it will not be an open map).
The restriction f|U : U → Y of a function f : X → Y to a subset U ⊆ X is equal to its composition with the inclusion map i : U → X; explicitly, f|U = f ∘ i. Since the composition of two local homeomorphisms is a local homeomorphism, if f : X → Y and i : U → X are local homeomorphisms then so is f|U = f ∘ i. Thus restrictions of local homeomorphisms to open subsets are local homeomorphisms.
Invariance of domain
Invariance of domain guarantees that if f : U → ℝⁿ is a continuous injective map from an open subset U of ℝⁿ, then f(U) is open in ℝⁿ and f : U → f(U) is a homeomorphism.
Consequently, a continuous map f : U → ℝⁿ from an open subset U ⊆ ℝⁿ is a local homeomorphism if and only if it is a locally injective map (meaning that every point in U has a neighborhood N such that the restriction of f to N is injective).
Local homeomorphisms in analysis
It is shown in complex analysis that a complex analytic function f : U → ℂ (where U is an open subset of the complex plane ℂ) is a local homeomorphism precisely when the derivative f′(z) is non-zero for all z ∈ U. The function f(z) = zⁿ on an open disk around 0 is not a local homeomorphism at 0 when n ≥ 2. In that case 0 is a point of "ramification" (intuitively, n sheets come together there).
Using the inverse function theorem one can show that a continuously differentiable function f : U → ℝⁿ (where U is an open subset of ℝⁿ) is a local homeomorphism if the derivative Dₓf is an invertible linear map (invertible square matrix) for every x ∈ U. (The converse is false, as shown by the local homeomorphism f : ℝ → ℝ with f(x) = x³.)
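In symbols, the sufficient condition from the inverse function theorem and the standard counterexample to its converse read as follows (a restatement of the preceding paragraph, not an additional result):

```latex
\det\!\big(D_x f\big) \neq 0 \ \text{ for all } x \in U
\;\Longrightarrow\; f \text{ is a local homeomorphism on } U,
\qquad
\text{yet } f(x) = x^{3} \text{ on } \mathbb{R}
\text{ is a homeomorphism with } f'(0) = 0 .
```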
An analogous condition can be formulated for maps between differentiable manifolds.
Local homeomorphisms and fibers
Suppose f : X → Y is a continuous open surjection between two Hausdorff second-countable spaces where X is a Baire space and Y is a normal space. If every fiber of f is a discrete subspace of X (which is a necessary condition for f : X → Y to be a local homeomorphism), then f is a Y-valued local homeomorphism on a dense open subset of X. To clarify this statement's conclusion, let O = O_f be the (unique) largest open subset of X such that f|O : O → Y is a local homeomorphism.[note 1] If every fiber of f is a discrete subspace of X, then this open set O is necessarily a dense subset of X. In particular, if X ≠ ∅ then O ≠ ∅; a conclusion that may be false without the assumption that f's fibers are discrete (see this footnote[note 2] for an example).
One corollary is that every continuous open surjection f between completely metrizable second-countable spaces that has discrete fibers is "almost everywhere" a local homeomorphism (in the topological sense that O_f is a dense open subset of its domain).
For example, the map f : ℝ → [0, ∞) defined by the polynomial f(x) = x² is a continuous open surjection with discrete fibers, so this result guarantees that the maximal open subset O_f is dense in ℝ; with additional effort (using the inverse function theorem, for instance), it can be shown that O_f = ℝ ∖ {0}, which confirms that this set is indeed dense in ℝ. This example also shows that it is possible for O_f to be a proper dense subset of f's domain.
Because every fiber of every non-constant polynomial is finite (and thus a discrete, and even compact, subspace), this example generalizes to such polynomials whenever the mapping induced by them is an open map.[note 3]
Local homeomorphisms and Hausdorffness
There exist local homeomorphisms f : X → Y where Y is a Hausdorff space but X is not.
Consider for instance the quotient space X = (ℝ ⊔ ℝ)/∼, where the equivalence relation ∼ on the disjoint union of two copies of the reals identifies every negative real of the first copy with the corresponding negative real of the second copy.
The two copies of 0 are not identified and they do not have any disjoint neighborhoods, so X is not Hausdorff. One readily checks that the natural map f : X → ℝ is a local homeomorphism.
The fiber f⁻¹({y}) has two elements if y ≥ 0 and one element if y < 0. Similarly, it is possible to construct a local homeomorphism f : X → Y where X is Hausdorff and Y is not: take the natural map from X = ℝ ⊔ ℝ to Y = (ℝ ⊔ ℝ)/∼ with the same equivalence relation ∼ as above.
A map is a local homeomorphism if and only if it is continuous, open, and locally injective. In particular, every local homeomorphism is a continuous and open map. A bijective local homeomorphism is therefore a homeomorphism.
Whether or not a function f : X → Y is a local homeomorphism depends on its codomain. The image f(X) of a local homeomorphism f : X → Y is necessarily an open subset of its codomain Y, and f : X → f(X) will also be a local homeomorphism (that is, f will continue to be a local homeomorphism when it is considered as the surjective map f : X → f(X) onto its image, where f(X) has the subspace topology inherited from Y). However, in general it is possible for f : X → f(X) to be a local homeomorphism but f : X → Y to not be a local homeomorphism (as is the case with the map f : ℝ → ℝ² defined by f(x) = (x, 0), for example). A map f : X → Y is a local homeomorphism if and only if f : X → f(X) is a local homeomorphism and f(X) is an open subset of Y.
Every fiber of a local homeomorphism f : X → Y is a discrete subspace of its domain X.
A local homeomorphism f : X → Y transfers "local" topological properties in both directions:
As pointed out above, the Hausdorff property is not local in this sense and need not be preserved by local homeomorphisms.
The local homeomorphisms with codomain Y stand in a natural one-to-one correspondence with the sheaves of sets on Y; this correspondence is in fact an equivalence of categories. Furthermore, every continuous map with codomain Y gives rise to a uniquely defined local homeomorphism with codomain Y in a natural way. All of this is explained in detail in the article on sheaves.
The idea of a local homeomorphism can be formulated in geometric settings different from that of topological spaces.
For differentiable manifolds, we obtain the local diffeomorphisms; for schemes, we have the formally étale morphisms and the étale morphisms; and for toposes, we get the étale geometric morphisms.
|
https://en.wikipedia.org/wiki/Local_homeomorphism
|
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding[1][2][3] is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC).[4][5] The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth.
The American mathematicianRichard Hammingpioneered this field in the 1940s and invented the first error-correcting code in 1950: theHamming (7,4) code.[5]
FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers inmulticast.
Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used inmodemsand incellular networks.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initialanalog-to-digital conversionin the receiver. TheViterbi decoderimplements asoft-decision algorithmto demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate abit-error rate(BER) signal which can be used as feedback to fine-tune the analog receiving electronics.
FEC information is added tomass storage(magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used asECCcomputer memoryon systems that require special provisions for reliability.
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes[3] come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.
ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.
A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see any of eight possible versions of each transmitted triplet.
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is limited: it can correct a single flipped bit per triplet, but it decodes incorrectly whenever two or all three copies of a bit are flipped.
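A minimal sketch of this (3,1) repetition scheme in Python (illustrative only; the helper names are not from any standard library):

```python
def encode_repetition(bits):
    """(3,1) repetition code: send each data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(received):
    """Majority vote over each triplet; corrects one flipped bit per triplet."""
    decoded = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)
    return decoded

codeword = encode_repetition([1, 0, 1])      # -> [1,1,1, 0,0,0, 1,1,1]
codeword[4] ^= 1                             # one noise flip in the second triplet
assert decode_repetition(codeword) == [1, 0, 1]
```

With two flips inside the same triplet, the vote goes the wrong way, which is exactly the limit stated above.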
Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits).
ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
Most telecommunication systems use a fixedchannel codedesigned to tolerate the expected worst-casebit error rate, and then fail to work at all if the bit error rate is ever worse.
However, some systems adapt to the given channel error conditions: some instances ofhybrid automatic repeat-requestuse a fixed ECC method as long as the ECC can handle the error rate, then switch toARQwhen the error rate gets too high;adaptive modulation and codinguses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.
The two main categories of ECC codes areblock codesandconvolutional codes.
There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes.
Hamming ECC is commonly used to correct NAND flash memory errors.[6] This provides single-bit error correction and 2-bit error detection.
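As a rough illustration, here is the plain Hamming(7,4) code mentioned earlier, in systematic form. It provides single-error correction only; the two-bit error detection cited above requires an extended variant with an extra overall parity bit, and real flash controllers may use different (though equivalent) matrices:

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I_4 | P], H = [P^T | I_3].
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(msg4):
    """Encode 4 message bits into a 7-bit codeword."""
    return (np.array(msg4) @ G) % 2

def decode(recv7):
    """Correct at most one flipped bit, then return the 4 message bits."""
    r = np.array(recv7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():                       # non-zero syndrome: locate the error
        err_pos = next(i for i in range(7)
                       if np.array_equal(H[:, i], syndrome))
        r[err_pos] ^= 1                      # flip the offending bit
    return r[:4]                             # systematic code: message is the first 4 bits

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[5] ^= 1                                   # inject a single-bit error
assert list(decode(cw)) == msg
```

The syndrome H·r is zero for a valid codeword and otherwise equals the column of H at the position of a single flipped bit, which is why one error can always be located.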
Hamming codes are only suitable for more reliable single-level cell (SLC) NAND.
Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon.[7][8] NOR flash typically does not use any error correction.[7]
Classical block codes are usually decoded using hard-decision algorithms,[9] which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes.
In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates.
Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions.
In this setting, the Hamming distance is the appropriate way to measure the bit error rate.
A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as marker codes and watermark codes.
The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes.[10]
The fundamental principle of ECC is to add redundant bits in order to help the decoder find out the true message that was encoded by the transmitter. The code rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication packet. The code rate is hence a real number. A low code rate close to zero implies a strong code that uses many redundant bits to achieve good performance, while a code rate close to 1 implies a weak code.
The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental trade-off between reliability and data rate.[11] At one extreme, a strong code (with low code rate) can induce an important increase in the effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code rate equal to 1) uses the full channel for information transfer, at the cost of leaving the bits without any additional protection.
One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero.[12] His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement.
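For one concrete channel model, the binary symmetric channel with crossover probability p, the capacity has the closed form C = 1 − H₂(p). The channel model is an illustration chosen here, not one singled out by the text above:

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity (bits per channel use) of a binary symmetric channel
    with crossover probability p: C = 1 - H2(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h2 = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1.0 - h2

# No code with rate above bsc_capacity(p) can drive the error rate to zero;
# at p = 0.11 the capacity is about 0.5, so rate-1/2 codes are near the limit.
print(bsc_capacity(0.11))
```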
The most popular ECCs have a trade-off between performance and computational complexity. Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions, in order to limit the energy cost of the communication.[13]
Classical (algebraic) block codes and convolutional codes are frequently combined inconcatenatedcoding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single pass decoding with this family of error correction codes can yield very low error rates, but for long range transmission conditions (like deep space) iterative decoding is recommended.
Concatenated codes have been standard practice in satellite and deep space communications sinceVoyager 2first used the technique in its 1986 encounter withUranus. TheGalileocraft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.
Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity-check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel.
LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort of implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s.
LDPC codes are now used in many recent high-speed communication standards, such asDVB-S2(Digital Video Broadcasting – Satellite – Second Generation),WiMAX(IEEE 802.16estandard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n),[14]10GBase-T Ethernet(802.3an) andG.hn/G.9960(ITU-T Standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within3GPPMBMS(seefountain codes).
Turbo codingis an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of theShannon limit. PredatingLDPC codesin terms of practical application, they now provide similar performance.
One of the earliest commercial applications of turbo coding was theCDMA2000 1x(TIA IS-2000) digital cellular technology developed byQualcommand sold byVerizon Wireless,Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access,1xEV-DO(TIA IS-856). Like 1x, EV-DO was developed byQualcomm, and is sold byVerizon Wireless,Sprint, and other carriers (Verizon's marketing name for 1xEV-DO isBroadband Access, Sprint's consumer and business marketing names for 1xEV-DO arePower VisionandMobile Broadband, respectively).
Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Also such codes have become an important tool incomputational complexity theory, e.g., for the design ofprobabilistically checkable proofs.
Locally decodable codesare error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions.Locally testable codesare error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
Local decoding and local testing are distinct properties: not all locally decodable codes (LDCs) are locally testable codes (LTCs),[15] nor are they all locally correctable codes (LCCs);[16] q-query LCCs are bounded exponentially in length,[17][18] while LDCs can have subexponential lengths.[19][20]
Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors.[21] Therefore, interleaving is widely used for burst error-correction.
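A minimal sketch of a rectangular (block) interleaver, one simple way to realize the shuffling just described (illustrative helper functions, not from any particular standard):

```python
def interleave(symbols, rows, cols):
    """Write symbols row by row into a rows x cols matrix, read out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse of interleave: undo the column-wise read-out."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list("aaaabbbbccccdddd")          # four 4-symbol codewords
sent = interleave(data, rows=4, cols=4)
for i in range(4, 8):                    # a burst wipes out 4 consecutive channel symbols
    sent[i] = "_"
received = deinterleave(sent, rows=4, cols=4)
print("".join(received))                 # a_aab_bbc_ccd_dd: one erasure per codeword
```

After de-interleaving, the burst touches each codeword only once, which a single-error-correcting code can repair.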
The analysis of modern iterated codes, liketurbo codesandLDPC codes, typically assumes an independent distribution of errors.[22]Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word.[23]
For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance.[21][24]The iterative decoding algorithm works best when there are not short cycles in thefactor graphthat represents the decoder; the interleaver is chosen to avoid short cycles.
Interleaver designs include:
In multi-carriercommunication systems, interleaving across carriers may be employed to provide frequencydiversity, e.g., to mitigatefrequency-selective fadingor narrowband interference.[28]
Transmission without interleaving:
Here, each group of the same letter represents a 4-bit one-bit error-correcting codeword. The codeword cccc is altered in one bit and can be corrected, but the codeword dddd is altered in three bits, so either it cannot be decoded at all or it might bedecoded incorrectly.
With interleaving:
In each of the codewords "aaaa", "eeee", "ffff", and "gggg", only one bit is altered, so one-bit error-correcting code will decode everything correctly.
Transmission without interleaving:
The term "AnExample" ends up mostly unintelligible and difficult to correct.
With interleaving:
No word is completely lost and the missing letters can be recovered with minimal guesswork.
Use of interleaving techniques increases total delay. This is because the entire interleaved block must be received before the packets can be decoded.[29]Also interleavers hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver[citation needed]. An example of such an algorithm is based onneural network[30]structures.
Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice for designing, validating and improving ECCs. The upcoming wireless 5G standard opens a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications: for instance, in 5G the software ECCs could be located in the cloud, with the antennas connected to these computing resources, improving the flexibility of the communication network and potentially increasing the energy efficiency of the system.
In this context, various open-source software implementations are available (the list below is non-exhaustive).
|
https://en.wikipedia.org/wiki/Error-correcting_code
|
In iterative reconstruction in digital imaging, interior reconstruction (also known as limited field of view (LFV) reconstruction) is a technique to correct truncation artifacts caused by limiting image data to a small field of view. The reconstruction focuses on an area known as the region of interest (ROI). Although interior reconstruction can be applied to dental or cardiac CT images, the concept is not limited to CT. It is applied with one of several methods.
The purpose of each method is to solve for the vector x in the following problem:
Let X be the region of interest (ROI) and Y be the region outside of X.
Assume A, B, C, D are known matrices; x and y are unknown vectors of the original image, while f and g are vector measurements of the responses (f is known and g is unknown). x is inside region X (x ∈ X) and y, in the region Y (y ∈ Y), is outside region X. f is inside a region in the measurement corresponding to X; this region is denoted F (f ∈ F), while g is outside of the region F. It corresponds to Y and is denoted G (g ∈ G).
For CT image-reconstruction purposes, C = 0.
To simplify the concept of interior reconstruction, the matrices A, B, C, D are applied to image reconstruction instead of complex operators.
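Written out, the setup above amounts to the block system below (this is only the relation implied by the definitions of A, B, C, D, f, g, x, y; the original article's own formula is not reproduced in the text):

```latex
\begin{bmatrix} f \\ g \end{bmatrix}
=
\begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix},
\qquad
\text{so } f = A x + B y \ \text{ and }\ g = C x + D y,
\quad\text{with } C = 0 \text{ for CT.}
```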
The first interior-reconstruction method listed below isextrapolation. It is a local tomography method which eliminates truncation artifacts but introduces another type of artifact: a bowl effect. An improvement is known as the adaptive extrapolation method, although the iterative extrapolation method below also improves reconstruction results. In some cases, the exact reconstruction can be found for the interior reconstruction. The local inverse method below modifies the local tomography method, and may improve the reconstruction result of the local tomography; the iterative reconstruction method can be applied to interior reconstruction. Among the above methods, extrapolation is often applied.
A, B, C, D are known matrices; x and y are unknown vectors; f is a known vector, and g is an unknown vector. We need to know the vector x. x and y are the original image, while f and g are measurements of responses. Vector x is inside the region of interest X (x ∈ X). Vector y is outside the region X; the outside region is called Y (y ∈ Y). f is inside a region in the measurement corresponding to X; this region is denoted F (f ∈ F). The region of vector g (outside the region F) also corresponds to Y and is denoted G (g ∈ G).
In CT image reconstruction, C = 0.
To simplify the concept of interior reconstruction, the matrices A, B, C, D are applied to image reconstruction instead of a complex operator.
The response g in the outside region G can be replaced by a guess; for example, assume it is g_ex.
A solution of x is then written as x₀; this is known as the extrapolation method. The result depends on how good the extrapolation function g_ex is. A frequent choice is
at the boundary of the two regions.[1][2][3][4] The extrapolation method is often combined with a priori knowledge,[5][6] and an extrapolation method which reduces calculation time is shown below.
Assume a rough solution, x₀ and y₀, is obtained from the extrapolation method described above. The response in the outside region, g₁, can be calculated as follows:
The reconstructed image can then be calculated as follows:
It is assumed that
at the boundary of the interior region; x₁ solves the problem and is known as the adaptive extrapolation method. g_1ex is the adaptive extrapolation function.[7][8][9][10][5]
It is assumed that a rough solution, x₀ and y₀, is obtained from the extrapolation method described below:
or
The reconstruction can be obtained as
Here g_1ex is an extrapolation function, and it is assumed that
x₁ is one solution of this problem.[11]
Local tomography, with a very short filter, is also known as lambda tomography.[12][13]
The local inverse method extends the concept of local tomography. The response in the outside region can be calculated as follows:
Consider the generalized inverse B⁺ satisfying
Define
so that
Hence,
The above equation can be solved as
considering that
Q⁺ is the generalized inverse of Q, i.e.
The solution can be simplified as
The matrix A⁺Q = A⁺[I − BB⁺] is known as the local inverse of the block matrix [A B; C D], corresponding to A. This is known as the local inverse method.[11]
Here a goal function is defined, and this method iteratively achieves the goal. If the goal function can be some kind of normal, this is known as the minimal norm method.
subject to
and f is known,
where R, S and T are weighting constants of the minimization and ‖·‖ is some kind of norm. Often-used norms are the L₀, L₁, L₂ and L∞ norms and the total variation (TV) norm, or a combination of the above norms. An example of this method is the projection onto convex sets (POCS) method.[14][15]
In special situations, the interior reconstruction can be obtained as an analytical solution; the solution of x is exact in such cases.[16][17][18]
Extrapolated data is often convolved with a kernel function. After data is extrapolated, its size increases N times, where N = 2–3. If the data needs to be convolved with a known kernel function, the numerical calculations will increase by log(N)·N times, even with the fast Fourier transform (FFT). An algorithm exists that analytically calculates the contribution from part of the extrapolated data, so that its calculation time can be omitted compared to the original convolution calculation; with this algorithm, the cost of a convolution using the extrapolated data is not noticeably increased. This is known as fast extrapolation.[19]
The extrapolation method is suitable in a situation where
The adaptive extrapolation method is suitable for a situation where
The iterative extrapolation method is suitable for a situation in which
Local tomography is suitable for a situation in which
The local inverse method, like local tomography, is suitable in a situation in which
The iterative reconstruction method obtains a good result with large calculations. Although the analytic method achieves an exact result, it is only functional in some situations. The fast extrapolation method can get the same results as the other extrapolation methods, and can be applied to the above interior reconstruction methods to reduce the calculation.
|
https://en.wikipedia.org/wiki/Interior_reconstruction
|
Determining the number of clusters in adata set, a quantity often labelledkas in thek-means algorithm, is a frequent problem indata clustering, and is a distinct issue from the process of actually solving the clustering problem.
For a certain class ofclustering algorithms(in particulark-means,k-medoidsandexpectation–maximization algorithm), there is a parameter commonly referred to askthat specifies the number of clusters to detect. Other algorithms such asDBSCANandOPTICS algorithmdo not require the specification of this parameter;hierarchical clusteringavoids the problem altogether.
The correct choice of k is often ambiguous, with interpretations depending on the shape and scale of the distribution of points in a data set and the desired clustering resolution of the user. In addition, increasing k without penalty will always reduce the amount of error in the resulting clustering, to the extreme case of zero error if each data point is considered its own cluster (i.e., when k equals the number of data points, n). Intuitively then, the optimal choice of k will strike a balance between maximum compression of the data using a single cluster, and maximum accuracy by assigning each data point to its own cluster. If an appropriate value of k is not apparent from prior knowledge of the properties of the data set, it must be chosen somehow. There are several categories of methods for making this decision.
The elbow method looks at the percentage of explained variance as a function of the number of clusters:
One should choose a number of clusters so that adding another cluster does not give much better modeling of the data.
More precisely, if one plots the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion".
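A minimal sketch of this procedure with scikit-learn (assuming the library is available; inertia_, the within-cluster sum of squared distances, plays the role of unexplained variance here, so the elbow appears where its decrease levels off):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy data: three well-separated 2-D blobs
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 5, 10)])

inertias = []
for k in range(1, 10):
    inertias.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_)

# Look for the k after which the decrease in inertia levels off (the "elbow").
for k, w in zip(range(1, 10), inertias):
    print(k, round(w, 1))
```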
In most datasets, this "elbow" is ambiguous,[1] making the method subjective and unreliable. Because the scale of the axes is arbitrary, the concept of an angle is not well-defined, and even on uniform random data the curve produces an "elbow", making the method rather unreliable.[2] The percentage of variance explained is the ratio of the between-group variance to the total variance, the quantity also used in an F-test. A slight variation of this method plots the curvature of the within-group variance.[3]
The method can be traced to speculation byRobert L. Thorndikein 1953.[4]While the idea of the elbow method sounds simple and straightforward, other methods (as detailed below) give better results.
In statistics anddata mining,X-means clusteringis a variation ofk-means clusteringthat refines cluster assignments by repeatedly attempting subdivision, and keeping the best resulting splits, until a criterion such as theAkaike information criterion(AIC) orBayesian information criterion(BIC) is reached.[5]
Another set of methods for determining the number of clusters are information criteria, such as theAkaike information criterion(AIC),Bayesian information criterion(BIC), or thedeviance information criterion(DIC) — if it is possible to make alikelihood functionfor the clustering model.
For example: Thek-means model is "almost" aGaussian mixture modeland one can construct a likelihood for the Gaussian mixture model and thus also determine information criterion values.[6]
Rate distortion theory has been applied to choosing k in the so-called "jump" method, which determines the number of clusters that maximizes efficiency while minimizing error by information-theoretic standards.[7] The strategy of the algorithm is to generate a distortion curve for the input data by running a standard clustering algorithm such as k-means for all values of k between 1 and n, and computing the distortion (described below) of the resulting clustering. The distortion curve is then transformed by a negative power chosen based on the dimensionality of the data. Jumps in the resulting values then signify reasonable choices for k, with the largest jump representing the best choice.
The distortion of a clustering of some input data is formally defined as follows: let the data set be modeled as a p-dimensional random variable X consisting of a mixture distribution of G components with common covariance Γ. If we let c₁, …, c_K be a set of K cluster centers, with c_X the closest center to a given sample of X, then the minimum average distortion per dimension when fitting the K centers to the data is d_K = (1/p) · min over c₁, …, c_K of E[(X − c_X)ᵀ Γ⁻¹ (X − c_X)].
This is also the average Mahalanobis distance per dimension between X and the closest cluster center c_X. Because the minimization over all possible sets of cluster centers is prohibitively complex, the distortion is computed in practice by generating a set of cluster centers using a standard clustering algorithm and computing the distortion using the result. The jump method, for an input set of p-dimensional data points X, proceeds as sketched below.
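A hedged Python sketch of the jump method as described above (not the original authors' code; for simplicity the common covariance Γ is taken to be the identity, so plain squared Euclidean distortion stands in for the Mahalanobis distance):

```python
import numpy as np
from sklearn.cluster import KMeans

def jump_method(X, k_max):
    """Return the K in 1..k_max with the largest jump in transformed distortion."""
    n, p = X.shape
    d = [np.inf]                                  # d_0: no clusters, infinite distortion
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        d.append(km.inertia_ / (n * p))           # average distortion per dimension
    d = np.asarray(d)
    transformed = d ** (-p / 2.0)                 # d_K^{-p/2}; the d_0 entry becomes 0
    jumps = np.diff(transformed)                  # jump at K: d_K^{-p/2} - d_{K-1}^{-p/2}
    return int(np.argmax(jumps)) + 1              # +1 because jumps[0] corresponds to K = 1
```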
The choice of the transform power Y = (p/2) is motivated by asymptotic reasoning using results from rate distortion theory. Let the data X have a single, arbitrary p-dimensional Gaussian distribution, and let K = ⌊α^p⌋ be fixed for some α greater than zero. Then the distortion of a clustering of K clusters in the limit as p goes to infinity is α⁻². It can be seen that asymptotically, the distortion of a clustering to the power (−p/2) is proportional to α^p, which by definition is approximately the number of clusters K. In other words, for a single Gaussian distribution, increasing K beyond the true number of clusters, which should be one, causes a linear growth in distortion. This behavior is important in the general case of a mixture of multiple distribution components.
Let X be a mixture of G p-dimensional Gaussian distributions with common covariance. Then for any fixed K less than G, the distortion of a clustering as p goes to infinity is infinite. Intuitively, this means that a clustering of less than the correct number of clusters is unable to describe asymptotically high-dimensional data, causing the distortion to increase without limit. If, as described above, K is made an increasing function of p, namely K = ⌊α^p⌋, the same result as above is achieved, with the value of the distortion in the limit as p goes to infinity being equal to α⁻². Correspondingly, there is the same proportional relationship between the transformed distortion and the number of clusters, K.
Putting the results above together, it can be seen that for sufficiently high values of p, the transformed distortion d_K^(−p/2) is approximately zero for K < G, then jumps suddenly and begins increasing linearly for K ≥ G. The jump algorithm for choosing K makes use of these behaviors to identify the most likely value for the true number of clusters.
Although the mathematical support for the method is given in terms of asymptotic results, the algorithm has beenempiricallyverified to work well in a variety of data sets with reasonable dimensionality. In addition to the localized jump method described above, there exists a second algorithm for choosingKusing the same transformed distortion values known as the broken line method. The broken line method identifies the jump point in the graph of the transformed distortion by doing a simpleleast squareserror line fit of two line segments, which in theory will fall along thex-axis forK<G, and along the linearly increasing phase of the transformed distortion plot forK≥G. The broken line method is more robust than the jump method in that its decision is global rather than local, but it also relies on the assumption of Gaussian mixture components, whereas the jump method is fullynon-parametricand has been shown to be viable for general mixture distributions.
The average silhouette of the data is another useful criterion for assessing the natural number of clusters. The silhouette of a data instance is a measure of how closely it is matched to data within its cluster and how loosely it is matched to data of the neighboring cluster, i.e., the cluster whose average distance from the datum is lowest.[8] A silhouette close to 1 implies the datum is in an appropriate cluster, while a silhouette close to −1 implies the datum is in the wrong cluster. Optimization techniques such as genetic algorithms are useful in determining the number of clusters that gives rise to the largest silhouette.[9] It is also possible to re-scale the data in such a way that the silhouette is more likely to be maximized at the correct number of clusters.[10]
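A minimal sketch of choosing k by the average silhouette with scikit-learn (the toy data and names are illustrative assumptions, not part of the method itself):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 5, 10)])

scores = {}
for k in range(2, 10):                        # the silhouette needs at least 2 clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # mean silhouette over all points

best_k = max(scores, key=scores.get)          # expected to pick k = 3 for these blobs
```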
One can also use the process of cross-validation to analyze the number of clusters. In this process, the data is partitioned into v parts. Each of the parts is then set aside in turn as a test set, a clustering model is computed on the other v − 1 training sets, and the value of the objective function (for example, the sum of the squared distances to the centroids for k-means) is calculated for the test set. These v values are calculated and averaged for each alternative number of clusters, and the cluster number is selected such that a further increase in the number of clusters leads to only a small reduction in the objective function.[citation needed]
When clustering text databases with the cover coefficient on a document collection defined by a document-by-term matrix D (of size m×n, where m is the number of documents and n is the number of terms), the number of clusters can roughly be estimated by the formula mn/t, where t is the number of non-zero entries in D. Note that in D each row and each column must contain at least one non-zero element.[11]
Kernel matrix defines the proximity of the input information. For example, in Gaussianradial basis function, it determines thedot productof the inputs in a higher-dimensional space, calledfeature space. It is believed that the data become more linearly separable in the feature space, and hence, linear algorithms can be applied on the data with a higher success.
The kernel matrix can thus be analyzed in order to find the optimal number of clusters.[12]The method proceeds by the eigenvalue decomposition of the kernel matrix. It will then analyze the eigenvalues and eigenvectors to obtain a measure of the compactness of the input distribution. Finally, a plot will be drawn, where the elbow of that plot indicates the optimal number of clusters in the data set. Unlike previous methods, this technique does not need to perform any clustering a-priori. It directly finds the number of clusters from the data.
Robert Tibshirani, Guenther Walther, and Trevor Hastie proposed estimating the number of clusters in a data set via the gap statistic.[13] The gap statistic, motivated by theoretical grounds, measures how far the pooled within-cluster sum of squares around the cluster centers falls below the sum of squares expected under a null reference distribution of the data.
The expected value is estimated by simulating null reference data with the characteristics of the original data but without any clusters in it.
The optimal number of clusters is then estimated as the value of k for which the observed sum of squares falls farthest below the null reference.
Unlike many previous methods, the gap statistic can tell us that there is no value of k for which there is a good clustering, but its reliability depends on how plausible the assumed null distribution (e.g., a uniform distribution) is for the given data. It tends to work well in synthetic settings, but it cannot handle difficult data sets with, e.g., uninformative attributes well, because it assumes all attributes to be equally important.[14]
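A simplified sketch of the gap statistic using a uniform reference over the data's bounding box (one of the reference schemes the authors discuss); k-means inertia stands in for the pooled within-cluster dispersion, and the standard-error correction of the full procedure is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max, n_refs=10, seed=0):
    """Return the k in 1..k_max with the largest gap between observed and
    reference log within-cluster dispersion."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        log_wk = np.log(KMeans(n_clusters=k, n_init=10,
                               random_state=seed).fit(X).inertia_)
        ref_logs = []
        for _ in range(n_refs):
            ref = rng.uniform(lo, hi, size=X.shape)   # null data: no cluster structure
            ref_logs.append(np.log(KMeans(n_clusters=k, n_init=10,
                                          random_state=seed).fit(ref).inertia_))
        gaps.append(np.mean(ref_logs) - log_wk)       # larger gap = better k
    return int(np.argmax(gaps)) + 1
```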
The gap statistic is implemented as the clusGap function in the cluster package[15] in R.
|
https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
|
In mathematics, the Kleene–Rosser paradox is a paradox that shows that certain systems of formal logic are inconsistent, in particular the version of Haskell Curry's combinatory logic introduced in 1930, and Alonzo Church's original lambda calculus, introduced in 1932–1933, both originally intended as systems of formal logic. The paradox was exhibited by Stephen Kleene and J. B. Rosser in 1935.
Kleene and Rosser were able to show that both systems are able to characterize and enumerate their provably total, definable number-theoretic functions, which enabled them to construct a term that essentially replicates Richard's paradox in formal language.
Curry later managed to identify the crucial ingredients of the calculi that allowed the construction of this paradox, and used this to construct a much simpler paradox, now known as Curry's paradox.
|
https://en.wikipedia.org/wiki/Kleene%E2%80%93Rosser_paradox
|
A network partition is a division of a computer network into relatively independent subnets, either by design (to optimize them separately) or due to the failure of network devices. Distributed software must be designed to be partition-tolerant: even after the network is partitioned, it should still work correctly.
For example, in a network with multiple subnets where nodes A and B are located in one subnet and nodes C and D are in another, a partition occurs if the network switch between the two subnets fails. In that case nodes A and B can no longer communicate with nodes C and D, but all nodes A–D work the same as before.
The CAP theorem is based on three trade-offs: consistency, availability, and partition tolerance. Partition tolerance, in this context, means the ability of a data processing system to continue processing data even if a network partition causes communication errors between subsystems.[1]
|
https://en.wikipedia.org/wiki/Network_partition
|
The Global System for Mobile Communications (GSM) is a family of standards describing the protocols for second-generation (2G) digital cellular networks,[2] as used by mobile devices such as mobile phones and mobile broadband modems. GSM is also a trade mark owned by the GSM Association.[3] "GSM" may also refer to the voice codec initially used in GSM.[4]
2G networks developed as a replacement for first-generation (1G) analog cellular networks. The original GSM standard, which was developed by the European Telecommunications Standards Institute (ETSI), originally described a digital, circuit-switched network optimized for full-duplex voice telephony, employing time-division multiple access (TDMA) between stations. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via its upgraded standards, GPRS and then EDGE. GSM exists in various versions based on the frequency bands used.
GSM was first implemented in Finland in December 1991.[5] It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard, CDMA.[6] Its market share reached over 90% by the mid-2010s, and it operated in over 219 countries and territories.[2] The specifications and maintenance of GSM passed over to the 3GPP body in 2000,[7] which at the time developed the third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation 5G standards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwide started to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the many mobile phone technologies that evolved from it, or for mobile phones themselves.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when theEuropean Conference of Postal and Telecommunications Administrations(CEPT) set up theGroupe Spécial Mobile(GSM) committee and later provided a permanent technical-support group based inParis. Five years later, in 1987, 15 representatives from 13 European countries signed amemorandum of understandinginCopenhagento develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8]The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12]
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers fromthe four big EU countries[clarification needed]cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSMMoUwas tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK),Philippe Dupuis(France), and Renzo Failli (Italy).[13]In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to theEuropean Telecommunications Standards Institute(ETSI).[10][11][12]The IEEE/RSE awarded toThomas HaugandPhilippe Dupuisthe 2018James Clerk Maxwell medalfor their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14]The GSM (2G) has evolved into 3G, 4G and 5G.
In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. It was long believed that the former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja.[15] In 2021 a former Nokia engineer, Pekka Lonka, revealed to Helsingin Sanomat that he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo."[16] The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band and the first 1800 MHz network became operational in the UK by 1993, called the DCS 1800. Also that year,Telstrabecame the first network operator to deploy a GSM network outside Europe and the first practical hand-held GSMmobile phonebecame available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, theGSM Associationformed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.[11]
In 2000 the first commercialGeneral Packet Radio Service(GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the firstMultimedia Messaging Service(MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational.Enhanced Data rates for GSM Evolution(EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11]
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the firstHSDPA-capable network also became operational. The firstHSUPAnetwork launched in 2007. (High Speed Packet Access(HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11]
TheGSM Associationestimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17]
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3GUniversal Mobile Telecommunications System(UMTS),code-division multiple access(CDMA) technology, nor the 4G LTEorthogonal frequency-division multiple access(OFDMA) technology standards issued by the 3GPP.[18]
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]
Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network.[20] The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States.[21] Optus in Australia completed the shutdown of its 2G GSM network on 1 August 2017; the part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier in the year, in April 2017.[22] Singapore shut down 2G services entirely in April 2017.[23]
The network is structured into several discrete sections:
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro, micro, pico, femto, and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where thebase-stationantennais installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential orsmall-businessenvironments and connect to atelecommunications service provider's network via abroadband-internetconnection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height,antenna gain, andpropagationconditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell,[24]where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and thetiming advance.
GSM supports indoor coverage – achievable by using an indoor picocell base station, or anindoor repeaterwith distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of differentcarrier frequencyranges (separated intoGSM frequency rangesfor 2G andUMTS frequency bandsfor 3G), with most2GGSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most3Gnetworks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, seeGSM frequency bands.
Regardless of the frequency selected by an operator, it is divided intotimeslotsfor individual phones. This allows eight full-rate or sixteen half-rate speech channels perradio frequency. These eight radio timeslots (orburstperiods) are grouped into aTDMAframe. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all8 channelsis270.833 kbit/s,and the frame duration is4.615 ms.[25]TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26]
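The timing figures quoted above can be checked with simple arithmetic. The sketch below assumes the commonly cited GSM burst length of 156.25 bits per timeslot, which is not stated in the passage itself:

```python
# Illustrative check of the GSM TDMA frame arithmetic quoted above.
gross_rate_bps = 270.833e3        # channel data rate shared by the 8 timeslots
slots_per_frame = 8
bits_per_slot = 156.25            # assumed burst length per timeslot (not in the text)

frame_bits = slots_per_frame * bits_per_slot           # 1250 bits per TDMA frame
slot_ms = bits_per_slot / gross_rate_bps * 1000        # ~0.577 ms per burst period
frame_ms = frame_bits / gross_rate_bps * 1000          # ~4.615 ms per frame, as in the text

print(round(slot_ms, 3), round(frame_ms, 3))
```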
The transmission power in the handset is limited to a maximum of 2 watts inGSM 850/900and1 wattinGSM 1800/1900.
GSM has used a variety of voicecodecsto squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, calledHalf Rate(6.5 kbit/s) andFull Rate(13 kbit/s). These used a system based onlinear predictive coding(LPC). In addition to being efficient withbitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27]with theenhanced full rate(EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development ofUMTS, EFR was refactored into a variable-rate codec calledAMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channel.
One of the key features of GSM is theSubscriber Identity Module, commonly known as aSIM card. The SIM is a detachablesmart card[3]containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets - simply by changing the SIM.
Sometimesmobile network operatorsrestrict handsets that they sell for exclusive use in their own network. This is calledSIM lockingand is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g.BrazilandGermany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]
GSM was intended to be a secure wireless system. Its design includes user authentication using a pre-shared key and challenge–response, and over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.[29]
Research findings indicate that GSM faces susceptibility to hacking byscript kiddies, a term referring to inexperienced individuals utilizing readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Despite the term "script kiddies" implying a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality ofcellular networks. Given that GSM continues to be the main source of cellular technology in numerous countries, its susceptibility to potential threats from malicious attacks is one that needs to be addressed.[30]
The development ofUMTSintroduced an optionalUniversal Subscriber Identity Module(USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and nonon-repudiation.
GSM uses several cryptographic algorithms for security. TheA5/1,A5/2, andA5/3stream ciphersare used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with aciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to useFPGAsthat allow A5/1 to be broken with arainbow tableattack.[31]The system supports multiple algorithms so operators may replace that cipher with a stronger one.
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both the A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known plaintext attacks.[32] He said that it is possible to build "a full GSM interceptor... from open-source components" but that they had not done so because of legal concerns.[33] Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.[34]
GSM usesGeneral Packet Radio Service(GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35]
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software forsniffingGPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g.,Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used withUSIMto prevent connections to fake base stations anddowngrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that although using a 64-bit key, the GEA-1 algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have happened if it was not intentional. This may have been done in order to satisfy European controls on export of cryptographic programs.[36][37][38]
The GSM systems and services are described in a set of standards governed byETSI, where a full list is maintained.[39]
Several open-source software projects exist that provide certain GSM features,[40] such as a base transceiver station implemented by OpenBTS and the Osmocom stack, which provides various other parts.[41]
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time which means they have patent protection for a number of years.[citation needed]
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. As of 2011[update], there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
|
https://en.wikipedia.org/wiki/GSM#Security
|
Inmathematics, more specificallydifferential algebra, ap-derivation(forpaprime number) on aringR, is a mapping fromRtoRthat satisfies certain conditions outlined directly below. The notion of ap-derivationis related to that of aderivationin differential algebra.
Let p be a prime number. A p-derivation or Buium derivative on a ring R{\displaystyle R} is a map δ:R→R{\displaystyle \delta :R\to R} that satisfies the following "product rule":
{\displaystyle \delta (ab)=a^{p}\delta (b)+b^{p}\delta (a)+p\,\delta (a)\delta (b),}
and "sum rule":
{\displaystyle \delta (a+b)=\delta (a)+\delta (b)+{\frac {a^{p}+b^{p}-(a+b)^{p}}{p}},}
as well as
{\displaystyle \delta (1)=0.}
Note that in the "sum rule" we are not really dividing byp, since all the relevantbinomial coefficientsin the numerator are divisible byp, so this definition applies in the case whenR{\displaystyle R}hasp-torsion.
A mapσ:R→R{\displaystyle \sigma :R\to R}is a lift of theFrobenius endomorphismprovidedσ(x)=xp(modpR){\displaystyle \sigma (x)=x^{p}{\pmod {pR}}}. An example of such a lift could come from theArtin map.
If (R,δ){\displaystyle (R,\delta )} is a ring with a p-derivation, then the map σ(x):=xp+pδ(x){\displaystyle \sigma (x):=x^{p}+p\delta (x)} defines a ring endomorphism which is a lift of the Frobenius endomorphism. When the ring R is p-torsion free, this correspondence between p-derivations and Frobenius lifts is a bijection.
For example, on R = Z (and similarly on the p-adic integers), the map
{\displaystyle \delta (x)={\frac {x-x^{p}}{p}}}
defines a p-derivation. The quotient is well-defined because of Fermat's little theorem, which guarantees that x^p ≡ x (mod p). More generally, if σ is any lift of the Frobenius endomorphism on a p-torsion-free ring R, then δ(x) = (σ(x) − x^p)/p defines a p-derivation.
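As an illustrative check (not from the article), the two rules can be verified numerically for this integer example; the prime p = 5 and the range of test values are arbitrary choices:

```python
from itertools import product

p = 5  # an arbitrary prime for the check

def delta(n: int) -> int:
    # The example p-derivation on Z; the division is exact by Fermat's little theorem.
    return (n - n**p) // p

for a, b in product(range(-6, 7), repeat=2):
    # Product rule: delta(ab) = a^p delta(b) + b^p delta(a) + p delta(a) delta(b)
    assert delta(a * b) == a**p * delta(b) + b**p * delta(a) + p * delta(a) * delta(b)
    # Sum rule: delta(a+b) = delta(a) + delta(b) + (a^p + b^p - (a+b)^p)/p
    assert delta(a + b) == delta(a) + delta(b) + (a**p + b**p - (a + b)**p) // p

assert delta(1) == 0
```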
|
https://en.wikipedia.org/wiki/P-derivation
|
Inmathematics,operator theoryis the study oflinear operatorsonfunction spaces, beginning withdifferential operatorsandintegral operators. The operators may be presented abstractly by their characteristics, such asbounded linear operatorsorclosed operators, and consideration may be given tononlinear operators. The study, which depends heavily on thetopologyof function spaces, is a branch offunctional analysis.
If a collection of operators forms analgebra over a field, then it is anoperator algebra. The description of operator algebras is part of operator theory.
Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification ofnormal operatorsin terms of theirspectrafalls into this category.
Thespectral theoremis any of a number of results aboutlinear operatorsor aboutmatrices.[1]In broad terms the spectraltheoremprovides conditions under which anoperatoror a matrix can bediagonalized(that is, represented as adiagonal matrixin some basis). This concept of diagonalization is relatively straightforward for operators onfinite-dimensionalspaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class oflinear operatorsthat can be modelled bymultiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement aboutcommutativeC*-algebras. See alsospectral theoryfor a historical perspective.
Examples of operators to which the spectral theorem applies areself-adjoint operatorsor more generallynormal operatorsonHilbert spaces.
The spectral theorem also provides acanonicaldecomposition, called thespectral decomposition,eigenvalue decomposition, oreigendecomposition, of the underlying vector space on which the operator acts.
Anormal operatoron acomplexHilbert spaceH{\displaystyle H}is acontinuouslinear operatorN:H→H{\displaystyle N\colon H\rightarrow H}thatcommuteswith itshermitian adjointN∗{\displaystyle N^{\ast }}, that is:NN∗=N∗N{\displaystyle NN^{\ast }=N^{\ast }N}.[2]
Normal operators are important because the spectral theorem holds for them. Today, the class of normal operators is well understood. Examples of normal operators are unitary operators, Hermitian (self-adjoint) operators, positive operators, and orthogonal projection operators.
The spectral theorem extends to a more general class of matrices. LetA{\displaystyle A}be an operator on a finite-dimensionalinner product space.A{\displaystyle A}is said to benormalifA∗A=AA∗{\displaystyle A^{\ast }A=AA^{\ast }}. One can show thatA{\displaystyle A}is normal if and only if it is unitarily diagonalizable: By theSchur decomposition, we haveA=UTU∗{\displaystyle A=UTU^{\ast }}, whereU{\displaystyle U}is unitary andT{\displaystyle T}upper triangular. SinceA{\displaystyle A}is normal,T∗T=TT∗{\displaystyle T^{\ast }T=TT^{\ast }}. Therefore,T{\displaystyle T}must be diagonal since normal upper triangular matrices are diagonal. The converse is obvious.
In other words,A{\displaystyle A}is normal if and only if there exists aunitary matrixU{\displaystyle U}such thatA=UDU∗{\displaystyle A=UDU^{*}}whereD{\displaystyle D}is adiagonal matrix. Then, the entries of the diagonal ofD{\displaystyle D}are theeigenvaluesofA{\displaystyle A}. The column vectors ofU{\displaystyle U}are theeigenvectorsofA{\displaystyle A}and they areorthonormal. Unlike the Hermitian case, the entries ofD{\displaystyle D}need not be real.
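A small numerical illustration of this characterization (a NumPy sketch with a randomly constructed matrix; not part of the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a normal (but not Hermitian) matrix A = U D U* with U unitary and D diagonal.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
d = rng.normal(size=n) + 1j * rng.normal(size=n)       # complex eigenvalues
A = Q @ np.diag(d) @ Q.conj().T

# Normality: A commutes with its adjoint.
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

# Unitary diagonalizability: the spectrum of A is exactly the chosen diagonal of D.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)), np.sort_complex(d))
```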
Thepolar decompositionof anybounded linear operatorAbetween complexHilbert spacesis a canonical factorization as the product of apartial isometryand a non-negative operator.[3]
The polar decomposition for matrices generalizes as follows: ifAis a bounded linear operator then there is a unique factorization ofAas a productA=UPwhereUis a partial isometry,Pis a non-negative self-adjoint operator and the initial space ofUis the closure of the range ofP.
The operator U must be weakened to a partial isometry, rather than unitary, because of the following issues. If A is the one-sided shift on l^2(N), then |A| = (A*A)^{1/2} = I. So if A = U|A|, U must be A, which is not unitary.
The existence of a polar decomposition is a consequence ofDouglas' lemma:
Lemma—IfA,Bare bounded operators on a Hilbert spaceH, andA*A≤B*B, then there exists a contractionCsuch thatA=CB. Furthermore,Cis unique ifKer(B*) ⊂Ker(C).
The operatorCcan be defined byC(Bh) =Ah, extended by continuity to the closure ofRan(B), and by zero on the orthogonal complement ofRan(B). The operatorCis well-defined sinceA*A≤B*BimpliesKer(B) ⊂ Ker(A). The lemma then follows.
In particular, ifA*A=B*B, thenCis a partial isometry, which is unique ifKer(B*) ⊂ Ker(C).In general, for any bounded operatorA,A∗A=(A∗A)12(A∗A)12,{\displaystyle A^{*}A=(A^{*}A)^{\frac {1}{2}}(A^{*}A)^{\frac {1}{2}},}where (A*A)1/2is the unique positive square root ofA*Agiven by the usualfunctional calculus. So by the lemma, we haveA=U(A∗A)12{\displaystyle A=U(A^{*}A)^{\frac {1}{2}}}for some partial isometryU, which is unique if Ker(A) ⊂ Ker(U). (NoteKer(A) = Ker(A*A) = Ker(B) = Ker(B*), whereB=B*= (A*A)1/2.) TakePto be (A*A)1/2and one obtains the polar decompositionA=UP. Notice that an analogous argument can be used to showA = P'U', whereP'is positive andU'a partial isometry.
WhenHis finite dimensional,Ucan be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version ofsingular value decomposition.
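In the finite-dimensional case the polar decomposition can be computed directly from the singular value decomposition, as in this small NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# From the SVD A = W S V*, take U = W V* (unitary here) and P = V S V* (positive).
W, s, Vh = np.linalg.svd(A)
U = W @ Vh
P = Vh.conj().T @ np.diag(s) @ Vh

assert np.allclose(A, U @ P)                        # A = UP
assert np.allclose(U.conj().T @ U, np.eye(3))       # U is unitary in finite dimensions
assert np.allclose(P, P.conj().T)                   # P is self-adjoint (and positive)
```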
By property of thecontinuous functional calculus, |A| is in theC*-algebragenerated byA. A similar but weaker statement holds for the partial isometry: the polar partUis in thevon Neumann algebragenerated byA. IfAis invertible,Uwill be in theC*-algebragenerated byAas well.
Many operators that are studied are operators on Hilbert spaces of holomorphic functions, and the study of the operator is intimately linked to questions in function theory.
For example,Beurling's theoremdescribes theinvariant subspacesof the unilateral shift in terms of inner functions, which are bounded holomorphic functions on the unit disk with unimodular boundary values almost everywhere on the circle. Beurling interpreted the unilateral shift as multiplication by the independent variable on theHardy space.[4]The success in studying multiplication operators, and more generallyToeplitz operators(which are multiplication, followed by projection onto the Hardy space) has inspired the study of similar questions on other spaces, such as theBergman space.
The theory ofoperator algebrasbringsalgebrasof operators such asC*-algebrasto the fore.
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map * : A → A. One writes x* for the image of an element x of A. The map * has the following properties:[5] for all x, y in A and every complex number λ, (x*)* = x; (x + y)* = x* + y* and (λx)* = λ̄x*, where λ̄ is the complex conjugate of λ; (xy)* = y*x*; and ‖x*x‖ = ‖x‖ ‖x*‖.
Remark.The first three identities say thatAis a*-algebra. The last identity is called theC* identityand is equivalent to:‖xx∗‖=‖x‖2,{\displaystyle \|xx^{*}\|=\|x\|^{2},}
The C*-identity is a very strong requirement. For instance, together with thespectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:‖x‖2=‖x∗x‖=sup{|λ|:x∗x−λ1is not invertible}.{\displaystyle \|x\|^{2}=\|x^{*}x\|=\sup\{|\lambda |:x^{*}x-\lambda \,1{\text{ is not invertible}}\}.}
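For matrices, where the operator norm plays the role of the C*-norm, this formula can be illustrated numerically (a NumPy sketch with a random matrix; not part of the article):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

op_norm = np.linalg.norm(x, 2)    # operator norm ||x||, the largest singular value
spectral_radius = max(abs(np.linalg.eigvals(x.conj().T @ x)))

# ||x||^2 equals sup{|lambda| : x*x - lambda*1 is not invertible}.
assert np.isclose(op_norm ** 2, spectral_radius)
```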
|
https://en.wikipedia.org/wiki/Operator_theory
|
Processor designis a subfield ofcomputer scienceandcomputer engineering(fabrication) that deals with creating aprocessor, a key component ofcomputer hardware.
The design process involves choosing aninstruction setand a certain execution paradigm (e.g.VLIWorRISC) and results in amicroarchitecture, which might be described in e.g.VHDLorVerilog. Formicroprocessordesign, this description is then manufactured employing some of the varioussemiconductor device fabricationprocesses, resulting in adiewhich is bonded onto achip carrier. This chip carrier is then soldered onto, or inserted into asocketon, aprinted circuit board(PCB).
The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values usingregisters, change or retrieve values in read/write memory, perform relational tests between data values and to control program flow.
Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry forsemiconductor fabrication.[1]
CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines). These datapaths are controlled through logic by control units. Memory components include register files and caches to retain information, or certain actions. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent, and a logic gate cell library is used to implement the logic. Logic gates are the foundation for processor design, as they are used to implement most of the processor's components.[2]
CPUs designed for high-performance markets might require custom (optimized or application specific (see below)) designs for each of these items to achieve frequency,power-dissipation, and chip-area goals whereas CPUs designed for lower performance markets might lessen the implementation burden by acquiring some of these items by purchasing them asintellectual property. Control logic implementation techniques (logic synthesisusing CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic,finite-state machines,microprogramming(common from 1965 to 1985), andProgrammable logic arrays(common in the 1980s, no longer common).
Device types used to implement the logic include:
A CPU design project generally has these major tasks:
Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomaskshrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have lessparasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within onevery-large-scale integrationchip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost.
As with most complex electronic designs, thelogic verificationeffort (proving that the design does not have bugs) now dominates the project schedule of a CPU.
Key CPU architectural innovations includeindex register,cache,virtual memory,instruction pipelining,superscalar,CISC,RISC,virtual machine,emulators,microprogram, andstack.
A variety ofnew CPU design ideashave been proposed,
includingreconfigurable logic,clockless CPUs,computational RAM, andoptical computing.
Benchmarkingis a way of testing CPU speed. Examples include SPECint andSPECfp, developed byStandard Performance Evaluation Corporation, and ConsumerMark developed by the Embedded Microprocessor Benchmark ConsortiumEEMBC.
Some of the commonly used metrics include:
There may be tradeoffs in optimizing some of these metrics. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.
There are several different markets in which CPUs are used. Since each of these markets differ in their requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets.
As of 2010[update], in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the IntelIA-32and the 64-bit versionx86-64architecture dominate the market, with its rivalsPowerPCandSPARCmaintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.[5]
Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with some disadvantages of being relatively costly and having high power consumption.
In 1984, most high-performance CPUs required four to five years to develop.[6]
Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs.
As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in volumes of many billions of units per year, though mostly at much lower price points than those of general-purpose processors.
These single-function devices differ from the more familiar general-purpose CPUs in several ways:
The embedded CPU family with the largest number of total units shipped is the8051, averaging nearly a billion units per year.[7]The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon.[8][9]
As of 2009, more CPUs are produced using theARM architecture familyinstruction sets than any other 32-bit instruction set.[10][11]The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time.[12]
The 32-bitParallax Propellermicrocontroller architecture and the first chip were designed by two people in about 10 human years of work time.[13]
The 8-bitAVR architectureand first AVR microcontroller was conceived and designed by two students at the Norwegian Institute of Technology.
The 8-bit 6502 architecture and the firstMOS Technology 6502chip were designed in 13 months by a group of about 9 people.[14]
The 32-bitBerkeley RISCI and RISC II processors were mostly designed by a series of students as part of a four quarter sequence of graduate courses.[15]This design became the basis of the commercialSPARCprocessor design.
For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8 bit CPU out of7400 seriesintegrated circuits.
One team of 4 students designed and built a simple 32 bit CPU during that semester.[16]
Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in a FPGA in a single 15-week semester.[17]
The MultiTitan CPU was designed with 2.5 man years of effort, which was considered "relatively little design effort" at the time.[18]24 people contributed to the 3.5 year MultiTitan research project, which included designing and building a prototype CPU.[19]
For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented bylogic synthesistechniques. These synthesized processors can be implemented in a much shorter amount of time, giving quickertime-to-market.
|
https://en.wikipedia.org/wiki/Processor_design
|
Anobject-oriented operating system[1]is anoperating systemthat is designed, structured, and operated usingobject-oriented programmingprinciples.
An object-oriented operating system is in contrast to an object-orienteduser interfaceor programmingframework, which can be run on a non-object-oriented operating system likeDOSorUnix.
There are already object-based language concepts involved in the design of a more typical operating system such as Unix. While a more traditional language like C does not support object-orientation as fluidly as more recent languages, the notion of, for example, a file, stream, or device driver (in Unix, each represented as a file descriptor) can be considered a good example of objects. They are, after all, abstract data types, with various methods in the form of system calls, whose behavior varies based on the type of object and whose implementation details are hidden from the caller.
Object-orientationhas been defined asobjects+inheritance, and inheritance is only one approach to the more general problem ofdelegationthat occurs in every operating system.[2]Object-orientation has been more widely used in theuser interfacesof operating systems than in theirkernels.
An object is an instance of a class, which provides a certain set of functionalities. Two objects can be differentiated based on the functionalities (or methods) they support. In an operating system context, objects are associated with a resource. Historically, the object-oriented design principles were used in operating systems to provide several protection mechanisms.[1]
Protection mechanisms in an operating system help in providing a clear separation between different user programs. It also protects the operating system from any malicious user program behavior. For example, consider the case of user profiles in an operating system. The user should not have access to resources of another user. The object model deals with these protection issues with each resource acting as an object. Every object can perform only a set of operations. In the context of user profiles, the set of operations is limited byprivilege levelof a user.[1]
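A minimal sketch of this object-based view of protection, assuming a hypothetical ownership check on each operation (the class and names are illustrative, not taken from any real operating system):

```python
class ProtectedFile:
    """A resource object; every operation checks the calling user's rights."""

    def __init__(self, owner: str, contents: str):
        self.owner = owner
        self.contents = contents

    def read(self, user: str) -> str:
        if user != self.owner:
            raise PermissionError(f"{user} may not read {self.owner}'s file")
        return self.contents

    def write(self, user: str, data: str) -> None:
        if user != self.owner:
            raise PermissionError(f"{user} may not write {self.owner}'s file")
        self.contents = data

f = ProtectedFile(owner="alice", contents="secret")
assert f.read("alice") == "secret"
try:
    f.read("bob")          # another user's program is rejected by the object itself
except PermissionError:
    pass
```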
Present-day operating systems use object-oriented design principles for many components of the system, which includes protection.
|
https://en.wikipedia.org/wiki/Object-oriented_operating_system
|
Inmathematics, aregular semigroupis asemigroupSin which every element isregular, i.e., for each elementainSthere exists an elementxinSsuch thataxa=a.[1]Regular semigroups are one of the most-studied classes of semigroups, and their structure is particularly amenable to study viaGreen's relations.[2]
Regular semigroups were introduced byJ. A. Greenin his influential 1951 paper "On the structure of semigroups"; this was also the paper in whichGreen's relationswere introduced. The concept ofregularityin a semigroup was adapted from an analogous condition forrings, already considered byJohn von Neumann.[3]It was Green's study of regular semigroups which led him to define his celebratedrelations. According to a footnote in Green 1951, the suggestion that the notion of regularity be applied tosemigroupswas first made byDavid Rees.
The terminversive semigroup(French: demi-groupe inversif) was historically used as synonym in the papers ofGabriel Thierrin(a student ofPaul Dubreil) in the 1950s,[4][5]and it is still used occasionally.[6]
There are two equivalent ways in which to define a regular semigroup S: (1) for each a in S there is an x in S with axa = a; (2) every element a of S has at least one inverse b, in the sense that aba = a and bab = b.
To see the equivalence of these definitions, first suppose thatSis defined by (2). Thenbserves as the requiredxin (1). Conversely, ifSis defined by (1), thenxaxis an inverse fora, sincea(xax)a=axa(xa) =axa=aand (xax)a(xax) =x(axa)(xax) =xa(xax) =x(axa)x=xax.[8]
The set of inverses (in the above sense) of an elementain an arbitrarysemigroupSis denoted byV(a).[9]Thus, another way of expressing definition (2) above is to say that in a regular semigroup,V(a) is nonempty, for everyainS. The product of any elementawith anybinV(a) is alwaysidempotent:abab=ab, sinceaba=a.[10]
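These definitions can be made concrete with a small computation (a sketch, not from the article): the full transformation monoid T_3, consisting of all maps from a three-element set to itself, is a standard example of a regular semigroup, and the properties above can be checked by brute force.

```python
from itertools import product

# All maps {0,1,2} -> {0,1,2}: the full transformation monoid T_3 (27 elements).
elems = list(product(range(3), repeat=3))

def mul(f, g):
    """Composition: apply g first, then f."""
    return tuple(f[g[i]] for i in range(3))

def inverses(a):
    """V(a): all x with a x a = a and x a x = x."""
    return [x for x in elems
            if mul(mul(a, x), a) == a and mul(mul(x, a), x) == x]

for a in elems:
    V = inverses(a)
    assert V                       # every element is regular, so V(a) is non-empty
    for b in V:
        ab = mul(a, b)
        assert mul(ab, ab) == ab   # the product of a with any of its inverses is idempotent
```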
A regular semigroup in which idempotents commute (with idempotents) is an inverse semigroup, or equivalently, every element has a unique inverse. To see this, let S be a regular semigroup in which idempotents commute. Then every element of S has at least one inverse. Suppose that a in S has two inverses b and c, i.e., aba = a, bab = b and aca = a, cac = c; as noted above, the products ab, ba, ac and ca are all idempotents.
Then b = bab = b(aca)b = (ba)(ca)b = (ca)(ba)b = c(aba)b = cab, and likewise c = cac = c(aba)c = c(ab)(ac) = c(ac)(ab) = (cac)(ab) = cab, so that b = cab = c.
So, by commuting the pairs of idempotentsab&acandba&ca, the inverse ofais shown to be unique. Conversely, it can be shown that anyinverse semigroupis a regular semigroup in which idempotents commute.[12]
The existence of a unique pseudoinverse implies the existence of a unique inverse, but the opposite is not true. For example, in thesymmetric inverse semigroup, the empty transformation Ø does not have a unique pseudoinverse, because Ø = ØfØ for any transformationf. The inverse of Ø is unique however, because only onefsatisfies the additional constraint thatf=fØf, namelyf= Ø. This remark holds more generally in any semigroup with zero. Furthermore, if every element has a unique pseudoinverse, then the semigroup is agroup, and the unique pseudoinverse of an element coincides with the group inverse.
Recall that the principal ideals of a semigroup S are defined in terms of S1, the semigroup with identity adjoined; this is to ensure that an element a belongs to the principal right, left and two-sided ideals which it generates. In a regular semigroup S, however, an element a = axa automatically belongs to these ideals, without recourse to adjoining an identity. Green's relations can therefore be redefined for regular semigroups as follows: a L b if and only if Sa = Sb; a R b if and only if aS = bS; and a J b if and only if SaS = SbS.
In a regular semigroupS, everyL{\displaystyle {\mathcal {L}}}- andR{\displaystyle {\mathcal {R}}}-class contains at least oneidempotent. Ifais any element ofSanda′is any inverse fora, thenaisL{\displaystyle {\mathcal {L}}}-related toa′aandR{\displaystyle {\mathcal {R}}}-related toaa′.[14]
Theorem.LetSbe a regular semigroup; letaandbbe elements ofS, and letV(x)denote the set of inverses ofxinS. Then
IfSis aninverse semigroup, then the idempotent in eachL{\displaystyle {\mathcal {L}}}- andR{\displaystyle {\mathcal {R}}}-class is unique.[12]
Some special classes of regular semigroups are:[16] locally inverse semigroups, regular semigroups in which eSe is an inverse semigroup for each idempotent e; orthodox semigroups, regular semigroups whose idempotents form a subsemigroup; and generalised inverse semigroups, regular semigroups whose idempotents form a normal band.
Theclassof generalised inverse semigroups is theintersectionof the class of locally inverse semigroups and the class of orthodox semigroups.[17]
All inverse semigroups are orthodox and locally inverse. The converse statements do not hold.
|
https://en.wikipedia.org/wiki/Regular_semigroup
|
Non-interactivezero-knowledge proofsarecryptographic primitives, where information between a prover and a verifier can be authenticated by the prover, without revealing any of the specific information beyond the validity of the statement itself. This makes direct communication between the prover and verifier unnecessary, effectively removing any intermediaries.
The key advantage of non-interactivezero-knowledge proofsis that they can be used in situations where there is no possibility of interaction between the prover and verifier, such as in online transactions where the two parties are not able to communicate in real time. This makes non-interactive zero-knowledge proofs particularly useful in decentralized systems likeblockchains, where transactions are verified by a network ofnodesand there is no central authority to oversee the verification process.[1]
Most non-interactive zero-knowledge proofs are based on mathematical constructs likeelliptic curve cryptographyorpairing-based cryptography, which allow for the creation of short and easily verifiable proofs of the truth of a statement. Unlike interactive zero-knowledge proofs, which require multiple rounds of interaction between the prover and verifier, non-interactive zero-knowledge proofs are designed to be efficient and can be used to verify a large number of statements simultaneously.[1]
Blum, Feldman, andMicali[2]showed in 1988 that a common reference string shared between the prover and the verifier is sufficient to achieve computational zero-knowledge without requiring interaction.Goldreichand Oren[3]gave impossibility results[clarification needed]for one shot zero-knowledge protocols in thestandard model. In 2003,Shafi GoldwasserandYael Tauman Kalaipublished an instance of an identification scheme for which any hash function will yield an insecure digital signature scheme.[4]
The model influences the properties that can be obtained from a zero-knowledge protocol. Pass[5]showed that in the common reference string model non-interactive zero-knowledge protocols do not preserve all of the properties of interactive zero-knowledge protocols; e.g., they do not preserve deniability. Non-interactive zero-knowledge proofs can also be obtained in therandom oracle modelusing theFiat–Shamir heuristic.[citation needed]
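As a concrete illustration of the Fiat–Shamir heuristic (a sketch, not drawn from the article), the interactive Schnorr identification protocol can be made non-interactive by deriving the verifier's challenge from a hash of the transcript. The group parameters below are tiny toy values chosen for readability only and offer no security:

```python
import hashlib
import secrets

p = 2039   # toy safe prime, p = 2q + 1 (NOT secure)
q = 1019   # prime order of the subgroup
g = 4      # generator of the order-q subgroup of Z_p*

def challenge(*vals) -> int:
    # Fiat-Shamir: a hash of the transcript replaces the verifier's random challenge.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)        # ephemeral nonce
    t = pow(g, k, p)                # commitment
    c = challenge(g, y, t)
    s = (k + c * x) % q             # response
    return y, (t, s)

def verify(y, proof) -> bool:
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)            # the prover's secret
y, proof = prove(x)
assert verify(y, proof)
```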
In 2012,Alessandro Chiesaet al developed the zk-SNARK protocol, an acronym forzero-knowledgesuccinct non-interactiveargument of knowledge.[6]The first widespread application of zk-SNARKs was in theZerocashblockchainprotocol, where zero-knowledge cryptography provides the computational backbone, by facilitating mathematical proofs that one party has possession of certain information without revealing what that information is.[7]Zcash utilized zk-SNARKs to facilitate four distinct transaction types: private, shielding, deshielding, and public. This protocol allowed users to determine how much data was shared with the public ledger for each transaction.[8]Ethereumzk-Rollups also utilize zk-SNARKs to increase scalability.[9]
In 2017,Bulletproofs[10]was released, which enable proving that a committed value is in a range using a logarithmic (in the bit length of the range) number of field and group elements.[11]Bulletproofs was later implemented intoMimblewimbleprotocol (the basis for Grin and Beam, andLitecoinvia extension blocks) andMonero cryptocurrency.[12]
In 2018, thezk-STARK(zero-knowledgeScalable TransparentArgument of Knowledge)[13]protocol was introduced by Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev,[14]offering transparency (no trusted setup), quasi-linear proving time, and poly-logarithmic verification time.Zero-Knowledge Succinct Transparent Arguments of Knowledgeare a type of cryptographic proof system that enables one party (the prover) to prove to another party (the verifier) that a certain statement is true, without revealing any additional information beyond the truth of the statement itself. zk-STARKs are succinct, meaning that they allow for the creation of short proofs that are easy to verify, and they are transparent, meaning that anyone can verify the proof without needing any secret information.[14]
Unlike the first generation of zk-SNARKs, zk-STARKs, by default, do not require a trusted setup, which makes them particularly useful for decentralized applications like blockchains. Additionally, zk-STARKs can be used to verify many statements at once, making them scalable and efficient.[1]
In 2019, HALO recursive zk-SNARKs without a trusted setup were presented.[15]Pickles[16]zk-SNARKs, based on the former construction, power Mina, the first succinctly verifiable blockchain.[17]
A list of zero-knowledge proof protocols and libraries is provided below along with comparisons based on transparency, universality, and plausible post-quantum security. A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms.
Originally,[2]non-interactive zero-knowledge was only defined as a single theorem-proof system. In such a system each proof requires its own fresh common reference string. A common reference string in general is not a random string. It may, for instance, consist of randomly chosen group elements that all protocol parties use. Although the group elements are random, the reference string is not as it contains a certain structure (e.g., group elements) that is distinguishable from randomness. Subsequently, Feige, Lapidot, andShamir[37]introduced multi-theorem zero-knowledge proofs as a more versatile notion for non-interactive zero-knowledge proofs.
Pairing-based cryptographyhas led to several cryptographic advancements. One of these advancements is more powerful and more efficient non-interactive zero-knowledge proofs. The seminal idea was to hide the values for the pairing evaluation in acommitment. Using different commitment schemes, this idea was used to build zero-knowledge proof systems under thesub-group hiding[38]and under thedecisional linear assumption.[39]These proof systems provecircuit satisfiability, and thus by theCook–Levin theoremallow proving membership for every language in NP. The size of the common reference string and the proofs is relatively small; however, transforming a statement into a boolean circuit incurs considerable overhead.
Proof systems under thesub-group hiding,decisional linear assumption, andexternal Diffie–Hellman assumptionthat allow directly proving the pairing product equations that are common inpairing-based cryptographyhave been proposed.[40]
Under strongknowledge assumptions, it is known how to create sublinear-length computationally-sound proof systems forNP-completelanguages. More precisely, the proof in such proof systems consists only of a small number ofbilinear groupelements.[41][42]
|
https://en.wikipedia.org/wiki/Non-interactive_zero-knowledge_proof
|
Web scraping,web harvesting, orweb data extractionisdata scrapingused forextracting datafromwebsites.[1]Web scraping software may directly access theWorld Wide Webusing theHypertext Transfer Protocolor a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using abotorweb crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central localdatabaseorspreadsheet, for laterretrievaloranalysis.
Scraping a web page involves fetching it and then extracting data from it. Fetching is the downloading of a page (which a browser does when a user views a page). Therefore, web crawling is a main component of web scraping, to fetch pages for later processing. Once a page has been fetched, extraction can take place. The content of a page may be parsed, searched and reformatted, and its data copied into a spreadsheet or loaded into a database. Web scrapers typically take something out of a page, to make use of it for another purpose somewhere else. An example would be finding and copying names and telephone numbers, companies and their URLs, or e-mail addresses to a list (contact scraping).
As well ascontact scraping, web scraping is used as a component of applications used forweb indexing,web mininganddata mining, online price change monitoring andprice comparison, product review scraping (to watch the competition), gathering real estate listings, weather data monitoring,website change detection, research, tracking online presence and reputation,web mashup, andweb data integration.
Web pagesare built using text-based mark-up languages (HTMLandXHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for humanend-usersand not for ease of automated use. As a result, specialized tools and software have been developed to facilitate the scraping of web pages. Web scraping applications includemarket research, price comparison, content monitoring, and more. Businesses rely on web scraping services to efficiently gather and utilize this data.
Newer forms of web scraping involve monitoringdata feedsfrom web servers. For example,JSONis commonly used as a transport mechanism between the client and the web server.
There are methods that some websites use to prevent web scraping, such as detecting and disallowing bots from crawling (viewing) their pages. In response, web scraping systems use techniques involvingDOMparsing,computer visionandnatural language processingto simulate human browsing to enable gathering web page content for offline parsing.
After thebirth of the World Wide Webin 1989, the first web robot,[2]World Wide Web Wanderer, was created in June 1993, which was intended only to measure the size of the web.
In December 1993, the first crawler-based web search engine,JumpStation, was launched. As there were fewer websites available on the web, search engines at that time used to rely on human administrators to collect and format links. In comparison, Jump Station was the first WWW search engine to rely on a web robot.
In 2000, the first Web API and API crawler were created. AnAPI(Application Programming Interface) is an interface that makes it much easier to develop a program by providing the building blocks. In 2000,SalesforceandeBaylaunched their own API, with which programmers could access and download some of the data available to the public. Since then, many websites offer web APIs for people to access their public database.
Web scraping is the process of automatically mining data or collecting information from the World Wide Web. It is a field with active developments sharing a common goal with thesemantic webvision, an ambitious initiative that still requires breakthroughs in text processing, semantic understanding, artificial intelligence andhuman-computer interactions.
The simplest form of web scraping is manually copying and pasting data from a web page into a text file or spreadsheet. Sometimes even the best web-scraping technology cannot replace a human's manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation.
A simple yet powerful approach to extract information from web pages can be based on the UNIXgrepcommand orregular expression-matching facilities of programming languages (for instancePerlorPython).
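A minimal Python sketch of this regex-based approach might look like the following (the URL and the e-mail pattern are placeholders):

```python
import re
from urllib.request import urlopen

# Fetch a page and pull out e-mail-like strings with a regular expression.
# Real pages are usually better handled with an HTML parser; this is only a sketch.
html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
print(emails)
```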
Staticanddynamic web pagescan be retrieved by posting HTTP requests to the remote web server usingsocket programming.
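A bare-bones sketch of such a request over a raw socket (the host name is a placeholder; production scrapers would normally use an HTTP client library instead):

```python
import socket

host = "example.com"
with socket.create_connection((host, 80)) as s:
    # A minimal HTTP/1.1 GET request, written to the socket by hand.
    s.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    chunks = []
    while data := s.recv(4096):
        chunks.append(data)

response = b"".join(chunks).decode("utf-8", errors="replace")
print(response.split("\r\n\r\n", 1)[0])    # response headers only
```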
Many websites have large collections of pages generated dynamically from an underlying structured source like a database. Data of the same category are typically encoded into similar pages by a common script or template. In data mining, a program that detects such templates in a particular information source, extracts its content, and translates it into a relational form, is called awrapper. Wrapper generation algorithms assume that input pages of a wrapper induction system conform to a common template and that they can be easily identified in terms of a URL common scheme.[3]Moreover, somesemi-structured dataquery languages, such asXQueryand the HTQL, can be used to parse HTML pages and to retrieve and transform page content.
By using a program such asSeleniumorPlaywright, developers can control a web browser such asChromeorFirefoxwherein they can load, navigate, and retrieve data from websites. This method can be especially useful for scraping data from dynamic sites since a web browser will fully load each page. Once an entire page is loaded, you can access and parse theDOMusing an expression language such asXPath.
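A minimal sketch using Selenium's Python bindings (Selenium 4 style; the exact driver setup depends on the installed browser and driver versions, and the URL and XPath are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive a real browser, let the page's scripts run, then read data out of the DOM.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    for heading in driver.find_elements(By.XPATH, "//h1"):
        print(heading.text)
finally:
    driver.quit()
```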
There are several companies that have developed vertical specific harvesting platforms. These platforms create and monitor a multitude of "bots" for specific verticals with no "man in the loop" (no direct human involvement), and no work related to a specific target site. The preparation involves establishing the knowledge base for the entire vertical and then the platform creates the bots automatically. The platform's robustness is measured by the quality of the information it retrieves (usually number of fields) and its scalability (how quick it can scale up to hundreds or thousands of sites). This scalability is mostly used to target theLong Tailof sites that common aggregators find complicated or too labor-intensive to harvest content from.
The pages being scraped may embracemetadataor semantic markups and annotations, which can be used to locate specific data snippets. If the annotations are embedded in the pages, asMicroformatdoes, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer,[4]are stored and managed separately from the web pages, so the scrapers can retrieve data schema and instructions from this layer before scraping the pages.
There are efforts usingmachine learningandcomputer visionthat attempt to identify and extract information from web pages by interpreting pages visually as a human being might.[5]
Some newer tools use advanced AI to interpret and process web page content contextually, extracting relevant information, transforming data, and customizing outputs based on the content's structure and meaning. This method enables more intelligent and flexible data extraction, accommodating complex and dynamic web content.
The legality of web scraping varies across the world. In general, web scraping may be against theterms of serviceof some websites, but the enforceability of these terms is unclear.[6]
In the United States, website owners can use three majorlegal claimsto prevent undesired web scraping: (1) copyright infringement (compilation), (2) violation of theComputer Fraud and Abuse Act("CFAA"), and (3)trespass to chattel.[7]However, the effectiveness of these claims relies upon meeting various criteria, and the case law is still evolving. For example, with regard to copyright, while outright duplication of original expression will in many cases be illegal, in the United States the courts ruled inFeist Publications v. Rural Telephone Servicethat duplication of facts is allowable.
U.S. courts have acknowledged that users of "scrapers" or "robots" may be held liable for committingtrespass to chattels,[8][9]which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. The best known of these cases,eBay v. Bidder's Edge, resulted in an injunction ordering Bidder's Edge to stop accessing, collecting, and indexing auctions from the eBay web site. This case involved automatic placing of bids, known asauction sniping. However, in order to succeed on a claim of trespass tochattels, theplaintiffmust demonstrate that thedefendantintentionally and without authorization interfered with the plaintiff's possessory interest in the computer system and that the defendant's unauthorized use caused damage to the plaintiff. Not all cases of web spidering brought before the courts have been considered trespass to chattels.[10]
One of the first major tests ofscreen scrapinginvolvedAmerican Airlines(AA), and a firm called FareChase.[11]AA successfully obtained aninjunctionfrom a Texas trial court, stopping FareChase from selling software that enables users to compare online fares if the software also searches AA's website. The airline argued that FareChase's websearch software trespassed on AA's servers when it collected the publicly available data. FareChase filed an appeal in March 2003. By June, FareChase and AA agreed to settle and the appeal was dropped.[12]
Southwest Airlines has also challenged screen-scraping practices, and has involved both FareChase and another firm, Outtask, in a legal claim. Southwest Airlines charged that the screen-scraping was illegal since it was an example of "Computer Fraud and Abuse" and had led to "Damage and Loss" and "Unauthorized Access" of Southwest's site, and that it also constituted "Interference with Business Relations", "Trespass", and "Harmful Access by Computer". Southwest also claimed that screen-scraping constitutes what is legally known as "Misappropriation and Unjust Enrichment", as well as being a breach of the web site's user agreement. Outtask denied all these claims, arguing that the prevailing law in this case should be US copyright law and that, under copyright, the pieces of information being scraped would not be subject to copyright protection. Although the cases were never resolved in the Supreme Court of the United States, FareChase was eventually shuttered by parent company Yahoo!, and Outtask was purchased by travel expense company Concur.[13] In 2012, a startup called 3Taps scraped classified housing ads from Craigslist. Craigslist sent 3Taps a cease-and-desist letter and blocked their IP addresses and later sued, in Craigslist v. 3Taps. The court held that the cease-and-desist letter and IP blocking were sufficient for Craigslist to properly claim that 3Taps had violated the Computer Fraud and Abuse Act (CFAA).
Although these are early scraping decisions, and the theories of liability are not uniform, it is difficult to ignore a pattern emerging that the courts are prepared to protect proprietary content on commercial sites from uses which are undesirable to the owners of such sites. However, the degree of protection for such content is not settled and will depend on the type of access made by the scraper, the amount of information accessed and copied, the degree to which the access adversely affects the site owner's system and the types and manner of prohibitions on such conduct.[14]
While the law in this area is becoming more settled, entities contemplating using scraping programs to access a public web site should also consider whether such action is authorized by reviewing the terms of use and other terms or notices posted on or made available through the site. In Cvent Inc. v. Eventbrite Inc. (2010), the United States District Court for the Eastern District of Virginia ruled that the terms of use should be brought to the users' attention in order for a browsewrap contract or license to be enforceable.[15] In a 2014 case, filed in the United States District Court for the Eastern District of Pennsylvania,[16] e-commerce site QVC objected to the Pinterest-like shopping aggregator Resultly's scraping of QVC's site for real-time pricing data. QVC alleged that Resultly "excessively crawled" QVC's retail site (allegedly sending 200–300 search requests to QVC's website per minute, sometimes up to 36,000 requests per minute), which caused QVC's site to crash for two days, resulting in lost sales for QVC.[17] QVC's complaint alleges that the defendant disguised its web crawler to mask its source IP address and thus prevented QVC from quickly repairing the problem. This is a particularly interesting scraping case because QVC is seeking damages for the unavailability of its website, which QVC claims was caused by Resultly.
On the plaintiff's web site during the period of this trial, the terms-of-use link was displayed among all the links of the site, at the bottom of the page, as on most sites on the internet. This ruling contradicts the Irish ruling described below. The court also rejected the plaintiff's argument that the browse-wrap restrictions were enforceable in view of Virginia's adoption of the Uniform Computer Information Transactions Act (UCITA)—a uniform law that many believed was in favor of common browse-wrap contracting practices.[18]
InFacebook, Inc. v. Power Ventures, Inc., a district court ruled in 2012 that Power Ventures could not scrape Facebook pages on behalf of a Facebook user. The case is on appeal, and theElectronic Frontier Foundationfiled a brief in 2015 asking that it be overturned.[19][20]InAssociated Press v. Meltwater U.S. Holdings, Inc., a court in the US held Meltwater liable for scraping and republishing news information from the Associated Press, but a court in the United Kingdom held in favor of Meltwater.
TheNinth Circuitruled in 2019 that web scraping did not violate the CFAA inhiQ Labs v. LinkedIn. The case was appealed to theUnited States Supreme Court, which returned the case to the Ninth Circuit to reconsider the case in light of the 2021 Supreme Court decision inVan Buren v. United Stateswhich narrowed the applicability of the CFAA.[21]On this review, the Ninth Circuit upheld their prior decision.[22]
Internet Archivecollects and distributes a significant number of publicly available web pages without being considered to be in violation of copyright laws.[citation needed]
In February 2006, theDanish Maritime and Commercial Court(Copenhagen) ruled that systematic crawling, indexing, and deep linking by portal site ofir.dk of real estate site Home.dk does not conflict with Danish law or the database directive of the European Union.[23]
In a February 2010 case complicated by matters of jurisdiction, Ireland's High Court delivered a verdict that illustrates theinchoatestate of developing case law. In the case ofRyanair Ltd v Billigfluege.de GmbH, Ireland's High Court ruledRyanair's"click-wrap" agreement to be legally binding. In contrast to the findings of the United States District Court Eastern District of Virginia and those of the Danish Maritime and Commercial Court, JusticeMichael Hannaruled that the hyperlink to Ryanair's terms and conditions was plainly visible, and that placing the onus on the user to agree to terms and conditions in order to gain access to online services is sufficient to comprise a contractual relationship.[24]The decision is under appeal in Ireland's Supreme Court.[25]
On April 30, 2020, the French Data Protection Authority (CNIL) released new guidelines on web scraping.[26]The CNIL guidelines made it clear that publicly available data is still personal data and cannot be repurposed without the knowledge of the person to whom that data belongs.[27]
In Australia, theSpam Act 2003outlaws some forms of web harvesting, although this only applies to email addresses.[28][29]
Leaving aside a few cases dealing with IPR infringement, Indian courts have not expressly ruled on the legality of web scraping. However, since all common forms of electronic contracts are enforceable in India, violating terms of use that prohibit data scraping will be a violation of contract law. It will also violate the Information Technology Act, 2000, which penalizes unauthorized access to a computer resource or extracting data from a computer resource.
The administrator of a website can use various measures to stop or slow a bot. Common techniques include blocking or rate-limiting suspicious IP addresses, requiring users to solve CAPTCHAs, disallowing crawlers via robots.txt, and obfuscating or frequently changing the site's HTML structure.
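As an illustration of one such measure, the sketch below implements a simple sliding-window rate limiter that an administrator might place in front of a site to throttle clients that request pages faster than a human plausibly would. The class name, thresholds, and the example IP address are all hypothetical.

```python
import time
from collections import defaultdict, deque

# Hypothetical sliding-window rate limiter: reject a client IP that makes
# more than `max_requests` requests within `window_seconds`.
class RateLimiter:
    def __init__(self, max_requests=60, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window_seconds:
            q.popleft()
        if len(q) >= self.max_requests:
            return False          # likely a bot: throttle or block
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=1.0)
print([limiter.allow("203.0.113.7", now=t * 0.1) for t in range(8)])
```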
|
https://en.wikipedia.org/wiki/Web_scraping
|
In probability theory and statistics, aMarkov chainorMarkov processis astochastic processdescribing asequenceof possible events in which theprobabilityof each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairsnow." Acountably infinitesequence, in which the chain moves state at discrete time steps, gives adiscrete-time Markov chain(DTMC). Acontinuous-timeprocess is called acontinuous-time Markov chain(CTMC). Markov processes are named in honor of theRussianmathematicianAndrey Markov.
Markov chains have many applications asstatistical modelsof real-world processes.[1]They provide the basis for general stochastic simulation methods known asMarkov chain Monte Carlo, which are used for simulating sampling from complexprobability distributions, and have found application in areas includingBayesian statistics,biology,chemistry,economics,finance,information theory,physics,signal processing, andspeech processing.[1][2][3]
The adjectivesMarkovianandMarkovare used to describe something that is related to a Markov process.[4]
A Markov process is astochastic processthat satisfies theMarkov property(sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.[5]In other words,conditionalon the present state of the system, its future and past states areindependent.
A Markov chain is a type of Markov process that has either a discretestate spaceor a discrete index set (often representing time), but the precise definition of a Markov chain varies.[6]For example, it is common to define a Markov chain as a Markov process in eitherdiscrete or continuous timewith a countable state space (thus regardless of the nature of time),[7][8][9][10]but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[6]
The system'sstate spaceand time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time:
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, adiscrete-time Markov chain (DTMC),[11]but a few authors use the term "Markov process" to refer to acontinuous-time Markov chain (CTMC)without explicit mention.[12][13][14]In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (seeMarkov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term.
While the time parameter is usually discrete, thestate spaceof a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15]However, many applications of Markov chains employ finite orcountably infinitestate spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (seeVariations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, atransition matrixdescribing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are theintegersornatural numbers, and the random process is a mapping of these to states. The Markov property states that theconditional probability distributionfor the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.[16][17][18] Markov processes in continuous time were discovered long before his work in the early 20th century, in the form of the Poisson process.[19][20][21] Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold.[22] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, thus proving a weak law of large numbers without the independence assumption,[16][17][18] which had been commonly regarded as a requirement for such mathematical laws to hold.[18] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[16]
In 1912Henri Poincaréstudied Markov chains onfinite groupswith an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced byPaulandTatyana Ehrenfestin 1907, and a branching process, introduced byFrancis GaltonandHenry William Watsonin 1873, preceding the work of Markov.[16][17]After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier byIrénée-Jules Bienaymé.[23]Starting in 1928,Maurice Fréchetbecame interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24]
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[25][26] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[25][27] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[25][28] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[29] The differential equations are now called the Kolmogorov equations[30] or the Kolmogorov–Chapman equations.[31] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[26]
Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. IfXn{\displaystyle X_{n}}represents the total value of the coins set on the table afterndraws, withX0=0{\displaystyle X_{0}=0}, then the sequence{Xn:n∈N}{\displaystyle \{X_{n}:n\in \mathbb {N} \}}isnota Markov process.
To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. ThusX6=$0.50{\displaystyle X_{6}=\$0.50}. If we know not justX6{\displaystyle X_{6}}, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine thatX7≥$0.60{\displaystyle X_{7}\geq \$0.60}with probability 1. But if we do not know the earlier values, then based only on the valueX6{\displaystyle X_{6}}we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses aboutX7{\displaystyle X_{7}}are impacted by our knowledge of values prior toX6{\displaystyle X_{6}}.
However, it is possible to model this scenario as a Markov process. Instead of definingXn{\displaystyle X_{n}}to represent thetotal valueof the coins on the table, we could defineXn{\displaystyle X_{n}}to represent thecountof the various coin types on the table. For instance,X6=1,0,5{\displaystyle X_{6}=1,0,5}could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by6×6×6=216{\displaystyle 6\times 6\times 6=216}possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in stateX1=0,1,0{\displaystyle X_{1}=0,1,0}. The probability of achievingX2{\displaystyle X_{2}}now depends onX1{\displaystyle X_{1}}; for example, the stateX2=1,0,1{\displaystyle X_{2}=1,0,1}is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of theXn=i,j,k{\displaystyle X_{n}=i,j,k}state depends exclusively on the outcome of theXn−1=ℓ,m,p{\displaystyle X_{n-1}=\ell ,m,p}state.
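A minimal simulation of this example is sketched below; the coin values and the choice of tracking the triple (quarters, dimes, nickels) follow the text above, while the function names and the fixed random seed are illustrative.

```python
import random

# Illustrative simulation of the coin-purse example: drawing without
# replacement from five quarters, five dimes and five nickels. The count
# state (q, d, n) drawn so far is Markov; the running total alone is not.
def draw_sequence(rng):
    purse = [25] * 5 + [10] * 5 + [5] * 5
    rng.shuffle(purse)
    counts, total = (0, 0, 0), 0
    history = []
    for coin in purse[:6]:                      # first six draws
        q, d, n = counts
        counts = (q + (coin == 25), d + (coin == 10), n + (coin == 5))
        total += coin
        history.append((counts, total))
    return history

rng = random.Random(0)
for counts, total in draw_sequence(rng):
    print(counts, "total =", total, "cents")
```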
A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr(Xn+1 = x | X1 = x1, X2 = x2, ..., Xn = xn) = Pr(Xn+1 = x | Xn = xn).
The possible values ofXiform acountable setScalled the state space of the chain.
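A discrete-time chain like this can be sampled directly from its transition matrix. The sketch below uses a hypothetical two-state "sunny"/"rainy" chain purely as an illustration; the transition probabilities are made up.

```python
import random

# Minimal discrete-time Markov chain sampler. The transition matrix below
# (a hypothetical "sunny"/"rainy" chain) is only an illustration.
states = ["sunny", "rainy"]
P = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def simulate(start, n_steps, rng):
    x, path = start, [start]
    for _ in range(n_steps):
        nxt = rng.choices(states, weights=[P[x][s] for s in states])[0]
        path.append(nxt)
        x = nxt
    return path

print(simulate("sunny", 10, random.Random(42)))
```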
A continuous-time Markov chain (Xt)t≥ 0is defined by a finite or countable state spaceS, atransition rate matrixQwith dimensions equal to that of the state space and initial probability distribution defined on the state space. Fori≠j, the elementsqijare non-negative and describe the rate of the process transitions from stateito statej. The elementsqiiare chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.
There are three equivalent definitions of the process.[40]
LetXt{\displaystyle X_{t}}be the random variable describing the state of the process at timet, and assume the process is in a stateiat timet.
Then, knowingXt=i{\displaystyle X_{t}=i},Xt+h=j{\displaystyle X_{t+h}=j}is independent of previous values(Xs:s<t){\displaystyle \left(X_{s}:s<t\right)}, and ash→ 0 for alljand for allt,Pr(X(t+h)=j∣X(t)=i)=δij+qijh+o(h),{\displaystyle \Pr(X(t+h)=j\mid X(t)=i)=\delta _{ij}+q_{ij}h+o(h),}whereδij{\displaystyle \delta _{ij}}is theKronecker delta, using thelittle-o notation.
Theqij{\displaystyle q_{ij}}can be seen as measuring how quickly the transition fromitojhappens.
Define a discrete-time Markov chainYnto describe thenth jump of the process and variablesS1,S2,S3, ... to describe holding times in each of the states whereSifollows theexponential distributionwith rate parameter −qYiYi.
For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... and all states recorded at these times i0, i1, i2, i3, ... it holds that

Pr(X(tn+1) = in+1 | X(t0) = i0, X(t1) = i1, ..., X(tn) = in) = p_{in in+1}(tn+1 − tn),

where pij is the solution of the forward equation (a first-order differential equation)

P′(t) = P(t) Q,

with initial condition P(0) = I, the identity matrix.
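The jump-chain/holding-time definition above translates directly into a simulation: hold in state i for an exponential time with rate −q_ii, then jump according to the normalized off-diagonal rates. The sketch below uses an arbitrary illustrative three-state rate matrix.

```python
import random

# Sketch of simulating a continuous-time Markov chain from its rate matrix Q,
# using the jump-chain / holding-time characterization: stay in state i for an
# Exponential(-q_ii) time, then jump to j with probability q_ij / (-q_ii).
# The 3-state Q below is purely illustrative.
Q = [[-2.0, 1.5, 0.5],
     [ 1.0, -3.0, 2.0],
     [ 0.5, 0.5, -1.0]]

def simulate_ctmc(Q, start, t_max, rng):
    t, i, path = 0.0, start, [(0.0, start)]
    while True:
        rate_out = -Q[i][i]
        t += rng.expovariate(rate_out)          # exponential holding time
        if t >= t_max:
            return path
        weights = [Q[i][j] if j != i else 0.0 for j in range(len(Q))]
        i = rng.choices(range(len(Q)), weights=weights)[0]
        path.append((t, i))

print(simulate_ctmc(Q, start=0, t_max=5.0, rng=random.Random(1)))
```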
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to

pij = Pr(Xn+1 = j | Xn = i).
Since each row ofPsums to one and all elements are non-negative,Pis aright stochastic matrix.
A stationary distribution π is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix P on it; it is therefore defined by

π = πP.
By comparing this definition with that of an eigenvector we see that the two concepts are related and that

π = e / (∑i ei)
is a normalized (∑iπi=1{\textstyle \sum _{i}\pi _{i}=1}) multiple of a left eigenvectoreof the transition matrixPwith aneigenvalueof 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
The values of a stationary distributionπi{\displaystyle \textstyle \pi _{i}}are associated with the state space ofPand its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as∑i1⋅πi=1{\textstyle \sum _{i}1\cdot \pi _{i}=1}we see that thedot productof π with a vector whose components are all 1 is unity and that π lies on asimplex.
If the Markov chain is time-homogeneous, then the transition matrixPis the same after each step, so thek-step transition probability can be computed as thek-th power of the transition matrix,Pk.
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π.[41] Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution π:

lim(k→∞) Pk = 1π,
where1is the column vector with all entries equal to 1. This is stated by thePerron–Frobenius theorem. If, by whatever means,limk→∞Pk{\textstyle \lim _{k\to \infty }\mathbf {P} ^{k}}is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.
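The sketch below illustrates these statements numerically for a small, made-up three-state chain, estimating π both from a high matrix power and as the left eigenvector for eigenvalue 1 (NumPy is assumed to be available).

```python
import numpy as np

# Sketch: for an irreducible, aperiodic chain, every row of P^k approaches the
# stationary distribution pi. The matrix is a hypothetical 3-state example.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Approach 1: take a high matrix power and read off any row.
pi_power = np.linalg.matrix_power(P, 50)[0]

# Approach 2: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi_eig = v / v.sum()

print(pi_power, pi_eig)   # the two estimates should agree
```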
For some stochastic matrices P, the limit lim(k→∞) Pk does not exist while the stationary distribution does, as shown by this example:

P = [[0, 1], [1, 0]], for which P^(2k) = I and P^(2k+1) = P for every k, yet π = (1/2, 1/2) satisfies π = πP.
(This example illustrates a periodic Markov chain.)
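The behaviour can be checked numerically; the sketch below uses a two-state "flip" chain, the canonical periodic example: even powers of P are the identity and odd powers are P itself, while π = (1/2, 1/2) is nevertheless stationary.

```python
import numpy as np

# A two-state "flip" chain (chosen here for illustration): it is periodic with
# period 2, so P^k alternates and never converges, yet pi = (1/2, 1/2) still
# satisfies pi P = pi.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.linalg.matrix_power(P, 10))   # identity matrix
print(np.linalg.matrix_power(P, 11))   # P itself: no limit exists
pi = np.array([0.5, 0.5])
print(pi @ P)                          # [0.5, 0.5] -- stationary nonetheless
```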
Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. LetPbe ann×nmatrix, and defineQ=limk→∞Pk.{\textstyle \mathbf {Q} =\lim _{k\to \infty }\mathbf {P} ^{k}.}
It is always true that

QP = Q.
Subtracting Q from both sides and factoring then yields

Q(P − In) = 0n,n,
where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n + 1 equations for determining n unknowns. In practice, one can replace one column of (P − In) with a column of ones, replace the corresponding element (the one in the same column) of the zero row vector with a one, and then multiply this modified row vector by the inverse of the transformed matrix; the result is a row of Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]−1 exists then[42][41]

Q = f(0n,n) [f(P − In)]−1.
One thing to notice is that ifPhas an elementPi,ion its main diagonal that is equal to 1 and theith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powersPk. Hence, theith row or column ofQwill have the 1 and the 0's in the same positions as inP.
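A direct transcription of this recipe is sketched below, assuming NumPy and an illustrative three-state matrix for which the required inverse exists.

```python
import numpy as np

# Sketch of the recipe above: replace the right-most column of (P - I) with
# ones, invert, and every row of Q = f(0)[f(P - I)]^{-1} is the stationary
# distribution. Assumes the inverse exists (e.g. an irreducible aperiodic chain).
def f(A):
    B = A.copy()
    B[:, -1] = 1.0                      # right-most column replaced by 1's
    return B

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
n = P.shape[0]
Q = f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))
print(Q)                                # each row is the stationary distribution
```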
As stated earlier, from the equation π = πP (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P is diagonalizable or, equivalently, that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.[43])
Let U be the matrix of eigenvectors (each normalized to have an L2 norm equal to 1), where each column is a left eigenvector of P, and let Σ be the diagonal matrix of eigenvalues of P, that is, Σ = diag(λ1, λ2, λ3, ..., λn). Then by eigendecomposition
Let the eigenvalues be enumerated such that:

1 = |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|.
Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other π which solves the stationary distribution equation above). Let ui be the i-th column of the matrix U, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length-n row vector that represents a valid probability distribution; since the eigenvectors ui span Rn, we can write

x = a1 u1 + a2 u2 + ... + an un.
If we multiply x by P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, xPk = xPP...P → a1 u1 = π as k → ∞. That means

π(k) = xPk = a1 λ1^k u1 + a2 λ2^k u2 + ... + an λn^k un.

Since π is parallel to u1 (normalized by L2 norm) and π(k) is a probability vector, π(k) approaches a1 u1 = π as k → ∞ with a speed on the order of (λ2/λ1)^k, that is, exponentially. This follows because |λ2| ≥ ... ≥ |λn|, hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence is.[44] Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[45]
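The geometric decay can be observed numerically; the sketch below compares the distance from xPk to π with |λ2|^k for the same illustrative three-state matrix used earlier (NumPy assumed).

```python
import numpy as np

# Sketch: the distance from x P^k to the stationary distribution shrinks
# roughly like |lambda_2|^k. Uses the same illustrative 3-state matrix.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
pi = np.linalg.matrix_power(P, 200)[0]          # reference stationary vector

x = np.array([1.0, 0.0, 0.0])                   # arbitrary starting distribution
for k in (1, 5, 10, 20):
    dist = np.abs(x @ np.linalg.matrix_power(P, k) - pi).sum()
    print(k, dist, eigvals[1] ** k)             # decay tracks |lambda_2|^k
```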
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space throughHarris chains.
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.
"Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form.
Seeinteracting particle systemandstochastic cellular automata(probabilistic cellular automata).
See, for instance, Interaction of Markov Processes.[46][47]
Two states are said tocommunicatewith each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class isclosedif the probability of leaving the class is zero. A Markov chain isirreducibleif there is one communicating class, the state space.
A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is:

k = gcd{ n > 0 : Pr(Xn = i | X0 = i) > 0 }.
The state isperiodicifk>1{\displaystyle k>1}; otherwisek=1{\displaystyle k=1}and the state isaperiodic.
A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise.[48] For a recurrent state i, the mean hitting time is defined as

Mi = E[Ti],

where Ti is the first return time to state i.
Stateiispositive recurrentifMi{\displaystyle M_{i}}is finite andnull recurrentotherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property.[49]
A stateiis calledabsorbingif there are no outgoing transitions from the state.
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50]
If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given byπi=1/E[Ti]{\displaystyle \pi _{i}=1/E[T_{i}]}.
A stateiis said to beergodicif it is aperiodic and positive recurrent. In other words, a stateiis ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time.
If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integerk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive.
It can be shown that a finite-state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in a number of steps less than or equal to N. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.
Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones.[51]In fact, merely irreducible Markov chains correspond toergodic processes, defined according toergodic theory.[52]
Some authors call a matrixprimitiveif there exists some integerk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive.[53]Some authors call itregular.[54]
Theindex of primitivity, orexponent, of a regular matrix, is the smallestk{\displaystyle k}such that all entries ofMk{\displaystyle M^{k}}are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry ofM{\displaystyle M}is zero or positive, and therefore can be found on a directed graph withsign(M){\displaystyle \mathrm {sign} (M)}as its adjacency matrix.
There are several combinatorial results about the exponent when there are finitely many states. Let n be the number of states; then, by Wielandt's theorem, the exponent is at most (n − 1)^2 + 1.[55]
If a Markov chain has a stationary distribution, then it can be converted to ameasure-preserving dynamical system: Let the probability space beΩ=ΣN{\displaystyle \Omega =\Sigma ^{\mathbb {N} }}, whereΣ{\displaystyle \Sigma }is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. LetT:Ω→Ω{\displaystyle T:\Omega \to \Omega }be the shift operator:T(X0,X1,…)=(X1,…){\displaystyle T(X_{0},X_{1},\dots )=(X_{1},\dots )}. Similarly we can construct such a dynamical system withΩ=ΣZ{\displaystyle \Omega =\Sigma ^{\mathbb {Z} }}instead.[57]
SinceirreducibleMarkov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains.
Inergodic theory, a measure-preserving dynamical system is calledergodicif any measurable subsetS{\displaystyle S}such thatT−1(S)=S{\displaystyle T^{-1}(S)=S}impliesS=∅{\displaystyle S=\emptyset }orΩ{\displaystyle \Omega }(up to a null set).
The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain isirreducibleif its corresponding measure-preserving dynamical system isergodic.[52]
In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, letXbe a non-Markovian process. Then define a processY, such that each state ofYrepresents a time-interval of states ofX. Mathematically, this takes the form:
IfYhas the Markov property, then it is a Markovian representation ofX.
An example of a non-Markovian process with a Markovian representation is anautoregressivetime seriesof order greater than one.[58]
Thehitting timeis the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.
For a subset of states A ⊆ S, the vector kA of hitting times (where element kiA represents the expected value, starting in state i, of the time until the chain enters one of the states in the set A) is the minimal non-negative solution to[59]

kiA = 0 for i ∈ A, and −∑(j ∈ S) qij kjA = 1 for i ∉ A.
For a CTMCXt, the time-reversed process is defined to beX^t=XT−t{\displaystyle {\hat {X}}_{t}=X_{T-t}}. ByKelly's lemmathis process has the same stationary distribution as the forward process.
A chain is said to bereversibleif the reversed process is the same as the forward process.Kolmogorov's criterionstates that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

sij = qij / ∑(k ≠ i) qik for i ≠ j, and sii = 0.
From this, S may be written as

S = I − (diag(Q))−1 Q,
whereIis theidentity matrixand diag(Q) is thediagonal matrixformed by selecting themain diagonalfrom the matrixQand setting all other elements to zero.
To find the stationary probability distribution vector, we must next find φ such that

φS = φ,
with φ being a row vector, such that all elements in φ are greater than 0 and ‖φ‖1 = 1. From this, π may be found as

π = −φ (diag(Q))−1 / ‖φ (diag(Q))−1‖1.
(Smay be periodic, even ifQis not. Onceπis found, it must be normalized to aunit vector.)
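A numerical sketch of this recipe is given below for an illustrative three-state rate matrix Q: build S, find its stationary vector φ, and reweight by the expected holding times 1/(−q_ii) to recover π (NumPy assumed).

```python
import numpy as np

# Sketch of the embedded-chain recipe above: build S = I - diag(Q)^{-1} Q,
# find its stationary vector phi, then reweight by the expected holding times
# 1/(-q_ii) and normalize to recover pi. The rate matrix Q is illustrative.
Q = np.array([[-2.0, 1.5, 0.5],
              [ 1.0, -3.0, 2.0],
              [ 0.5, 0.5, -1.0]])

D_inv = np.diag(1.0 / np.diag(Q))               # (diag Q)^{-1}
S = np.eye(3) - D_inv @ Q                        # jump-chain transition matrix

eigvals, eigvecs = np.linalg.eig(S.T)            # phi S = phi (left eigenvector)
phi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
phi = phi / phi.sum()

pi = -phi @ D_inv
pi = pi / pi.sum()
print(pi, pi @ Q)                                # pi Q should be ~0
```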
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observingX(t) at intervals of δ units of time. The random variablesX(0),X(δ),X(2δ), ... give the sequence of states visited by the δ-skeleton.
Markov models are used to model changing systems. There are four main types of models, which generalize Markov chains depending on whether every sequential state is observable or not, and on whether the system is to be adjusted on the basis of observations made: when the system is autonomous and fully observable, the model is an ordinary Markov chain; when it is autonomous but only partially observable, it is a hidden Markov model; when it is controlled and fully observable, it is a Markov decision process; and when it is controlled and only partially observable, it is a partially observable Markov decision process.
ABernoulli schemeis a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as aBernoulli process.
Note, however, by theOrnstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme;[60]thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states thatanystationary stochastic processis isomorphic to a Bernoulli scheme; the Markov chain is just one such example.
When the Markov matrix is replaced by theadjacency matrixof afinite graph, the resulting shift is termed atopological Markov chainor asubshift of finite type.[60]A Markov matrix that is compatible with the adjacency matrix can then provide ameasureon the subshift. Many chaoticdynamical systemsare isomorphic to topological Markov chains; examples includediffeomorphismsofclosed manifolds, theProuhet–Thue–Morse system, theChacon system,sofic systems,context-free systemsandblock-coding systems.[60]
Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends,[61]wind power,[62]stochastic terrorism,[63][64]andsolar irradiance.[65]The Markov chain forecasting models utilize a variety of settings, from discretizing the time series,[62]to hidden Markov models combined with wavelets,[61]and the Markov chain mixture distribution model (MCM).[65]
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant and that no relevant history need be considered which is not already included in the state description.[66][67] For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.[67]
Markov chains are used inlattice QCDsimulations.[68]
A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[69]Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large numbernof molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time isntimes the probability a given molecule is in that state.
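For the A → B example, the chain on the number of remaining A molecules can be simulated in the Gillespie style sketched below; the molecule count and rate constant are arbitrary illustrative values.

```python
import random

# Sketch of the A -> B example as a continuous-time Markov chain on the number
# of A molecules: with n_A molecules left and per-molecule rate k, the next
# conversion happens after an Exponential(n_A * k) waiting time (Gillespie-style).
def simulate_conversion(n_molecules=100, k=0.5, rng=random.Random(7)):
    t, n_a, trajectory = 0.0, n_molecules, [(0.0, n_molecules)]
    while n_a > 0:
        t += rng.expovariate(n_a * k)       # time to the next A -> B event
        n_a -= 1
        trajectory.append((t, n_a))
    return trajectory

traj = simulate_conversion()
print(traj[:3], "...", traj[-1])
```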
The classical model of enzyme activity,Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.[70]
An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicalsin silicotowards a desired class of compounds such as drugs or natural products.[71]As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.[72]
Also, the growth (and composition) ofcopolymersmay be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due tosteric effects, second-order Markov effects may also play a role in the growth of some polymer chains.
Similarly, it has been suggested that the crystallization and growth of some epitaxialsuperlatticeoxide materials can be accurately described by Markov chains.[73]
Markov chains are used in various areas of biology. Notable examples include:
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.[citation needed]
Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,[76][77][78][79] including by modeling the two states of clear and cloudy sky as a two-state Markov chain.[80][81]
Hidden Markov modelshave been used inautomatic speech recognitionsystems.[82]
Markov chains are used throughout information processing.Claude Shannon's famous 1948 paperA Mathematical Theory of Communication, which in a single step created the field ofinformation theory, opens by introducing the concept ofentropyby modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.[83]Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effectivedata compressionthroughentropy encodingtechniques such asarithmetic coding. They also allow effectivestate estimationandpattern recognition. Markov chains also play an important role inreinforcement learning.
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use theViterbi algorithmfor error correction), speech recognition andbioinformatics(such as in rearrangements detection[84]).
TheLZMAlossless data compression algorithm combines Markov chains withLempel-Ziv compressionto achieve very high compression ratios.
Markov chains are the basis for the analytical treatment of queues (queueing theory).Agner Krarup Erlanginitiated the subject in 1917.[85]This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[86]
Numerous queueing models use continuous-time Markov chains. For example, anM/M/1 queueis a CTMC on the non-negative integers where upward transitions fromitoi+ 1 occur at rateλaccording to aPoisson processand describe job arrivals, while transitions fromitoi– 1 (fori> 1) occur at rateμ(job service times are exponentially distributed) and describe completed services (departures) from the queue.
The PageRank of a webpage as used by Google is defined by a Markov chain.[87][88][89] It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has ki outgoing links, then the transition probability from i is α/ki + (1 − α)/N for each page that i links to, and (1 − α)/N for each page that it does not link to. The parameter α, the probability of following a link rather than jumping to a random page, is commonly taken to be about 0.85, so that 1 − α is about 0.15.[90]
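A minimal power-iteration sketch of this chain is shown below; the four-page link graph is invented for illustration, and α = 0.85 is the conventional choice of damping factor (NumPy assumed).

```python
import numpy as np

# Sketch of PageRank as the stationary distribution of the random-surfer chain:
# with probability alpha follow one of the page's outgoing links uniformly,
# otherwise jump to a uniformly random page. The 4-page link graph is made up.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to
N, alpha = 4, 0.85

P = np.full((N, N), (1 - alpha) / N)
for i, outgoing in links.items():
    for j in outgoing:
        P[i, j] += alpha / len(outgoing)

rank = np.full(N, 1.0 / N)
for _ in range(100):                              # power iteration
    rank = rank @ P
print(rank)                                       # PageRank scores, summing to 1
```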
Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.[citation needed]
Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process calledMarkov chain Monte Carlo(MCMC). In recent years this has revolutionized the practicability ofBayesian inferencemethods, allowing a wide range ofposterior distributionsto be simulated and their parameters found numerically.[citation needed]
In 1971 aNaval Postgraduate SchoolMaster's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain andLanchester's laws.[91]
In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."[92]
Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes.D. G. Champernownebuilt a Markov chain model of the distribution of income in 1953.[93]Herbert A. Simonand co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.[94]Louis Bachelierwas the first to observe that stock prices followed a random walk.[95]The random walk was later seen as evidence in favor of theefficient-market hypothesisand random walk models were popular in the literature of the 1960s.[96]Regime-switching models of business cycles were popularized byJames D. Hamilton(1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).[97]A more recent example is theMarkov switching multifractalmodel ofLaurent E. Calvetand Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[98][99]It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.
Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in ageneral equilibriumsetting.[100]
Credit rating agenciesproduce annual tables of the transition probabilities for bonds of different credit ratings.[101]
Markov chains are generally used in describingpath-dependentarguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due toKarl Marx'sDas Kapital, tyingeconomic developmentto the rise ofcapitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of themiddle class, the ratio of urban to rural residence, the rate ofpoliticalmobilization, etc., will generate a higher probability of transitioning fromauthoritariantodemocratic regime.[102]
Markov chains are employed inalgorithmic music composition, particularly insoftwaresuch asCsound,Max, andSuperCollider. In a first-order chain, the states of the system become note or pitch values, and aprobability vectorfor each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could beMIDInote values, frequency (Hz), or any other desirable metric.[103]
A second-order Markov chain can be introduced by considering the current stateandalso the previous state, as indicated in the second table. Higher,nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense ofphrasalstructure, rather than the 'aimless wandering' produced by a first-order system.[104]
Markov chains can be used structurally, as in Xenakis's Analogique A and B.[105]Markov chains are also used in systems which use a Markov model to react interactively to music input.[106]
Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[107]
Markov chains can be used to model many games of chance. The children's gamesSnakes and Laddersand "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).[citation needed]
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits a Markov chain state when the number of runners and outs is considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created both for individual players and for a team.[108] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing, and differences when playing on grass vs. AstroTurf.[109]
Markov processes can also be used togenerate superficially real-looking textgiven a sample document. Markov processes are used in a variety of recreational "parody generator" software (seedissociated press, Jeff Harrison,[110]Mark V. Shaney,[111][112]and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
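A minimal first-order, word-level generator of this kind is sketched below; the training sentence, seed word, and chain length are arbitrary.

```python
import random
from collections import defaultdict

# Minimal first-order, word-level Markov text generator: record which words
# follow each word in a sample text, then sample a chain from those counts.
sample = ("the quick brown fox jumps over the lazy dog "
          "the lazy dog sleeps while the quick fox runs")

transitions = defaultdict(list)
words = sample.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start, length, rng):
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:                     # dead end: no observed successor
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the", 12, random.Random(3)))
```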
|
https://en.wikipedia.org/wiki/Markov_chains
|
Fritz Erich Fellgiebel (4 October 1886 – 4 September 1944) was a German Army general of signals and a resistance fighter, participating in both the 1938 September Conspiracy to topple dictator Adolf Hitler and the Nazi Party, and the 1944 20 July plot to assassinate the Führer. In 1929, Fellgiebel became head of the cipher bureau (German: Chiffrierstelle) of the Ministry of the Reichswehr, which would later become the OKW/Chi. He was a signals specialist and was instrumental in introducing a common enciphering machine, the Enigma machine. However, he was unsuccessful in promoting a single cipher agency to coordinate all operations, as was demanded by OKW/Chi; this remained blocked by Joachim von Ribbentrop, Heinrich Himmler and Hermann Göring until autumn 1943. It was not achieved until General Albert Praun took over the post[1] following Fellgiebel's arrest and execution for his role in the 20 July attempted coup.
Fellgiebel was born inPöpelwitz[2](Present-day Popowice inWrocław,Poland) in the PrussianProvince of Silesia. At the age of 18, he joined a signals battalion in thePrussian Armyas an officer cadet. During theFirst World War, he served as a captain on the General Staff. After the war, he was assigned toBerlinas a General Staff officer of theReichswehr. His service had been exemplary, and in 1928 he was promoted to the rank ofmajor.
Fellgiebel was promotedlieutenant colonelin 1933 and became a fullcolonel(Oberst) the following year. By 1938, he was amajor general. That year, he was appointed Chief of the Army's Signal Establishment and Chief of theWehrmacht's communications liaison to the Supreme Command (OKW). Fellgiebel becameGeneral der Nachrichtentruppe(General of the Communications Troops) on 1 August 1940.
In 1942, Fellgiebel was promoted to Chief Signal Officer of Army High Command and of Supreme Command of Armed Forces (German:Chef des Heeresnachrichtenwesens), a position he held until 1944 when he was arrested and executed for his key role in the20 July plotto assassinate Hitler.[3]
Adolf Hitler did not fully trust Fellgiebel; Hitler considered him too independent-minded, but Hitler needed Fellgiebel's expertise. Fellgiebel was one of the first to understand that the German military should adopt and use theEnigmaencryptionmachine. As head of Hitler's signal services, Fellgiebel knew everymilitary secret, includingWernher von Braun's rocketry work at thePeenemünde Army Research Center.
Through his acquaintance with Colonel GeneralLudwig Beck, his superior, and then Beck's successor, Colonel-GeneralFranz Halder, Fellgiebel contacted the anti-Naziresistancegroup in theWehrmachtarmed forces. In the 1938 September Conspiracy to topple Hitler and the Nazi party on the eve of theMunich Agreement, he was supposed to cut communications throughout Germany while Field MarshalErwin von Witzlebenwould occupy Berlin.
He was a key source for theRed Orchestra. Fellgiebel released classified German military information toRudolf Roessler(codename "Lucy" of theLucy spy ring) aboutOperation Citadelwhich allowed Soviet forces to deploy effectively.[4]
Fellgiebel was involved in the preparations forOperation Valkyrieand during the attempt on theFührer's life on 20 July 1944[5]tried to cut Hitler'sheadquartersat theWolf's LairinEast Prussiaoff from all telecommunication connections. He only partly succeeded, as he could not prevent the informing ofJoseph Goebbelsin Berlin via separateSSlinks. When it became clear that the attempt had failed, Fellgiebel had to override the communications black-out he had set up. Fellgiebel's most famous act that day was his telephone report to his co-conspirator GeneralFritz Thieleat theBendlerblock, after he was informed that Hitler was still alive:"Etwas Furchtbares ist passiert! Der Führer lebt!"("Something awful has happened! TheFührerlives!").
Fellgiebel was arrested immediately at the Wolf's Lair and tortured for three weeks but did not reveal any names of his co-conspirators.[6]He was charged before theVolksgerichtshof("People's Court"). On 10 August 1944, he was found guilty byRoland Freislerand sentenced to death. He was executed on 4 September 1944 atPlötzensee Prisonin Berlin.
TheBundeswehr's barracks, Information Technology School of the Bundeswehr ("Schule Informationstechnik der Bundeswehr") inPöcking, is named theGeneral-Fellgiebel-Kasernein his honour.
|
https://en.wikipedia.org/wiki/Erich_Fellgiebel
|
Multi-SIMtechnology allows cloning up to 12GSMSIMcards (of formatCOMP128v1) into one card. The subscriber can leave the original cards in a secure place and use only the multi-SIM card in day to day life.
For telecom operator-provided cards, only the GroupMSISDNnumber is known to multi-SIM subscribers. Member-SIMMSISDNis transparent to the subscriber. Messages sent to member SIM are delivered to the Group MSISDN.
Multi-SIM allows switching among (up to) 12 stored numbers from the phone's main menu. A new menu entry in the subscriber’s phone automatically appears after inserting the multi-SIM card into the cell phone.
Only one of the member cards may be active at a time.
Modern SIM cards from many mobile operators are not compatible with multi-SIM technology and may not be cloned.[1]Multi-SIM technology is a result of poor security algorithms used in the encryption of the first generation of GSM SIM cards, commonly called COMP128v1. SIM cloning is now more difficult to perform, as more and more mobile operators are moving towards newer encryption methods, such as COMP128v2 or COMP128v3. SIM cloning is still possible in some countries such as Russia, Iran and China.
SIM cards issued before June 2002 most likely are COMP128v1 SIM cards, thus clonable.[2]
|
https://en.wikipedia.org/wiki/Multi-SIM_card
|
Artificial life(ALifeorA-Life) is a field of study wherein researchers examinesystemsrelated to naturallife, its processes, and its evolution, through the use ofsimulationswithcomputer models,robotics, andbiochemistry.[1]The discipline was named byChristopher Langton, an Americancomputer scientist, in 1986.[2]In 1987, Langton organized the first conference on the field, inLos Alamos, New Mexico.[3]There are three main kinds of alife,[4]named for their approaches:soft,[5]fromsoftware;hard,[6]fromhardware; andwet, from biochemistry. Artificial life researchers study traditionalbiologyby trying to recreate aspects of biological phenomena.[7][8]
Artificial life studies the fundamental processes ofliving systemsin artificial environments in order to gain a deeper understanding of the complex information processing that define such systems. These topics are broad, but often includeevolutionary dynamics,emergent propertiesof collective systems,biomimicry, as well as related issues about thephilosophy of the nature of lifeand the use of lifelike properties in artistic works.[citation needed]
The modeling philosophy of artificial life strongly differs from traditional modeling by studying not only "life as we know it" but also "life as it could be".[9]
A traditional model of a biological system will focus on capturing its most important parameters. In contrast, an alife modeling approach will generally seek to decipher the most simple and general principles underlying life and implement them in a simulation. The simulation then offers the possibility to analyse new and different lifelike systems.
Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction of "processes as we know them" and "processes as they could be".[10]
At present, the commonly accepteddefinition of lifedoes not consider any current alife simulations orsoftwareto be alive, and they do not constitute part of the evolutionary process of anyecosystem. However, different opinions about artificial life's potential have arisen:
Program-based simulations contain organisms with a "genome" language. This language is more often in the form of aTuring completecomputer program than actual biological DNA. Assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are usually various methods allowingself-replication. Mutations are generally implemented as random changes to the code. Use ofcellular automatais common but not required. Another example could be anartificial intelligenceandmulti-agent system/program.
Individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion). Generally, these are simulators that emphasize user creation and accessibility over mutation and evolution.
Organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or otherfiniteparameters. Each parameter controls one or several aspects of an organism in a well-defined way.
These simulations have creatures that learn and grow using neural nets or a close derivative. Emphasis is often, although not always, on learning rather than on natural selection.
Mathematical models of complex systems are of three types:black-box(phenomenological),white-box(mechanistic, based on thefirst principles) andgrey-box(mixtures of phenomenological and mechanistic models).[12][13]In black-box models, the individual-based (mechanistic) mechanisms of a complex dynamic system remain hidden.
Black-box models are completely nonmechanistic. They are phenomenological and ignore the composition and internal structure of a complex system. Due to the non-transparent nature of such a model, the interactions of subsystems cannot be investigated. In contrast, a white-box model of a complex dynamic system has 'transparent walls' and directly shows the underlying mechanisms. All events at the micro-, meso- and macro-levels of a dynamic system are directly visible at all stages of a white-box model's evolution. In most cases, mathematical modelers use heavyweight black-box mathematical methods, which cannot produce mechanistic models of complex dynamic systems. Grey-box models are intermediate and combine black-box and white-box approaches.
Creation of a white-box model of a complex system is associated with the problem of requiring a priori basic knowledge of the modeling subject. Deterministic logical cellular automata are a necessary but not sufficient condition for a white-box model. The second necessary prerequisite of a white-box model is the presence of a physical ontology of the object under study. White-box modeling represents an automatic hyper-logical inference from first principles because it is completely based on the deterministic logic and axiomatic theory of the subject. The purpose of white-box modeling is to derive from the basic axioms a more detailed, more concrete mechanistic knowledge about the dynamics of the object under study. The necessity of formulating an intrinsic axiomatic system of the subject before creating its white-box model distinguishes cellular automata models of the white-box type from cellular automata models based on arbitrary logical rules. If cellular automata rules have not been formulated from the first principles of the subject, then such a model may have weak relevance to the real problem.[13]
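As a minimal sketch of the kind of deterministic logical cellular automaton mentioned above, the following Python function performs one synchronous update of Conway's Game of Life on a toroidal grid (the choice of rule and the wrap-around boundary are assumptions of this illustration, not something prescribed by the text). Every cell's state at every step is directly inspectable, which is the sense in which such rule-based models can serve as building blocks for white-box modeling:

```python
def life_step(grid):
    """One synchronous Game of Life update on a toroidal (wrap-around) grid.

    grid: list of lists of 0/1. Returns a new grid; the input is not modified.
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 live neighbours; survival on 2 or 3.
            nxt[r][c] = 1 if neighbours == 3 or (grid[r][c] == 1 and neighbours == 2) else 0
    return nxt

# A "blinker" oscillator on a 5x5 grid: three live cells flip between a
# horizontal and a vertical line on every step.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
for row in life_step(grid):
    print(row)
```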
This is a list of artificial life anddigital organismsimulators:
Hardware-based artificial life mainly consists of robots, that is, automatically guided machines able to do tasks on their own.
Biochemical-based life is studied in the field ofsynthetic biology. It involves research such as the creation ofsynthetic DNA. The term "wet" is an extension of the term "wetware". Efforts toward "wet" artificial life focus on engineering live minimal cells from living bacteriaMycoplasma laboratoriumand in building non-living biochemical cell-like systems from scratch.
In May 2019, researchers reported a new milestone in the creation of a newsynthetic(possiblyartificial) form ofviablelife, a variant of thebacteriaEscherichia coli, by reducing the natural number of 64codonsin the bacterialgenometo 59 codons instead, in order to encode 20amino acids.[18][19]
Artificial life has had a controversial history.John Maynard Smithcriticized certain artificial life work in 1994 as "fact-free science".[23]
|
https://en.wikipedia.org/wiki/Artificial_life
|
ANOVA gaugerepeatabilityandreproducibilityis ameasurement systems analysistechnique that uses ananalysis of variance(ANOVA)random effects modelto assess a measurement system.
The evaluation of a measurement system is not limited to gauges but extends to all types of measuring instruments, test methods, and other measurement systems.
ANOVA Gage R&R measures the amount of variability induced in measurements by the measurement system itself, and compares it to the total variability observed to determine the viability of the measurement system. There are several factors affecting a measurement system, including:
There are two important aspects of a Gage R&R:
It is important to understand the difference between accuracy and precision to understand the purpose of Gage R&R. Gage R&R addresses only the precision of a measurement system. It is common to examine the P/T ratio, the ratio of the precision of a measurement system to the (total) tolerance of the manufacturing process of which it is a part. If the P/T ratio is low, the impact on product quality of variation due to the measurement system is small. If the P/T ratio is larger, the measurement system is "eating up" a large fraction of the tolerance, so that parts which do not actually meet the tolerance may be measured as acceptable by the measurement system. Generally, a P/T ratio less than 0.1 indicates that the measurement system can reliably determine whether any given part meets the tolerance specification.[2] A P/T ratio greater than 0.3 suggests that unacceptable parts will be measured as acceptable (or vice versa) by the measurement system, making the system inappropriate for the process for which it is being used.[2]
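A minimal sketch of the P/T ratio calculation (the multiplier k = 6, covering roughly 99.73% of the measurement spread, is a common convention; some references use 5.15, and the numbers below are purely illustrative):

```python
def pt_ratio(sigma_measurement: float, usl: float, lsl: float, k: float = 6.0) -> float:
    """Precision-to-tolerance ratio: k * sigma of the measurement system
    divided by the tolerance band (USL - LSL)."""
    return k * sigma_measurement / (usl - lsl)

# Example: gage standard deviation of 0.002 mm against a 0.10 mm tolerance band.
print(round(pt_ratio(0.002, usl=0.05, lsl=-0.05), 2))  # 0.12 -> between the 0.1 and 0.3 rules of thumb
```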
ANOVA Gage R&R is an important tool within theSix Sigmamethodology, and it is also a requirement for aproduction part approval process(PPAP) documentation package.[3]Examples of Gage R&R studies can be found in part 1 of Czitrom & Spagon.[4]
There is no universal criterion for the minimum sample requirements of the GRR matrix; it is a matter for the quality engineer to assess risk, depending on how critical the measurement is and how costly the measurements are. The "10×2×2" design (ten parts, two operators, two repetitions) is an acceptable sampling for some studies, although it has very few degrees of freedom for the operator component. Several methods of determining the sample size and degree of replication are used.
In one common crossed study, 10 parts might each be measured two times by two different operators. The ANOVA then allows the individual sources of variation in the measurement data to be identified: the part-to-part variation, the repeatability of the measurements, the variation due to different operators, and the variation due to the part-by-operator interaction.
The calculation of variance components and standard deviations using ANOVA is equivalent to calculating the variance and standard deviation for a single variable, but it enables multiple sources of variation that simultaneously influence a single data set to be individually quantified. When calculating the variance of a data set, the sum of the squared differences between each measurement and the mean is computed and then divided by the degrees of freedom (n − 1). The sums of squared differences are calculated for measurements of the same part, by the same operator, and so on, as given by the equations below for the part (SSPart), the operator (SSOp), repeatability (SSRep) and total variation (SSTotal).
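In their commonly used textbook form (a reconstruction using the notation defined in the following paragraph, not necessarily the exact expressions originally displayed here), these sums of squares are:

\[SS_{\text{Part}} = n_{\text{Op}}\,n_{\text{Rep}}\sum_{i=1}^{n_{\text{Part}}}\left(\bar{x}_{i\cdot\cdot}-\bar{x}\right)^{2},\qquad SS_{\text{Op}} = n_{\text{Part}}\,n_{\text{Rep}}\sum_{j=1}^{n_{\text{Op}}}\left(\bar{x}_{\cdot j\cdot}-\bar{x}\right)^{2}\]
\[SS_{\text{Rep}} = \sum_{i}\sum_{j}\sum_{k}\left(x_{ijk}-\bar{x}_{ij}\right)^{2},\qquad SS_{\text{Total}} = \sum_{i}\sum_{j}\sum_{k}\left(x_{ijk}-\bar{x}\right)^{2}\]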
where nOp is the number of operators, nRep is the number of replicate measurements of each part by each operator, nPart is the number of parts, x̄ is the grand mean, x̄i·· is the mean for each part, x̄·j· is the mean for each operator, xijk is each individual observation, and x̄ij is the mean for each part–operator combination. When following the spreadsheet method of calculation, the n terms are not explicitly required, since each squared difference is automatically repeated across the rows for the number of measurements meeting each condition.
The sum of the squared differences for part by operator interaction (SSPart · Op) is the residual variation given by
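In the same reconstructed notation, this interaction term is the remainder of the total variation:

\[SS_{\text{Part}\cdot\text{Op}} = SS_{\text{Total}} - \left(SS_{\text{Part}} + SS_{\text{Op}} + SS_{\text{Rep}}\right)\]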
|
https://en.wikipedia.org/wiki/ANOVA_gauge_R%26R
|
Astraight-line grammar(sometimes abbreviated as SLG) is aformal grammarthat generates exactly one string.[1]Consequently, it does not branch (every non-terminal has only one associated production rule) nor loop (if non-terminalAappears in a derivation ofB, thenBdoes not appear in a derivation ofA).[1]
Straight-line grammars are widely used in the development of algorithms that execute directly on compressed structures (without prior decompression).[2]: 212
SLGs are of interest in fields likeKolmogorov complexity,Lossless data compression,Structure discoveryandCompressed data structures.[clarification needed]
The problem of finding a context-free grammar (equivalently: an SLG) of minimal size that generates a given string is called thesmallest grammar problem.[citation needed]
Straight-line grammars (more precisely: straight-line context-free string grammars) can be generalized toStraight-line context-free tree grammars.
The latter can be used conveniently to compresstrees.[2]: 212
Acontext-free grammarGis an SLG if:
1. for every non-terminalN, there is at most one production rule that hasNas its left-hand side, and
2. thedirected graphG=<V,E>, defined byVbeing the set of non-terminals and (A,B) ∈EwheneverBappears at the right-hand side of a production rule forA, isacyclic.
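A small sketch of these two conditions (the grammar and helper function below are hypothetical, chosen only for illustration): each non-terminal has exactly one production and the rule graph is acyclic, so recursive expansion terminates and yields exactly one string.

```python
# Right-hand sides are sequences of symbols; symbols without a rule are terminals.
RULES = {
    "S": ["A", " ", "A"],
    "A": ["B", "B"],
    "B": ["a", "b"],
}

def expand(symbol: str) -> str:
    """Derive the unique string generated from `symbol`."""
    if symbol not in RULES:        # terminal symbol
        return symbol
    return "".join(expand(s) for s in RULES[symbol])

print(expand("S"))  # -> "abab abab"
```

Because each rule is reused by reference, a grammar of this form can be much smaller than the single string it generates, which is what makes SLGs useful for grammar-based compression.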
A mathematical definition of the more general formalism of straight-line context-free tree grammars can be found in Lohrey et al.[2]: 215
An SLG inChomsky normal formis equivalent to astraight-line program.[citation needed]
|
https://en.wikipedia.org/wiki/Context-free_grammar_generation_algorithms
|
TheOpen Grid Forum(OGF) is a community of users, developers, and vendors for standardization ofgrid computing. It was formed in 2006 in a merger of theGlobal Grid Forumand the Enterprise Grid Alliance.
The OGF models its process on theInternet Engineering Task Force(IETF), and produces documents with many acronyms such asOGSA,OGSI, andJSDL.
The OGF has two principal functions plus an administrative function: being thestandards organizationforgrid computing, and building communities within the overall grid community (including extending it within both academia and industry). Each of these function areas is then divided into groups of three types:working groupswith a generally tightly defined role (usually producing a standard),research groupswith a looser role bringing together people to discuss developments within their field and generate use cases and spawn working groups, andcommunity groups(restricted to community functions).
Three meetings are organized per year, divided (approximately evenly after averaging over a number of years) between North America, Europe and East Asia. Many working groups organize face-to-face meetings in the interim.
The concept of a forum to bring together developers, practitioners, and users of distributed computing (known as grid computing at the time) was discussed at a "Birds of a Feather" session in November 1998 at the SC98 supercomputing conference.[1] Based on the response to the idea during this BOF, Ian Foster and Bill Johnston convened the first Grid Forum meeting at NASA Ames Research Center in June 1999, drawing roughly 100 people, mostly from the US. A group of organizers nominated Charlie Catlett (from Argonne National Laboratory and the University of Chicago) to serve as the initial chair, confirmed via a plenary vote held at the second Grid Forum meeting in Chicago in October 1999.[2][3] With advice and assistance from the Internet Engineering Task Force (IETF), OGF established a process based on that of the IETF. OGF is managed by a steering group.
During 1998, groups similar to Grid Forum began to organize in Europe (calledeGrid) and Japan. Discussions among leaders of these groups resulted in combining to form theGlobal Grid Forumwhich met for the first time inAmsterdamin March 2001.GGF-1in Amsterdam followed fiveGrid Forummeetings. Catlett served as GGF Chair for two 3-year terms and was succeeded by Mark Linesch (fromHewlett-Packard) in September 2004.
The Enterprise Grid Alliance (EGA), formed in 2004, was more focused on largedata centerbusinesses such asEMC Corporation,NetApp, andOracle Corporation.[4][5]AtGGF-18(the 23rd gathering of the forum, counting the first five GF meetings) in September 2006, GGF becameOpen Grid Forum (OGF)based on a merger with EGA.[6]In September 2007, Craig Lee of theAerospace Corporationbecame chair.[7]
Some technologies specified by OGF include:
In addition to technical standards, the OGF published community-developed informational and experimental documents.
The first version of theDRMAAAPI was implemented inSun's Grid engineand also in theUniversity of Wisconsin-Madison's programCondor cycle scavenger. The separate Globus Alliance maintains an implementation of some of these standards through the Globus Toolkit. A release ofUNICOREis based on the OGSA architecture and JSDL.
|
https://en.wikipedia.org/wiki/Open_Grid_Forum
|
Current notablecomputer hardwaremanufacturers:
List ofcomputer casemanufacturers:
Topmotherboardmanufacturers:
List of motherboard manufacturers:
Defunct:
Note: most of these companies only make designs, and do not manufacture their own designs.
Top x86CPUmanufacturers:
List of CPU manufacturers (most of the companies sell ARM-based CPUs, assumed if nothing else stated):
List of currenthard disk drivemanufacturers:
Note: the HDDs internal to these devices are manufactured only by theinternal HDD manufacturerslisted above.
List of external hard disk drive manufacturers:
Many companies manufacture SSDs, but there are only a few major manufacturers[4] of the NAND flash devices that are the storage element in most SSDs. The five major NAND flash manufacturers are:
List ofoptical disc drivemanufacturers:
List ofcomputer coolingsystem manufacturers:
List of non-refillable liquid cooling manufacturers:
List of refillable liquid cooling kits manufacturers:
List ofwater blockmanufacturers:
List ofgraphics cardcooling manufacturers:
List of companies that are actively manufacturing and sellingcomputer monitors:
List ofvideo cardmanufacturers:
List ofkeyboardmanufacturers:
List ofmousemanufacturers:
List ofJoystickmanufacturers:
List ofcomputer speakermanufacturers:
List ofmodemmanufacturers:
List ofnetwork cardmanufacturers:
There are a number of other companies (AMD, Microchip, Altera, etc.) making specialized chipsets as part of other ICs, and they are not often found in PC hardware (laptop, desktop or server). There are also a number of now-defunct companies (like 3Com, DEC, SGI) that produced network-related chipsets for use in general computers.
List ofpower supply unit(PSU) designers:
Note that the actual memory chips are manufactured by a small number of DRAM manufacturers. List ofmemory modulemanufacturers:
List of currentDRAMmanufacturers:[5]
List of former or defunctDRAMmanufacturers:
List offablessDRAMcompanies:
In addition, other semiconductor manufacturers include SRAM or eDRAM embedded in larger chips.
List ofheadphonemanufacturers:
List ofimage scannermanufacturers:
List ofsound cardmanufacturers:
List ofTV tuner cardmanufacturers:
List ofUSB flash drivemanufacturers:
List ofwebcammanufacturers:
|
https://en.wikipedia.org/wiki/List_of_computer_hardware_manufacturers
|
Data remanenceis the residual representation ofdigital datathat remains even after attempts have been made to remove or erase the data. This residue may result from data being left intact by a nominalfile deletionoperation, by reformatting of storage media that does not remove data previously written to the media, or through physical properties of thestorage mediathat allow previously written data to be recovered. Data remanence may make inadvertent disclosure ofsensitive informationpossible should the storage media be released into an uncontrolled environment (e.g., thrown in the bin (trash) or lost).
Various techniques have been developed to counter data remanence. These techniques are classified asclearing,purging/sanitizing, ordestruction. Specific methods includeoverwriting,degaussing,encryption, andmedia destruction.
Effective application of countermeasures can be complicated by several factors, including media that are inaccessible, media that cannot effectively be erased, advanced storage systems that maintain histories of data throughout the data's life cycle, and persistence of data in memory that is typically considered volatile.
Severalstandardsexist for the secure removal of data and the elimination of data remanence.
Manyoperating systems,file managers, and other software provide a facility where afileis not immediatelydeletedwhen the user requests that action. Instead, the file is moved to aholding area(i.e. the "trash"), making it easy for the user to undo a mistake. Similarly, many software products automatically create backup copies of files that are being edited, to allow the user to restore the original version, or to recover from a possible crash (autosavefeature).
Even when an explicit deleted file retention facility is not provided or when the user does not use it, operating systems do not actually remove the contents of a file when it is deleted unless they are aware that explicit erasure commands are required, like on asolid-state drive. (In such cases, the operating system will issue theSerial ATATRIMcommand or theSCSIUNMAP command to let the drive know to no longer maintain the deleted data.) Instead, they simply remove the file's entry from thefile systemdirectorybecause this requires less work and is therefore faster, and the contents of the file—the actual data—remain on thestorage medium. The data will remain there until theoperating systemreuses the space for new data. In some systems, enough filesystemmetadataare also left behind to enable easyundeletionby commonly availableutility software. Even when undelete has become impossible, the data, until it has been overwritten, can be read by software that readsdisk sectorsdirectly.Computer forensicsoften employs such software.
Likewise,reformatting,repartitioning, orreimaginga system is unlikely to write to every area of the disk, though all will cause the disk to appear empty or, in the case of reimaging, empty except for the files present in the image, to most software.
Finally, even when the storage media is overwritten, physical properties of the media may permit recovery of the previous contents. In most cases however, this recovery is not possible by just reading from the storage device in the usual way, but requires using laboratory techniques such as disassembling the device and directly accessing/reading from its components.[citation needed]
§ Complicationsbelow gives further explanations for causes of data remanence.
There are three levels commonly recognized for eliminating remnant data:
Clearingis the removal of sensitive data from storage devices in such a way that there is assurance that the data may not be reconstructed using normal system functions or software file/data recovery utilities. The data may still be recoverable, but not without special laboratory techniques.[1]
Clearing is typically an administrative protection against accidental disclosure within an organization. For example, before ahard driveis re-used within an organization, its contents may be cleared to prevent their accidental disclosure to the next user.
Purgingorsanitizingis the physical rewrite of sensitive data from a system or storage device done with the specific intent of rendering the data unrecoverable at a later time.[2]Purging, proportional to the sensitivity of the data, is generally done before releasing media beyond control, such as before discarding old media, or moving media to a computer with different security requirements.
The storage media is made unusable for conventional equipment. Effectiveness of destroying the media varies by medium and method. Depending on recording density of the media, and/or the destruction technique, this may leave data recoverable by laboratory methods. Conversely, destruction using appropriate techniques is the most secure method of preventing retrieval.
A common method used to counter data remanence is to overwrite the storage media with new data. This is often calledwipingorshreddinga disk or file, byanalogyto common methods ofdestroying print media, although the mechanism bears no similarity to these. Because such a method can often be implemented insoftwarealone, and may be able to selectively target only part of the media, it is a popular, low-cost option for some applications. Overwriting is generally an acceptable method of clearing, as long as the media is writable and not damaged.
The simplest overwrite technique writes the same data everywhere—often just a pattern of all zeros. At a minimum, this will prevent the data from being retrieved simply by reading from the media again using standard system functions. The UEFI firmware in modern machines may offer an ATA-class disk erase function as well. The ATA-6 standard governs the secure erase specification.
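A minimal sketch of single-pass zero overwriting at the file level (illustrative only; as discussed under § Complications, journaling file systems, copy-on-write, wear leveling and bad-sector remapping mean this approach does not guarantee that the old data is unrecoverable):

```python
import os

def overwrite_with_zeros(path: str, chunk_size: int = 1 << 20) -> None:
    """Overwrite a file's visible contents, in place, with zeros."""
    size = os.path.getsize(path)
    zeros = b"\x00" * chunk_size
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk_size, remaining)
            f.write(zeros[:n])
            remaining -= n
        f.flush()
        os.fsync(f.fileno())   # push the new contents to the storage device
```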
BitLocker is whole-disk encryption, so the data on the disk is illegible without the key. Writing a fresh GPT allows a new file system to be established; the blocks appear empty, but reading them by LBA returns only ciphertext. New data written afterwards is unaffected and works normally.
In an attempt to counter more advanced data recovery techniques, specific overwrite patterns and multiple passes have often been prescribed. These may be generic patterns intended to eradicate any trace signatures; an example is the seven-pass pattern0xF6,0x00,0xFF,<random byte>,0x00,0xFF,<random byte>, sometimes erroneously attributed to US standardDOD 5220.22-M.
One challenge with overwriting is that some areas of the disk may beinaccessible, due to media degradation or other errors. Software overwrite may also be problematic in high-security environments, which require stronger controls on data commingling than can be provided by the software in use. The use ofadvanced storage technologiesmay also make file-based overwrite ineffective (see the related discussion below under§ Complications).
There are specialized machines and software that are capable of doing overwriting. The software can sometimes be a standalone operating system specifically designed for data destruction. There are also machines specifically designed to wipe hard drives to the department of defense specifications DOD 5220.22-M.[3]
Writing zeros to each block of a hard disk or SSD has the advantage of allowing the firmware to deploy spare blocks when bad blocks are identified. BitLocker has the advantage that the data is illegible without the key. SeaTools and other tools can erase disks by writing zeros; this is typically used to revive old consumer-class disks, but such tools can also wipe server disks, albeit slowly. Modern 28 TB and larger disks have an enormous number of LBA48 blocks, and 40 TB and 60 TB disks will take proportionately longer to wipe.
Peter Gutmanninvestigated data recovery from nominally overwritten media in the mid-1990s. He suggestedmagnetic force microscopymay be able to recover such data, and developed specific patterns, for specific drive technologies, designed to counter such.[4]These patterns have come to be known as theGutmann method. Gutmann's belief in the possibility of data recovery is based on many questionable assumptions and factual errors that indicate a low level of understanding of how hard drives work.[5]
Daniel Feenberg, an economist at the privateNational Bureau of Economic Research, claims that the chances of overwritten data being recovered from a modern hard drive amount to "urban legend".[6]He also points to the "18+1⁄2-minute gap"Rose Mary Woodscreated on a tape ofRichard Nixondiscussing theWatergate break-in. Erased information in the gap has not been recovered, and Feenberg claims doing so would be an easy task compared to recovery of a modern high density digital signal.
As of November 2007, theUnited States Department of Defenseconsiders overwriting acceptable for clearing magnetic media within the same security area/zone, but not as a sanitization method. Onlydegaussingorphysical destructionis acceptable for the latter.[7]
On the other hand, according to the 2014NISTSpecial Publication 800-88 Rev. 1 (p. 7): "For storage devices containingmagneticmedia, a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data."[8]An analysis by Wright et al. of recovery techniques, including magnetic force microscopy, also concludes that a single wipe is all that is required for modern drives. They point out that the long time required for multiple wipes "has created a situation where many organizations ignore the issue [altogether] – resulting in data leaks and loss."[9]
Degaussingis the removal or reduction of a magnetic field of a disk or drive, using a device called a degausser that has been designed for the media being erased. Applied tomagnetic media, degaussing may purge an entire media element quickly and effectively.
Degaussing often rendershard disksinoperable, as it erases low-levelformattingthat is only done at the factory during manufacturing. In some cases, it is possible to return the drive to a functional state by having it serviced at the manufacturer. However, some modern degaussers use such a strong magnetic pulse that the motor that spins the platters may be destroyed in the degaussing process, and servicing may not be cost-effective. Degaussed computer tape such asDLTcan generally be reformatted and reused with standard consumer hardware.
In some high-security environments, one may be required to use a degausser that has been approved for the task. For example, inUSgovernment and military jurisdictions, one may be required to use a degausser from theNSA's "Evaluated Products List".[10]
Encryptingdata before it is stored on the media may mitigate concerns about data remanence. If thedecryption keyis strong and carefully controlled, it may effectively make any data on the media unrecoverable. Even if the key is stored on the media, it may prove easier or quicker tooverwritejust the key, versus the entire disk. This process is calledcrypto-shredding.
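A toy sketch of crypto-shredding using the Fernet recipe from the Python cryptography package (an illustration of the principle only, not a hardened procedure; real deployments typically rely on full-disk encryption with carefully controlled key storage): once every copy of the key is destroyed, the ciphertext left on the media is effectively unrecoverable.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep the key off the media being protected
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive customer record")
# ... only `ciphertext` is ever written to the storage media ...

print(f.decrypt(ciphertext))           # recoverable while the key exists

# Crypto-shredding: destroy all copies of `key`. The ciphertext that remains
# on the media can no longer be decrypted, so overwriting it becomes unnecessary.
```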
Encryption may be done on afile-by-filebasis, or on thewhole disk.Cold boot attacksare one of the few possible methods for subverting afull-disk encryptionmethod, as there is no possibility of storing the plain text key in an unencrypted section of the medium. See the sectionComplications: Data in RAMfor further discussion.
Otherside-channel attacks(such askeyloggers, acquisition of a written note containing the decryption key, orrubber-hose cryptanalysis) may offer a greater chance of success, but do not rely on weaknesses in the cryptographic method employed. As such, their relevance for this article is minor.
Thorough destruction of the underlying storage media is the most certain way to counter data remanence. However, the process is generally time-consuming, cumbersome, and may require extremely thorough methods, as even a small fragment of the media may contain large amounts of data.
Specific destruction techniques include:
Storage media may have areas which become inaccessible by normal means. For example,magnetic disksmay develop newbad sectorsafter data has been written, and tapes require inter-record gaps. Modernhard disksoften feature reallocation of marginal sectors or tracks, automated in a way that theoperating systemwould not need to work with it. The problem is especially significant insolid-state drives(SSDs) that rely on relatively large relocated bad block tables. Attempts to counter data remanence byoverwritingmay not be successful in such situations, as data remnants may persist in such nominally inaccessible areas.
Data storage systems with more sophisticated features may makeoverwriteineffective, especially on a per-file basis. For example,journaling file systemsincrease the integrity of data by recording write operations in multiple locations, and applyingtransaction-like semantics; on such systems, data remnants may exist in locations "outside" the nominal file storage location. Some file systems also implementcopy-on-writeor built-inrevision control, with the intent that writing to a file never overwrites data in-place. Furthermore, technologies such asRAIDandanti-fragmentationtechniques may result in file data being written to multiple locations, either by design (forfault tolerance), or as data remnants.
Wear levelingcan also defeat data erasure, by relocating blocks between the time when they are originally written and the time when they are overwritten. For this reason, some security protocols tailored to operating systems or other software featuring automatic wear leveling recommend conducting a free-space wipe of a given drive and then copying many small, easily identifiable "junk" files or files containing other nonsensitive data to fill as much of that drive as possible, leaving only the amount of free space necessary for satisfactory operation of system hardware and software. As storage and system demands grow, the "junk data" files can be deleted as necessary to free up space; even if the deletion of "junk data" files is not secure, their initial nonsensitivity reduces to near zero the consequences of recovery of data remanent from them.[citation needed]
Asoptical mediaare not magnetic, they are not erased by conventionaldegaussing.Write-onceoptical media (CD-R,DVD-R, etc.) also cannot be purged by overwriting. Rewritable optical media, such asCD-RWandDVD-RW, may be receptive tooverwriting. Methods for successfully sanitizing optical discs includedelaminatingor abrading the metallic data layer, shredding, incinerating, destructive electrical arcing (as by exposure to microwave energy), and submersion in a polycarbonate solvent (e.g.,acetone).
Research from the Center for Magnetic Recording and Research, University of California, San Diego has uncovered problems inherent in erasing data stored onsolid-state drives(SSDs). Researchers discovered three problems with file storage on SSDs:[11]
First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.[11]: 1
Solid-state drives, which are flash-based, differ from hard-disk drives in two ways: first, in the way data is stored; and second, in the way the algorithms are used to manage and access that data. These differences can be exploited to recover previously erased data. SSDs maintain a layer of indirection between the logical addresses used by computer systems to access data and the internal addresses that identify physical storage. This layer of indirection hides idiosyncratic media interfaces and enhances SSD performance, reliability, and lifespan (seewear leveling), but it can also produce copies of the data that are invisible to the user and that a sophisticated attacker could recover. For sanitizing entire disks, sanitize commands built into the SSD hardware have been found to be effective when implemented correctly, and software-only techniques for sanitizing entire disks have been found to work most, but not all, of the time.[11]: section 5In testing, none of the software techniques were effective for sanitizing individual files. These included well-known algorithms such as theGutmann method,US DoD 5220.22-M, RCMP TSSIT OPS-II, Schneier 7 Pass, and Secure Empty Trash on macOS (a feature included in versions OS X 10.3-10.9).[11]: section 5
TheTRIMfeature in many SSD devices, if properly implemented, will eventually erase data after it is deleted,[12][citation needed]but the process can take some time, typically several minutes. Many older operating systems do not support this feature, and not all combinations of drives and operating systems work.[13]
Data remanence has been observed instatic random-access memory(SRAM), which is typically considered volatile (i.e., the contents degrade with loss of external power). In one study,data retentionwas observed even at room temperature.[14]
Data remanence has also been observed indynamic random-access memory(DRAM). Modern DRAM chips have a built-in self-refresh module, as they not only require a power supply to retain data, but must also be periodically refreshed to prevent their data contents from fading away from the capacitors in their integrated circuits. A study found data remanence in DRAM with data retention of seconds to minutes at room temperature and "a full week without refresh when cooled with liquid nitrogen."[15]The study authors were able to use acold boot attackto recover cryptographickeysfor several popularfull disk encryptionsystems, including MicrosoftBitLocker, AppleFileVault,dm-cryptfor Linux, andTrueCrypt.[15]: 12
Despite some memory degradation, authors of the above described study were able to take advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as inkey scheduling. The authors recommend that computers be powered down, rather than be left in a "sleep" state, when not in physical control of the owner. In some cases, such as certain modes of the software program BitLocker, the authors recommend that a boot password or a key on a removable USB device be used.[15]: 12TRESORis akernelpatchfor Linux specifically intended to preventcold boot attackson RAM by ensuring that encryption keys are not accessible from user space and are stored in the CPU rather than system RAM whenever possible. Newer versions of the disk encryption softwareVeraCryptcan encrypt in-RAM keys and passwords on 64-bit Windows.[16]
|
https://en.wikipedia.org/wiki/Data_remanence
|
The termprocess modelis used in various contexts. For example, inbusiness process modelingthe enterprise process model is often referred to as thebusiness process model.
Process models areprocessesof the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus, has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done in contrast to the process itself which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development.[2]
The goals of a process model are to be:
From a theoretical point of view, themeta-process modelingexplains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process modeling is aimed at providing guidance for method engineers and application developers.[1]
The activity ofmodeling a businessprocess usually predicates a need to change processes or identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process.Change managementprogrammes are desired to put the processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies includeUnified Modeling Language(UML),model-driven architecture, andservice-oriented architecture.
Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise systems, data, organizational structure, strategies, etc. create greater capabilities in analyzing and planning a change. One real-world example is corporate mergers and acquisitions: understanding the processes in both companies in detail allows management to identify redundancies, resulting in a smoother merger.
Process modeling has always been a key aspect ofbusiness process reengineering, and continuous improvement approaches seen inSix Sigma.
There are five types of coverage where the term process model has been defined differently:[3]
Processes can be of different kinds.[2]These definitions "correspond to the various ways in which a process can be modelled".
Granularityrefers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand.[2]
Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description, as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model, where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).[2][7]
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur.[6]Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situationalmethod engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.[8]
Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.[9]
Because the quality of process models is under discussion here, the quality of modeling techniques must also be elaborated, as it is an important contributor to the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. The following therefore concentrates on both the quality of process modeling techniques and the quality of process models, in order to clearly differentiate the two.
Various frameworks have been developed to help in understanding the quality of process modeling techniques. One example is the quality-based modeling evaluation framework, known as the Q-Me framework, which is argued to provide a set of well-defined quality properties and procedures for making an objective assessment of these properties possible.[10] This framework also has the advantage of providing a uniform and formal description of the model elements within one or several model types using one modeling technique.[10] In short, it allows assessment of both the product quality and the process quality of modeling techniques with regard to a set of previously defined properties.
Quality properties that relate tobusiness process modelingtechniques discussed in[10]are:
To assess the quality of the Q-ME framework, it was used to illustrate the quality of the dynamic essentials modeling of the organisation (DEMO) business modeling technique.
It is stated that the evaluation of the Q-ME framework against the DEMO modeling technique has revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating.
There is also a systematic approach to the quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Meta-model techniques are used as a basis for the computation of these complexity metrics. In comparison to the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level.[11]
Authors (Cardoso, Mendling, Neuman and Reijers, 2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research done by Mendlinget al.who argued that without using the quality metrics to help question quality properties of a model, simple process can be modeled in a complex and unsuitable way. This in turn can lead to a lower understandability, higher maintenance cost and perhaps inefficient execution of the process in question.[12]
The quality of modeling technique is important in creating models that are of quality and contribute to the correctness and usefulness of models.
Earliest process models reflected the dynamics of the process with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints and so on.[13]
A large amount of research has been done on the quality of models, but less focus has been placed on the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice for doing so: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.[14]
Hommes quotes Wang et al. (1994)[11] to the effect that all the main characteristics of model quality can be grouped under two headings, namely the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling language, and it is independent of the purpose for which the model is used.
Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical and semantic quality) and external correctness (validity).
A common starting point for defining the quality of conceptual model is to look at the linguistic properties of the modeling language of which syntax and semantics are most often applied.
Also the broader approach is to be based on semiotics rather than linguistic as was done by Krogstie using the top-down quality framework known as SEQUAL.[15][16]It defines several quality aspects based on relationships between a model, knowledge Externalisation, domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests.[17] According to earlier research done by Moody et al.,[18] which used the conceptual model quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality[19] were identified:
From that research it was found that the quality framework was both easy to use and useful in evaluating the quality of process models; however, it had limitations in regard to reliability and in identifying defects. These limitations led to refinement of the framework through subsequent research done by Krogstie. This refined framework is the SEQUAL framework by Krogstie et al. (1995, refined further by Krogstie & Jørgensen, 2002), which included three more quality aspects.
The dimensions of the conceptual quality framework are as follows.[20] The modeling domain is the set of all statements that are relevant and correct for describing a problem domain. The language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. The model externalization is the conceptual representation of the problem domain.
It is defined as the set of statements about the problem domain that are actually made. The social actor interpretation and the technical actor interpretation are the sets of statements that the actors (human model users and the tools that interact with the model, respectively) 'think' the conceptual representation of the problem domain contains.
Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model.
In later work, Krogstie et al.[15] stated that while the extension of the SEQUAL framework fixed some of the limitations of the initial framework, other limitations remain.
In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.
Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. Krogstie et al. therefore discuss the quality framework in relation to active process models and suggest a revised framework on this basis.
In further work, Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research.[15]
The other framework in use is the Guidelines of Modeling (GoM),[21] based on general accounting principles, which comprises six principles: correctness, clarity, relevance, comparability, economic efficiency, and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems.
Comprehensibility relates to graphical arrangement of the information objects and, therefore, supports the understand ability of a model.
Relevance relates to the model and the situation being represented. Comparability involves the ability to compare models, that is, a semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed use, through cost reductions and revenue increases.
Since the purpose of organizations is in most cases the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires that there be an accepted differentiation between diverse views within modeling.
Correctness, relevance and economic efficiency are prerequisites for the quality of models and must be fulfilled, while the remaining guidelines are optional.
The two frameworks SEQUAL and GOM have a limitation of use in that they cannot be used by people who are not competent with modeling. They provide major quality metrics but are not easily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models is trying to bridge the gap of use of the other two frameworks by non-experts in modeling but it is mostly theoretical and no empirical tests have been carried out to support their use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models;[22] Cardoso validates the correlation between control-flow complexity and perceived complexity; and Mendling et al. use metrics to predict control-flow errors such as deadlocks in process models.[12][23]
The results reveal that an increase in size of a model appears to reduce its quality and comprehensibility.
Further work by Mendling et al. investigates the connection between metrics and understanding.[24][25] While some metrics are confirmed regarding their effect, personal factors of the modeler, such as competence, are also revealed as important for understanding the models.
Several empirical surveys carried out still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice.
Most of the guidelines are not easily put into practice, but the "label activities verb–noun" rule has been suggested by practitioners before and analyzed empirically.
From that research,[26] the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that this results in models that are better understood than those using alternative labelling styles.
From the earlier research and ways of evaluating process model quality, it has been seen that the process model's size, structure, the expertise of the modeler, and modularity affect its overall comprehensibility.[24][27] Based on these findings, a set of guidelines, the Seven Process Modeling Guidelines (7PMG), was presented.[28] These guidelines cover the use of the verb–object style for labels, the number of elements in a model, the application of structured modeling, and the decomposition of a process model.
7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented.
It does suggest ways of organizing different structures of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out.
The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen on the one hand as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritization of the guidelines.[28]
|
https://en.wikipedia.org/wiki/Process_model
|
Curveletsare a non-adaptivetechnique for multi-scaleobjectrepresentation. Being an extension of thewaveletconcept, they are becoming popular in similar fields, namely inimage processingandscientific computing.
Wavelets generalize the Fourier transform by using a basis that represents both location and spatial frequency. For 2D or 3D signals, directional wavelet transforms go further, by using basis functions that are also localized in orientation. A curvelet transform differs from other directional wavelet transforms in that the degree of localisation in orientation varies with scale. In particular, fine-scale basis functions are long ridges; the shape of the basis functions at scale \(j\) is \(2^{-j}\) by \(2^{-j/2}\), so the fine-scale bases are skinny ridges with a precisely determined orientation.
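This scale-dependent anisotropy is often summarized as the parabolic scaling relation:

\[\text{width} \approx 2^{-j}, \qquad \text{length} \approx 2^{-j/2}, \qquad \text{so that width} \approx \text{length}^{2}.\]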
Curvelets are an appropriate basis for representing images (or other functions) which are smooth apart from singularities along smooth curves,where the curves have bounded curvature, i.e. where objects in the image have a minimum length scale. This property holds for cartoons, geometrical diagrams, and text. As one zooms in on such images, the edges they contain appear increasingly straight. Curvelets take advantage of this property, by defining the higher resolution curvelets to be more elongated than the lower resolution curvelets. However, natural images (photographs) do not have this property; they have detail at every scale. Therefore, for natural images, it is preferable to use some sort of directional wavelet transform whose wavelets have the same aspect ratio at every scale.
When the image is of the right type, curvelets provide a representation that is considerably sparser than other wavelet transforms. This can be quantified by considering the best approximation of a geometrical test image that can be represented using only \(n\) wavelets, and analysing the approximation error as a function of \(n\). For a Fourier transform, the squared error decreases only as \(O(1/{\sqrt {n}})\). For a wide variety of wavelet transforms, including both directional and non-directional variants, the squared error decreases as \(O(1/n)\). The extra assumption underlying the curvelet transform allows it to achieve \(O((\log n)^{3}/n^{2})\).
Efficient numerical algorithms exist for computing the curvelet transform of discrete data. The computational cost of the discrete curvelet transforms proposed by Candès et al. (a discrete curvelet transform based on unequally-spaced fast Fourier transforms and one based on the wrapping of specially selected Fourier samples) is approximately 6–10 times that of an FFT, and has the same dependence of \(O(n^{2}\log n)\) for an image of size \(n\times n\).[1]
To construct a basic curveletϕ{\displaystyle \phi }and provide a tiling of the 2-D frequency space, two main ideas should be followed:
The number of wedges isNj=4⋅2⌈j2⌉{\displaystyle N_{j}=4\cdot 2^{\left\lceil {\frac {j}{2}}\right\rceil }}at the scale2−j{\displaystyle 2^{-j}}, i.e., it doubles in each second circular ring.
Letξ=(ξ1,ξ2)T{\displaystyle {\boldsymbol {\xi }}=\left(\xi _{1},\xi _{2}\right)^{T}}be the variable in frequency domain, andr=ξ12+ξ22,ω=arctanξ1ξ2{\displaystyle r={\sqrt {\xi _{1}^{2}+\xi _{2}^{2}}},\omega =\arctan {\frac {\xi _{1}}{\xi _{2}}}}be the polar coordinates in the frequency domain.
We use theansatzfor thedilated basic curveletsin polar coordinates:ϕ^j,0,0:=2−3j4W(2−jr)V~Nj(ω),r≥0,ω∈[0,2π),j∈N0{\displaystyle {\hat {\phi }}_{j,0,0}:=2^{\frac {-3j}{4}}W(2^{-j}r){\tilde {V}}_{N_{j}}(\omega ),r\geq 0,\omega \in [0,2\pi ),j\in N_{0}}
To construct a basic curvelet with compact support near a ″basic wedge″, the two windowsW{\displaystyle W}andV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}need to have compact support.
Here, we can simply takeW(r){\displaystyle W(r)}to cover(0,∞){\displaystyle (0,\infty )}with dilated curvelets andV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}such that each circular ring is covered by the translations ofV~Nj{\displaystyle {\tilde {V}}_{N_{j}}}.
Then the admissibility yields∑j=−∞∞|W(2−jr)|2=1,r∈(0,∞).{\displaystyle \sum _{j=-\infty }^{\infty }\left|W(2^{-j}r)\right|^{2}=1,r\in (0,\infty ).}seeWindow Functionsfor more informationFor tiling a circular ring intoN{\displaystyle N}wedges, whereN{\displaystyle N}is an arbitrary positive integer, we need a2π{\displaystyle 2\pi }-periodic nonnegative windowV~N{\displaystyle {\tilde {V}}_{N}}with support inside[−2πN,2πN]{\displaystyle \left[{\frac {-2\pi }{N}},{\frac {2\pi }{N}}\right]}such that∑l=0N−1V~N2(ω−2πlN)=1{\displaystyle \sum _{l=0}^{N-1}{\tilde {V}}_{N}^{2}\left(\omega -{\frac {2\pi l}{N}}\right)=1},for allω∈[0,2π){\displaystyle \omega \in \left[0,2\pi \right)},V~N{\displaystyle {\tilde {V}}_{N}}can be simply constructed as2π{\displaystyle 2\pi }-periodizations of a scaled windowV(Nω2π){\displaystyle V\left({\frac {N\omega }{2\pi }}\right)}.Then, it follows that∑l=0Nj−1|23j4ϕ^j,0,0(r,ω−2πlNj)|2=|W(2−jr)|2∑l=0Nj−1V~Nj2(ω−2πlN)=|W(2−jr)|2{\displaystyle \sum _{l=0}^{N_{j}-1}\left|2^{\frac {3j}{4}}{\hat {\phi }}_{j,0,0}\left(r,\omega -{\frac {2\pi l}{N_{j}}}\right)\right|^{2}=\left|W(2^{-j}r)\right|^{2}\sum _{l=0}^{N_{j}-1}{\tilde {V}}_{N_{j}}^{2}\left(\omega -{\frac {2\pi l}{N}}\right)=\left|W(2^{-j}r)\right|^{2}}
For a complete covering of the frequency plane including the region around zero, we need to define a low-pass element φ̂_{−1} := W₀(|ξ|) with W₀(r)² := 1 − ∑_{j=0}^{∞} W(2^(−j) r)² that is supported on the unit circle, and for which we do not consider any rotation.
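The angular part of this construction can be checked numerically. The following Python sketch is a minimal illustration and is not taken from any curvelet library: it uses an illustrative cosine window for V (one admissible choice among many) together with the wedge count N_j given above, and verifies that the squared translates of the scaled window Ṽ_N tile a circular ring.

```python
import numpy as np

def V(t):
    """Illustrative angular window: cos(pi*t/2) on [-1, 1], zero elsewhere.
    Its squared integer translates sum to one, which is exactly the
    property required of the scaled window V(N*omega/(2*pi)) above."""
    return np.where(np.abs(t) <= 1.0, np.cos(np.pi * t / 2.0), 0.0)

def wedge_count(j):
    """Number of wedges N_j = 4 * 2^ceil(j/2) at scale 2^-j."""
    return 4 * 2 ** int(np.ceil(j / 2))

j = 3
N = wedge_count(j)                           # 4 * 2^2 = 16 wedges at scale 2^-3
omega = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)

# The squared translates of the 2*pi-periodized, scaled window should tile the ring.
total = np.zeros_like(omega)
for l in range(N):
    shift = (omega - 2.0 * np.pi * l / N + np.pi) % (2.0 * np.pi) - np.pi  # periodize
    total += V(N * shift / (2.0 * np.pi)) ** 2

print(N, np.allclose(total, 1.0))            # 16 True
```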
|
https://en.wikipedia.org/wiki/Curvelet
|
In computing, minifloats are floating-point values represented with very few bits. This reduced precision makes them ill-suited for general-purpose numerical calculations, but they are useful for special purposes such as:
Additionally, they are frequently encountered as a pedagogical tool in computer-science courses to demonstrate the properties and structures of floating-point arithmetic and IEEE 754 numbers.
Depending on context, minifloat may mean any size less than 32 bits, any size less than or equal to 16 bits, or any size less than 16 bits. The term microfloat may mean any size less than or equal to 8 bits.[3]
This page uses the notation (S.E.M) to describe a mini float:
Minifloats can be designed following the principles of the IEEE 754 standard. Almost all use the smallest exponent for subnormal and normal numbers. Many use the largest exponent for infinity and NaN, indicated by (special exponent) SE = 1. Some minifloats use this exponent value normally, in which case SE = 0.
The exponent bias is B = 2^(E−1) − SE. This value ensures that all representable numbers have a representable reciprocal.
The notation can be converted to a (B, P, L, U) format as (2, M + 1, SE − 2^(E−1) + 1, 2^(E−1) − 1).
The Radeon R300 and R420 GPUs used an "fp24" floating-point format (1.7.16).[4] "Full Precision" in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 (Shader Model 2.0) graphics API initially supported both FP24 (as in ATI's R300 chip) and FP32 (as in Nvidia's NV30 chip) as "Full Precision", as well as FP16 as "Partial Precision" for vertex and pixel shader calculations performed by the graphics hardware.
Minifloats are also commonly used in embedded devices such as microcontrollers where floating point must be emulated in software. To speed up the computation, the mantissa typically occupies exactly half of the bits, so the register boundary automatically addresses the parts without shifting (i.e., (1.3.4) on 4-bit devices).[citation needed]
The bfloat16 (1.8.7) format is the first 16 bits of a single-precision number and was often used in image processing and machine learning before hardware support was added for other formats.
The IEEE 754-2008 revision has 16-bit (1.5.10) floats called "half precision" (as opposed to 32-bit single and 64-bit double precision).
In 2016 Khronos defined 10-bit (0.5.5) and 11-bit (0.5.6) unsigned formats for use with Vulkan.[5][6] These can be converted from positive half-precision values by truncating the sign and trailing digits.
In 2022 Nvidia and others announced support for an "fp8" format (1.5.2).[2] These can be converted from half precision by truncating the trailing digits.
Since 2023, IEEE SA Working Group P3109 has been working on a standard for 8-bit minifloats optimized for machine learning. The current draft defines not one format, but a family of 7 different formats, named "binary8pP", where "P" is a number from 1 to 7 and the bit pattern is (1.8−P.P−1). These also have SE = 0 and use the largest value as infinity and the pattern for negative zero as NaN.[7][2]
Also since 2023, 4-bit (1.2.1) floating-point numbers, without the four special IEEE values, have found use in accelerating large language models.[8][9]
A minifloat in 1 byte (8 bits) with 1 sign bit, 4 exponent bits and 3 significand bits (1.4.3) is demonstrated here. The exponent bias is defined as 7, centering the values around 1 to match other IEEE 754 floats,[10][11] so (for most values) the actual multiplier for exponent x is 2^(x−7). All IEEE 754 principles should be valid.[12] This form is quite common for instructional use.[citation needed]
Zero is represented as a zero exponent with a zero mantissa. The zero exponent means zero is a subnormal number with a leading "0." prefix, and with the zero mantissa all bits after the binary point are zero, so this value is interpreted as 0.000₂ × 2^(−6) = 0. Floating-point numbers use a signed zero, so −0 is also available and is equal to positive 0.
For the lowest exponent, the significand is extended with "0." and the exponent is treated as 1 higher, like the smallest normalized exponent (these are the subnormal numbers):
For all other exponents, the significand is extended with "1.":
Infinity values have the highest exponent, with the mantissa set to zero. The sign bit can be either positive or negative.
NaN values have the highest exponent, with the mantissa non-zero.
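As a concrete illustration of the decoding rules just described, the following Python sketch converts an 8-bit (1.4.3) pattern with bias 7 into its value; the function name and layout are illustrative assumptions, not part of any standard library.

```python
def decode_143(byte):
    """Decode an 8-bit (1.4.3) minifloat (bias 7) following IEEE 754 rules."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> 3) & 0b1111      # 4-bit exponent field
    mantissa = byte & 0b111              # 3-bit significand field

    if exponent == 0b1111:               # highest exponent: infinity or NaN
        return sign * float("inf") if mantissa == 0 else float("nan")
    if exponent == 0:                    # lowest exponent: zero and subnormals
        return sign * (mantissa / 8) * 2.0 ** (1 - 7)          # leading "0."
    return sign * (1 + mantissa / 8) * 2.0 ** (exponent - 7)   # leading "1."

print(decode_143(0b0_0000_000))  # 0.0
print(decode_143(0b0_0111_000))  # 1.0 (exponent field 7, bias 7)
print(decode_143(0b0_1110_111))  # 240.0, the largest finite value
```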
This is a chart of all possible values for this example 8-bit float:
There are only 242 different non-NaN values (if +0 and −0 are regarded as different), because 14 of the bit patterns represent NaNs.
At these small sizes, other bias values may be interesting; for instance, a bias of −2 makes the numbers 0–16 have the same bit representation as the integers 0–16, at the cost that no non-integer values can be represented.
Any bit allocation is possible. A format could give more of the bits to the exponent if it needs more dynamic range with less precision, or more of the bits to the significand if it needs more precision with less dynamic range. At the extreme, it is possible to allocate all the non-sign bits to the exponent (1.7.0), or all but one of them to the significand (1.1.6), leaving the exponent with only one bit. The exponent must be given at least one bit, or else it no longer makes sense as a float; it just becomes a signed number.
Here is a chart of all possible values for (1.3.4). M ≥ 2^(E−1) ensures that the precision remains at least 0.5 throughout the entire range.[13]
Tables like the above can be generated for any combination of SEMB (sign, exponent, mantissa/significand, and bias) values using a script in Python or in GDScript.
With only 64 values, it is possible to plot all the values in a diagram, which can be instructive.
These graphics demonstrate arithmetic on two 6-bit (1.3.2) minifloats, following the rules of IEEE 754 exactly. Green X's are NaN results, cyan X's are +infinity results, and magenta X's are −infinity results. The range of the finite results is filled with curves joining equal values, blue for positive and red for negative.
The smallest possible float size that follows all IEEE principles, including normalized numbers, subnormal numbers, signed zero, signed infinity, and multiple NaN values, is a 4-bit float with 1-bit sign, 2-bit exponent, and 1-bit mantissa.[14]
If normalized numbers are not required, the size can be reduced to 3 bits by reducing the exponent to 1 bit.
In situations where the sign bit can be excluded, each of the above examples can be reduced by one further bit, keeping only the first row of the above tables. A 2-bit float with a 1-bit exponent and a 1-bit mantissa would only have the values 0, 1, infinity, and NaN.
Removing the mantissa would allow only two values: 0 and infinity. Removing the exponent does not work: the above formulae produce 0 and √2/2. The exponent must be at least 1 bit, or else it no longer makes sense as a float (it would just be a signed number).
|
https://en.wikipedia.org/wiki/Minifloat
|
In mathematics and theoretical computer science, a Kleene algebra (/ˈkleɪni/ KLAY-nee; named after Stephen Cole Kleene) is a semiring that generalizes the theory of regular expressions: it consists of a set supporting union (addition), concatenation (multiplication), and Kleene star operations subject to certain algebraic laws. The addition is required to be idempotent (x + x = x for all x), and induces a partial order defined by x ≤ y if x + y = y. The Kleene star operation, denoted x*, must satisfy the laws of the closure operator.[1]
Kleene algebras have their origins in the theory of regular expressions and regular languages introduced by Kleene in 1951 and studied by others including V. N. Redko and John Horton Conway. The term was introduced by Dexter Kozen in the 1980s, who fully characterized their algebraic properties and, in 1994, gave a finite axiomatization.
Kleene algebras have a number of extensions that have been studied, including Kleene algebras with tests (KAT) introduced by Kozen in 1997.[2] Kleene algebras and Kleene algebras with tests have applications in formal verification of computer programs.[3] They have also been applied to specify and verify computer networks.[4]
Various inequivalent definitions of Kleene algebras and related structures have been given in the literature.[5]Here we will give the definition that seems to be the most common nowadays.
A Kleene algebra is a set A together with two binary operations + : A × A → A and · : A × A → A and one function * : A → A, written as a + b, ab and a* respectively, so that the following axioms are satisfied.
The above axioms define a semiring. We further require:
It is now possible to define a partial order ≤ on A by setting a ≤ b if and only if a + b = b (or equivalently: a ≤ b if and only if there exists an x in A such that a + x = b; with any definition, a ≤ b ≤ a implies a = b). With this order we can formulate the last four axioms about the operation *:
Intuitively, one should think of a + b as the "union" or the "least upper bound" of a and b, and of ab as some multiplication which is monotonic, in the sense that a ≤ b implies ax ≤ bx. The idea behind the star operator is a* = 1 + a + aa + aaa + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration".
Let Σ be a finite set (an "alphabet") and let A be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then A forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra.
Again let Σ be an alphabet. Let A be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of all languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of A again belong to A, and so does the Kleene star operation applied to any element of A. We obtain a Kleene algebra A with 0 being the empty set and 1 being the set that only contains the empty string.
Let M be a monoid with identity element e and let A be the set of all subsets of M. For two such subsets S and T, let S + T be the union of S and T and set ST = {st : s in S and t in T}. S* is defined as the submonoid of M generated by S, which can be described as {e} ∪ S ∪ SS ∪ SSS ∪ ... Then A forms a Kleene algebra with 0 being the empty set and 1 being {e}. An analogous construction can be performed for any small category.
The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces V and W, define V + W to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. Define V · W = span {v · w | v ∈ V, w ∈ W}, the linear span of the products of vectors from V and W respectively. Define 1 = span {I}, the span of the unit of the algebra. The closure of V is the direct sum of all powers of V:
V* = ⨁_{i=0}^{∞} V^i
Suppose M is a set and A is the set of all binary relations on M. Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra.
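For instance, here is a minimal Python sketch of this relational Kleene algebra, representing relations on a finite set as sets of pairs; the helper names are illustrative and not from any library.

```python
def compose(r, s):
    """Relational composition: (a, c) whenever (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def star(r, universe):
    """Reflexive transitive closure of r over the given finite universe."""
    closure = {(x, x) for x in universe}      # the multiplicative identity 1
    frontier = set(r)
    while not frontier <= closure:
        closure |= frontier
        frontier = compose(closure, r)
    return closure

M = {1, 2, 3}
r = {(1, 2), (2, 3)}
s = {(3, 1)}
print(r | s)                 # addition: union of relations
print(compose(r, s))         # multiplication: {(2, 1)}
print(star(r, M))            # r* contains (1, 3) as well as all (x, x) pairs
```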
Every Boolean algebra with operations ∨ and ∧ turns into a Kleene algebra if we use ∨ for +, ∧ for ·, and set a* = 1 for all a.
A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm, which computes the shortest path length between every two vertices of a weighted directed graph, by Kleene's algorithm, which computes a regular expression for every two states of a deterministic finite automaton.
Using the extended real number line, take a + b to be the minimum of a and b and ab to be the ordinary sum of a and b (with the sum of +∞ and −∞ being defined as +∞). a* is defined to be the real number zero for nonnegative a and −∞ for negative a. This is a Kleene algebra with zero element +∞ and one element the real number zero.
A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight.
For any two graph nodes (automaton states), the regular expression computed by Kleene's algorithm evaluates, in this particular Kleene algebra, to the shortest path length between the nodes.[7]
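The following Python sketch illustrates this correspondence by running the Floyd–Warshall triple loop directly over the min-plus Kleene algebra described above; the graph and function names are illustrative assumptions, and the simplification a* = 0 used here holds because the example's weights are nonnegative.

```python
INF = float("inf")

def kleene_min_plus(weights):
    """Shortest path lengths via the (min, +) Kleene algebra.

    weights[i][j] is the edge weight from i to j (INF, the algebra's zero,
    if there is no edge). With nonnegative weights, a* = 0 (the algebra's
    one), so Kleene's algorithm reduces to the Floyd-Warshall update
    d[i][j] = min(d[i][j], d[i][k] + d[k][j]).
    """
    n = len(weights)
    d = [row[:] for row in weights]
    for i in range(n):
        d[i][i] = min(d[i][i], 0.0)   # every node reaches itself with length 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

graph = [
    [INF, 3.0, 8.0],
    [INF, INF, 2.0],
    [1.0, INF, INF],
]
print(kleene_min_plus(graph))  # e.g. shortest 0 -> 2 is 5.0 via node 1
```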
Zero is the smallest element: 0 ≤ a for all a in A.
The sum a + b is the least upper bound of a and b: we have a ≤ a + b and b ≤ a + b, and if x is an element of A with a ≤ x and b ≤ x, then a + b ≤ x. Similarly, a₁ + ... + a_n is the least upper bound of the elements a₁, ..., a_n.
Multiplication and addition are monotonic: if a ≤ b, then a + x ≤ b + x, ax ≤ bx, and xa ≤ xb for all x in A.
Regarding the star operation, we have
If A is a Kleene algebra and n is a natural number, then one can consider the set M_n(A) consisting of all n-by-n matrices with entries in A.
Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that M_n(A) becomes a Kleene algebra.
Kleene introduced regular expressions and gave some of their algebraic laws.[9][10] Although he did not define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions.[11] Redko proved that no finite set of equational axioms can characterize the algebra of regular languages.[12] Salomaa gave complete axiomatizations of this algebra, though they depended on problematic inference rules.[13] The problem of providing a complete set of axioms, which would allow derivation of all equations among regular expressions, was intensively studied by John Horton Conway under the name of regular algebras;[14] however, the bulk of his treatment was infinitary.
In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages.[15] In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering a ≤ b as an abbreviation for a + b = b), and is equationally complete for the algebra of regular languages, that is, two regular expressions a and b denote the same language only if a = b follows from the above axioms.[16]
Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation a* = aa* + 1 = a*a + 1. This quasi-inverse is not necessarily unique.[17][18] In a Kleene algebra, a* is the least solution to the fixpoint equations X = aX + 1 and X = Xa + 1.[18]
Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem.[18]
|
https://en.wikipedia.org/wiki/Kleene_algebra
|
The FAO geopolitical ontology is an ontology developed by the Food and Agriculture Organization of the United Nations (FAO) to describe, manage and exchange data related to geopolitical entities such as countries, territories, regions and other similar areas.
An ontology is a kind of dictionary that describes information in a certain domain using concepts and relationships. It is often implemented using OWL (Web Ontology Language), an XML-based standard language that can be interpreted by computers.
The advantage of describing information in an ontology is that it enables the acquisition of domain knowledge by defining hierarchical structures of classes, adding individuals, setting object properties and datatype properties, and assigning restrictions.
The geopolitical ontology provides names in seven languages (Arabic, Chinese, French, English, Spanish, Russian and Italian) and identifiers in various international coding systems (ISO2, ISO3, AGROVOC, FAOSTAT, FAOTERM,[2] GAUL, UN, UNDP and DBPedia ID codes) for territories and groups. Moreover, the FAO geopolitical ontology tracks historical changes from 1985 up until today;[3] provides geolocation (geographical coordinates); implements relationships between countries, or between countries and groups, including properties such as has border with, is predecessor of, is successor of, is administered by, has members, and is in group; and disseminates country statistics including country area, land area, agricultural area, GDP or population.
The FAO geopolitical ontology provides a structured description of data sources. This includes: source name, source identifier, source creator and the source's update date. Concepts are described using the Dublin Core vocabulary.[4]
In summary, the main objectives of the FAO geopolitical ontology are:
It is possible to download the FAO geopolitical ontology in OWL[5] and RDF[6] formats. Documentation is available on the FAO Country Profiles geopolitical information web page.[7]
The geopolitical ontology contains:
The FAO geopolitical ontology is implemented in OWL. It consists of classes, properties, individuals and restrictions. Table 1 shows all classes, gives a brief description and lists some individuals that belong to each class. Note that the current version of the geopolitical ontology does not provide individuals of the class of "disputed" territories. Table 2 and Table 3 illustrate datatype properties and object properties.
The FAO geopolitical ontology embraces the W3C Linked Open Data (LOD) initiative[14] and released its RDF version in March 2011.
The term 'Linked Open Data' refers to a set of best practices for publishing and connecting structured data on the Web. The key technologies that support Linked Data are URIs, HTTP and RDF.
The RDF version of the geopolitical ontology complies with all Linked Data principles required for inclusion in the Linked Open Data cloud, as explained in the following.[15][16]
Every resource in the OWL format of the FAO Geopolitical Ontology has a unique URI. Dereferencing was implemented to allow three different URIs to be assigned to each resource, as follows:
In addition, the current URIs used for the OWL format needed to be kept to allow backwards compatibility for other systems that use them. Therefore, the new URIs for the FAO Geopolitical Ontology in LOD were carefully created, following "Cool URIs for the Semantic Web" and considering other good practices for URIs, such as DBpedia URIs.
The URIs of the geopolitical ontology need to be permanent; consequently, all transient information, such as year, version, or format, was avoided in the definition of the URIs. The new URIs can be accessed.[6]
For example, for the resource “Italy” the URIs are the following:
In addition, “owl:sameAs” is used to map the new URIs to the OWL representation.
When a non-information resource is looked up without any specific representation format requested, the server needs to redirect the request to an information resource with an HTML representation.
For example, to retrieve the resource "Italy",[17] which is a non-information resource, the server redirects to the HTML page for "Italy".[18]
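A minimal Python sketch of this kind of Linked Data lookup is shown below; the URI is a hypothetical placeholder rather than an actual FAO URI (those are given in the cited documentation), so only the content-negotiation pattern itself is illustrated.

```python
import requests

# Hypothetical non-information resource URI, used only to illustrate the pattern.
resource_uri = "http://example.org/geopolitical/resource/Italy"

# Asking for RDF: a Linked Data server is expected to redirect (303) to an RDF document.
rdf_response = requests.get(resource_uri, headers={"Accept": "application/rdf+xml"})
print(rdf_response.url)                          # final URL after following the redirect
print(rdf_response.headers.get("Content-Type"))

# Asking for HTML (e.g. from a browser): the server redirects to an HTML page instead.
html_response = requests.get(resource_uri, headers={"Accept": "text/html"})
print(html_response.url)
```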
The total number of triple statements in FAO Geopolitical Ontology is 22,495.
At least 50 links to a dataset already in the current LOD Cloud:
FAO Geopolitical Ontology has 195 links toDBpedia, which is already part of the LOD Cloud.
FAO Geopolitical Ontology provides the entire dataset as a RDF dump.[19]
The RDF version of the FAO Geopolitical Ontology has already been registered in CKAN,[20] and a request has been made to add it to the LOD Cloud.
The FAO Country Profiles is an information retrieval tool which groups the FAO's vast archive of information on its global activities in agriculture and rural development in one single area and catalogues it exclusively by country.
The FAO Country Profiles system provides access to country-based heterogeneous data sources.[21] By using the geopolitical ontology in the system, the following benefits are expected:[22]
Figure 3 shows a page in the FAO Country Profiles where the geopolitical ontology is described.
|
https://en.wikipedia.org/wiki/Geopolitical_ontology
|
Linux (/ˈlɪnʊks/, LIN-uuks)[15] is a family of open source Unix-like operating systems based on the Linux kernel,[16] an operating system kernel first released on September 17, 1991, by Linus Torvalds.[17][18][19] Linux is typically packaged as a Linux distribution (distro), which includes the kernel and supporting system software and libraries, most of which are provided by third parties, to create a complete operating system, designed as a clone of Unix and released under the copyleft GPL license.[20]
Thousands of Linux distributionsexist, many based directly or indirectly on other distributions;[21][22]popular Linux distributions[23][24][25]includeDebian,Fedora Linux,Linux Mint,Arch Linux, andUbuntu, while commercial distributions includeRed Hat Enterprise Linux,SUSE Linux Enterprise, andChromeOS. Linux distributions are frequently used in server platforms.[26][27]Many Linux distributions use the word "Linux" in their name, but theFree Software Foundationuses and recommends the name "GNU/Linux" to emphasize the use and importance ofGNUsoftware in many distributions, causing somecontroversy.[28][29]Other than the Linux kernel, key components that make up a distribution may include adisplay server (windowing system), apackage manager, a bootloader and aUnix shell.
Linux is one of the most prominent examples of free and open-sourcesoftwarecollaboration. While originally developed forx86basedpersonal computers, it has since beenportedto moreplatformsthan any other operating system,[30]and is used on a wide variety of devices including PCs,workstations,mainframesandembedded systems. Linux is the predominant operating system forserversand is also used on all of theworld's 500 fastest supercomputers.[g]When combined withAndroid, which is Linux-based and designed forsmartphones, they have thelargest installed baseof allgeneral-purpose operating systems.
The Linux kernel was designed by Linus Torvalds, following the lack of a working kernel for GNU, a Unix-compatible operating system made entirely of free software that had been undergoing development since 1983 by Richard Stallman. A working Unix system called Minix was later released, but its license was not entirely free at the time[31] and it was made for educational purposes. The first entirely free Unix for personal computers, 386BSD, did not appear until 1992, by which time Torvalds had already built and publicly released the first version of the Linux kernel on the Internet.[32] Like GNU and 386BSD, Linux did not have any Unix code, being a fresh reimplementation, and therefore avoided the legal issues of the time.[33] Linux distributions became popular in the 1990s and effectively made Unix technologies accessible to home users on personal computers, whereas previously they had been confined to sophisticated workstations.[34]
Desktop Linux distributions include awindowing systemsuch asX11orWaylandand adesktop environmentsuch asGNOME,KDE PlasmaorXfce. Distributions intended forserversmay not have agraphical user interfaceat all or include asolution stacksuch asLAMP.
Thesource codeof Linux may be used, modified, and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as theGNU General Public License(GPL). The license means creating novel distributions is permitted by anyone[35]and is easier than it would be for an operating system such asMacOSorMicrosoft Windows.[36][37][38]The Linux kernel, for example, is licensed under the GPLv2, with an exception forsystem callsthat allows code that calls the kernel via system calls not to be licensed under the GPL.[39][40][35]
Because of the dominance of Linux-basedAndroidonsmartphones, Linux, including Android, has thelargest installed baseof allgeneral-purpose operating systemsas of May 2022[update].[41][42][43]Linux is, as of March 2024[update], used by around 4 percent ofdesktop computers.[44]TheChromebook, which runs the Linux kernel-basedChromeOS,[45][46]dominates the USK–12education market and represents nearly 20 percent of sub-$300notebooksales in the US.[47]Linux is the leading operating system on servers (over 96.4% of the top one million web servers' operating systems are Linux),[48]leads otherbig ironsystems such asmainframe computers,[clarification needed][49]and is used on all of theworld's 500 fastest supercomputers[h](as of November 2017[update], having gradually displaced all competitors).[50][51]
Linux also runs onembedded systems, i.e., devices whose operating system is typically built into thefirmwareand is highly tailored to the system. This includesrouters,automationcontrols,smart home devices,video game consoles,televisions(Samsung and LGsmart TVs),[52][53][54]automobiles(Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota),[55]andspacecraft(Falcon 9rocket,Dragoncrew capsule, and theIngenuityMars helicopter).[56][57]
TheUnixoperating system was conceived of and implemented in 1969, atAT&T'sBell Labsin the United States, byKen Thompson,Dennis Ritchie,Douglas McIlroy, andJoe Ossanna.[58]First released in 1971, Unix was written entirely inassembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in theCprogramming language by Dennis Ritchie (except for some hardware and I/O routines). The availability of ahigh-level languageimplementation of Unix made itsportingto different computer platforms easier.[59]
As a 1956antitrust caseforbade AT&T from entering the computer business,[60]AT&T provided the operating system'ssource codeto anyone who asked. As a result, Unix use grew quickly and it became widely adopted byacademic institutionsand businesses. In 1984,AT&T divested itselfof itsregional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as aproprietaryproduct, where users were not legally allowed to modify it.[61][62]
Onyx Systemsbegan selling early microcomputer-based Unix workstations in 1980. Later,Sun Microsystems, founded as a spin-off of a student project atStanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations did not use commodity PC hardware, for which Linux was later originally developed, it represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.[63][64]
With Unix increasingly "locked in" as a proprietary product, theGNU Project, started in 1983 byRichard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely offree software. Work began in 1984.[65]Later, in 1985, Stallman started theFree Software Foundationand wrote theGNU General Public License(GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries,compilers,text editors, acommand-line shell, and awindowing system) were completed, although low-level elements such asdevice drivers,daemons, and thekernel, calledGNU Hurd, were stalled and incomplete.[66]
Minixwas created byAndrew S. Tanenbaum, acomputer scienceprofessor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although thecomplete source code of Minix was freely available, the licensing terms prevented it from beingfree softwareuntil the licensing changed in April 2000.[67]
While attending theUniversity of Helsinkiin the fall of 1990, Torvalds enrolled in a Unix course.[68]The course used aMicroVAXminicomputer runningUltrix, and one of the required texts wasOperating Systems: Design and ImplementationbyAndrew S. Tanenbaum. This textbook included a copy of Tanenbaum'sMinixoperating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems.[69]Frustrated by the licensing of Minix, which at the time limited it to educational use only,[67]he began to work on his operating system kernel, which eventually became the Linux kernel.
On July 3, 1991, to implement Unixsystem calls, Linus Torvalds attempted unsuccessfully to obtain a digital copy of thePOSIXstandardsdocumentationwith a request to thecomp.os.minixnewsgroup.[70]After not finding the POSIX documentation, Torvalds initially resorted to determining system calls fromSunOSdocumentation owned by the university for use in operating itsSun Microsystemsserver. He also learned some system calls from Tanenbaum's Minix text.
Torvalds began the development of the Linux kernel on Minix and applications written for Minix were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems.[71]GNU applications also replaced all Minix components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL.[72]Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system.[73]
Although not released until 1992, due tolegal complications, the development of386BSD, from whichNetBSD,OpenBSDandFreeBSDdescended, predated that of Linux. Linus Torvalds has stated that if theGNU kernelor 386BSD had been available in 1991, he probably would not have created Linux.[74][31]
Linus Torvalds had wanted to call his invention "Freax", aportmanteauof "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project'smakefilesincluded the name "Freax" for about half a year. Torvalds considered the name "Linux" but dismissed it as too egotistical.[75]
To facilitate development, the files were uploaded to theFTP serverofFUNETin September 1991. Ari Lemmke, Torvalds' coworker at theHelsinki University of Technology(HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds.[75]Later, however, Torvalds consented to "Linux".
According to anewsgrouppost by Torvalds,[15]the word "Linux" should be pronounced (/ˈlɪnʊks/ⓘLIN-uuks) with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code.[76]However, in this recording, he pronounces Linux as/ˈlinʊks/(LEEN-uuks) with a short butclose front unrounded vowel, instead of anear-close near-front unrounded vowelas in his newsgroup post.
The adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such asNASAstarted to replace their increasingly expensive machines withclustersof inexpensive commodity computers running Linux. Commercial use began whenDellandIBM, followed byHewlett-Packard, started offering Linux support to escapeMicrosoft's monopoly in the desktop operating system market.[77]
Today, Linux systems are used throughout computing, fromembedded systemsto virtually allsupercomputers,[51][78]and have secured a place in server installations such as the popularLAMPapplication stack. The use of Linux distributions in home and enterprise desktops has been growing.[79][80][81][82][83][84][85]
Linux distributions have also become popular in thenetbookmarket, with many devices shipping with customized Linux distributions installed, and Google releasing their ownChromeOSdesigned for netbooks.
Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system onsmartphonesand very popular ontabletsand, more recently, onwearables, and vehicles.Linux gamingis also on the rise withValveshowing its support for Linux and rolling outSteamOS, its own gaming-oriented Linux distribution, which was later implemented in theirSteam Deckplatform. Linux distributions have also gained popularity with various local and national governments, such as the federal government ofBrazil.[86]
Linus Torvalds is the lead maintainer for the Linux kernel and guides its development, whileGreg Kroah-Hartmanis the lead maintainer for the stable branch.[87]Zoë Kooymanis the executive director of the Free Software Foundation,[88]which in turn supports the GNU components.[89]Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries.
Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additionalpackage managementsoftware in the form of Linux distributions.
Many developers ofopen-sourcesoftware agree that the Linux kernel was not designed but ratherevolvedthroughnatural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations – and because the mutations were less than random, they were faster and more directed thanalpha-particles in DNA."[90]Eric S. Raymondconsiders Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers."[91]Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security,[92]cannot be evolved into, "this is not a biological system at the end of the day, it's a software system."[93]
A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses amonolithic kernel, the Linux kernel, which handles process control, networking, access to theperipherals, andfile systems.Device driversare either integrated directly with the kernel or added as modules that are loaded while the system is running.[94]
The GNUuserlandis a key part of most systems based on the Linux kernel, with Android being the notable exception. TheGNU C library, an implementation of theC standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, thetoolchainis a broad collection of programming tools vital to Linux development (including thecompilersused to build the Linux kernel itself), and thecoreutilsimplement many basicUnix tools. The GNU Project also developsBash, a popularCLIshell. Thegraphical user interface(or GUI) used by most Linux systems is built on top of an implementation of theX Window System.[95]More recently, some of the Linux community has sought to move to usingWaylandas the display server protocol, replacing X11.[96][97]
Many other open-source software projects contribute to Linux systems.
Installed components of a Linux system include the following:[95][99]
Theuser interface, also known as theshell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available throughterminal emulatorwindows or on a separatevirtual console.
CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is theBourne-Again Shell(bash), originally developed for the GNU Project;other shellssuch asZshare also used.[100][101]Most low-level Linux components, including various parts of theuserland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simpleinter-process communication.
On desktop systems, the most popular user interfaces are theGUI shells, packaged together with extensivedesktop environments, such asKDE Plasma,GNOME,MATE,Cinnamon,LXDE,Pantheon, andXfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X" or "X11". It providesnetwork transparencyand permits a graphical application running on one system to be displayed on another where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network.[102]Several X display servers exist, with the reference implementation,X.Org Server, being the most popular.
Several types ofwindow managersexist for X11, includingtiling,dynamic,stacking, andcompositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. SimplerX window managerssuch asdwm,ratpoison, ori3wmprovide aminimalistfunctionality, while more elaborate window managers such asFVWM,Enlightenment, orWindow Makerprovide more features such as a built-intaskbarandthemes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such asMutter(GNOME),KWin(KDE), orXfwm(xfce), although users may choose to use a different window manager if preferred.
Wayland is a display server protocol intended as a replacement for the X11 protocol; as of 2022[update], it has received relatively wide adoption.[103]Unlike X11, Wayland does not need an external window manager and compositing manager. Therefore, a Wayland compositor takes the role of the display server, window manager, and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has already been successfully ported since version 19.[104]Additionally, many window managers have been made for Wayland, such as Sway or Hyprland, as well as other graphical utilities such as Waybar or Rofi.
Linux currently has two modern kernel-userspace APIs for handling video input devices:V4L2API for video streams and radio, andDVBAPI for digital TV reception.[105]
Due to the complexity and diversity of different devices, and due to the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. Also, a good userspace device library is the key to the success of having userspace applications to be able to work with all formats supported by those devices.[106][107]
The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used.[108]Somefreeandopen-source software licensesare based on the principle ofcopyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft and is used for the Linux kernel and many of the components from the GNU Project.[109]
Linux-based distributions are intended by developers forinteroperabilitywith other operating systems and established computing standards. Linux systems adhere to POSIX,[110]Single UNIX Specification(SUS),[111]Linux Standard Base(LSB),ISO, andANSIstandards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT.[112][113]The Open Grouphas tested and certified at least two Linux distributions as qualifying for the Unix trademark,EulerOSandInspur K-UX.[114]
Free software projects, although developed throughcollaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution.
Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such asapt,yum,zypper,pacmanorportageto install, remove, and update all of a system's software from one central location.[115]
A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis,Debianbeing a well-known example. Others maintain a community version of their commercial distributions, asRed Hatdoes withFedora, andSUSEdoes withopenSUSE.[116][117]
In many cities and regions, local associations known asLinux User Groups(LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects haveIRCchatrooms ornewsgroups.Online forumsare another means of support, with notable examples beingUnix & Linux Stack Exchange,[118][119]LinuxQuestions.organd the various distribution-specific support and community forums, such as ones forUbuntu, Fedora,Arch Linux,Gentoo, etc. Linux distributions hostmailing lists; commonly there will be a specific topic such as usage or development for a given list.
There are several technology websites with a Linux focus. Print magazines on Linux often bundlecover disksthat carry software or even complete Linux distributions.[120][121]
Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and free software. An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified.[122]Some of the major corporations that provide contributions includeIntel,Samsung,Google,AMD,Oracle, andFacebook.[122]Several corporations, notably Red Hat,Canonical, andSUSEhave built a significant business around Linux distributions.
Thefree software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen assymbiotic. One commonbusiness modelof commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks.[123]
Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such asCP/M,Apple DOS, and versions of theclassic Mac OSbefore 7.6 freely copyable (but not modifiable). As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture.[124][125]
Mostprogramming languagessupport Linux either directly or through third-party community basedports.[126]The original development tools used for building both Linux applications and operating system programs are found within theGNU toolchain, which includes theGNU Compiler Collection(GCC) and theGNU Build System. Amongst others, GCC provides compilers forAda,C,C++,GoandFortran. Many programming languages have a cross-platform reference implementation that supports Linux, for examplePHP,Perl,Ruby,Python,Java,Go,RustandHaskell. First released in 2003, theLLVMproject provides an alternative cross-platform open-source compiler for many languages.Proprietarycompilers for Linux include theIntel C++ Compiler,Sun Studio, andIBM XL C/C++ Compiler.BASICis available inproceduralform fromQB64,PureBasic,Yabasic,GLBasic,Basic4GL,XBasic,wxBasic,SdlBasic, andBasic-256, as well asobject orientedthroughGambas,FreeBASIC, B4X,Basic for Qt, Phoenix Object Basic,NS Basic, ProvideX,Chipmunk Basic,RapidQandXojo.Pascalis implemented throughGNU Pascal,Free Pascal, andVirtual Pascal, as well as graphically viaLazarus,PascalABC.NET, orDelphiusingFireMonkey(previously throughBorland Kylix).[127][128]
A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted atscripting, text processing and system configuration and management in general. Linux distributions supportshell scripts,awk,sedandmake. Many programs also have an embedded programming language to support configuring or programming themselves. For example,regular expressionsare supported in programs likegrepandlocate, the traditional Unix message transfer agentSendmailcontains its ownTuring completescripting system, and the advanced text editorGNU Emacsis built around a general purposeLispinterpreter.[129][130][131]
Most distributions also include support forPHP,Perl,Ruby,Pythonand otherdynamic languages. While not as common, Linux also supportsC#and otherCLIlanguages(viaMono),Vala, andScheme.Guile Schemeacts as anextension languagetargeting the GNU system utilities, seeking to make the conventionally small,static, compiled C programs ofUnix designrapidly and dynamically extensible via an elegant,functionalhigh-level scripting system; many GNU programs can be compiled with optional Guilebindingsto this end. A number ofJava virtual machinesand development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects likeKaffeandJikes RVM;Kotlin,Scala,Groovyand otherJVM languagesare also available.
GNOMEandKDEare popular desktop environments and provide a framework for developing applications. These projects are based on theGTKandQtwidget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There area numberofIntegrated development environmentsavailable includingAnjuta,Code::Blocks,CodeLite,Eclipse,Geany,ActiveState Komodo,KDevelop,Lazarus,MonoDevelop,NetBeans, andQt Creator, while the long-established editorsVim,nanoandEmacsremain popular.[132]
The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range ofcomputer architectures, includingARM-based Android smartphones and theIBM Zmainframes. Specialized distributions and kernel forks exist for less mainstream architectures; for example, theELKSkernelforkcan run onIntel 8086orIntel 8028616-bit microprocessors,[133]while theμClinuxkernel fork may run on systems without amemory management unit.[134]The kernel also runs on architectures that were only ever intended to use a proprietary manufacturer-created operating system, such asMacintoshcomputers[135][136](withPowerPC,Intel, andApple siliconprocessors),PDAs,video game consoles,portable music players, and mobile phones.
Linux has a reputation for supporting old hardware very well by maintaining standardized drivers for a long time.[137]There are several industry associations and hardwareconferencesdevoted to maintaining and improving support for diverse hardware under Linux, such asFreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible.[138]
In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations.[139]
Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux.[140]The Linux market is growing, and the Linux operating system market size is expected to see a growth of 19.2% by 2027, reaching $15.64 billion, compared to $3.89 billion in 2019.[141]Analysts project a Compound Annual Growth Rate (CAGR) of 13.7% between 2024 and 2032, culminating in a market size of US$34.90 billion by the latter year.[citation needed]Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom fromvendor lock-in.[142][143]
As of 2024, estimates suggest Linux accounts for at least 80% of the public cloud workload, partly thanks to its widespread use in platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.[150][151][152]
ZDNet reports that 96.3% of the top one million web servers are running Linux.[153][154] W3Techs states that Linux powers at least 39.2% of websites whose operating system is known, with other estimates saying 55%.[155][156]
The Linux kernel islicensedunder the GNU General Public License (GPL), version 2. The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms.[172]Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use theGNU Lesser General Public License(LGPL), a more permissive variant of the GPL, and theX.Orgimplementation of the X Window System uses theMIT License.
Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3.[173][174]He specifically dislikes some provisions in the new license which prohibit the use of the software indigital rights management.[175]It would also be impractical to obtain permission from all the copyright holders, who number in the thousands.[176]
A 2001 study ofRed Hat Linux7.1 found that this distribution contained 30 millionsource lines of code.[177]Using theConstructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost aboutUS$1.82 billion[178]to develop in 2023 in the United States.[177]Most of the source code (71%) was written in the C programming language, but many other languages were used, includingC++,Lisp, assembly language, Perl, Python,Fortran, and variousshell scriptinglanguages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total.[177]
In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007).[179]This distribution contained close to 283 million source lines of code, and the study estimated that it would have required about seventy three thousand man-years and costUS$10.2 billion[178](in 2023 dollars) to develop by conventional means.
In the United States, the nameLinuxis a trademark registered to Linus Torvalds.[14]Initially, nobody registered it. However, on August 15, 1994, William R. Della Croce Jr. filed for the trademarkLinux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled.[181]The licensing of the trademark has since been handled by theLinux Mark Institute(LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks,[182]but later changed this in favor of offering a free, perpetual worldwide sublicense.[183]
The Free Software Foundation (FSF) prefersGNU/Linuxas the name when referring to the operating system as a whole, because it considers Linux distributions to bevariantsof the GNU operating system initiated in 1983 byRichard Stallman, president of the FSF.[28][29]The foundation explicitly takes no issue over the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it.
A minority of public figures and software projects other than Stallman and the FSF, notably distributions consisting of only free software, such as Debian (which had been sponsored by the FSF up to 1996),[184]also useGNU/Linuxwhen referring to the operating system as a whole.[185][186][187]Most media and common usage, however, refers to this family of operating systems simply asLinux, as do many large Linux distributions (for example,SUSE LinuxandRed Hat Enterprise Linux).
As of May 2011[update], about 8% to 13% of thelines of codeof the Linux distribution Ubuntu (version "Natty") is made of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, 6% is taken by the Linux kernel, increased to 9% when including its direct dependencies.[188]
|
https://en.wikipedia.org/wiki/Linux
|
TheGlobal System for Mobile Communications(GSM) is a family of standards to describe the protocols for second-generation (2G) digitalcellular networks,[2]as used by mobile devices such asmobile phonesandmobile broadband modems. GSM is also atrade markowned by theGSM Association.[3]"GSM" may also refer to the voice codec initially used in GSM.[4]
2G networks developed as a replacement for first generation (1G) analog cellular networks. The original GSM standard, which was developed by theEuropean Telecommunications Standards Institute(ETSI), originally described a digital, circuit-switched network optimized forfull duplexvoicetelephony, employingtime division multiple access(TDMA) between stations. This expanded over time to includedata communications, first bycircuit-switched transport, then bypacketdata transport via its upgraded standards,GPRSand thenEDGE. GSM exists in various versions based on thefrequency bands used.
GSM was first implemented in Finland in December 1991.[5] It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard, CDMA.[6] Its share reached over 90% of the market by the mid-2010s, and it operated in over 219 countries and territories.[2] The specifications and maintenance of GSM passed over to the 3GPP body in 2000,[7] which at the time developed the third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation 5G standards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwide started to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the plethora of mobile phone technologies that evolved from it, or for mobile phones themselves.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when theEuropean Conference of Postal and Telecommunications Administrations(CEPT) set up theGroupe Spécial Mobile(GSM) committee and later provided a permanent technical-support group based inParis. Five years later, in 1987, 15 representatives from 13 European countries signed amemorandum of understandinginCopenhagento develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8]The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12]
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers fromthe four big EU countries[clarification needed]cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSMMoUwas tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed, guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy).[13] In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI).[10][11][12] The IEEE/RSE awarded Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell medal for their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14] GSM (2G) has since evolved into 3G, 4G and 5G.
In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. It was long believed that the former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja.[15] In 2021 a former Nokia engineer, Pekka Lonka, revealed to Helsingin Sanomat that he had made a test call just a couple of hours earlier: "World's first GSM call was actually made by me. I called Marjo Jousinen, in Salo."[16] The following year saw the sending of the first short message service (SMS, or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, called DCS 1800, became operational in the UK by 1993. Also that year, Telstra became the first network operator to deploy a GSM network outside Europe, and the first practical hand-held GSM mobile phone became available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States, and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association was formed. Pre-paid GSM SIM cards were launched in 1996, and worldwide GSM subscribers passed 100 million in 1998.[11]
In 2000 the first commercial General Packet Radio Service (GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. Enhanced Data rates for GSM Evolution (EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11]
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11]
The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17]
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS), a code-division multiple access (CDMA) technology, or the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP.[18]
GSM, for the first time, set a common standard for wireless networks in Europe. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with their home network. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]
Telstra in Australia shut down its 2G GSM network on 1 December 2016, becoming the first mobile network operator to decommission a GSM network.[20] The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility in the United States.[21] Optus in Australia completed the shutdown of its 2G GSM network on 1 August 2017; the part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier in the year, in April 2017.[22] Singapore shut down 2G services entirely in April 2017.[23]
The network is structured into several discrete sections.
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network: macro, micro, pico, femto and umbrella cells.
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height, antenna gain, and propagation conditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell,[24] where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance.
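One common way to arrive at the 35 km figure is from the size of the timing-advance field. The sketch below is illustrative only: the 6-bit timing-advance range (0–63 steps) and the GSM bit period of 48/13 µs are assumptions drawn from general descriptions of GSM, not values quoted in this article.

```python
# Illustrative back-of-the-envelope for the 35 km limit mentioned above.
# Assumed (not stated here): 6-bit timing advance, bit period of 48/13 microseconds.

C = 299_792_458              # speed of light, m/s
bit_period_s = 48 / 13 * 1e-6
ta_steps = 63                # maximum timing-advance value

# Each timing-advance step compensates one bit period of round-trip delay,
# i.e. half a bit period of one-way propagation.
metres_per_step = C * bit_period_s / 2
print(round(metres_per_step))                        # ~554 m per step
print(round(ta_steps * metres_per_step / 1000, 1))   # ~34.9 km, i.e. roughly 35 km
```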
GSM supports indoor coverage – achievable by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms.[25] TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26]
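A quick back-of-the-envelope check of the figures above (a sketch based only on the numbers quoted here, not on the GSM specifications): dividing the frame among its eight timeslots gives the per-slot duration and the gross bit budget per burst.

```python
# Sanity-check of the TDMA figures quoted above.
gross_rate_bit_s = 270.833e3   # channel data rate shared by the 8 timeslots
frame_s = 4.615e-3             # duration of one TDMA frame
slots_per_frame = 8

slot_s = frame_s / slots_per_frame
bits_per_slot = gross_rate_bit_s * slot_s

print(f"timeslot duration ~ {slot_s * 1e3:.3f} ms")      # ~0.577 ms
print(f"gross bits per timeslot ~ {bits_per_slot:.1f}")  # ~156 bits
```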
The transmission power in the handset is limited to a maximum of 2 watts in GSM 850/900 and 1 watt in GSM 1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27] with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.
One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card[3] containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets, simply by changing the SIM.
Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is also possible to hack past a lock imposed by a network operator.
In some countries and regions (e.g. Brazil and Germany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]
GSM was intended to be a secure wireless system. It includes user authentication using a pre-shared key and challenge–response, as well as over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.[29]
Research findings indicate that GSM is susceptible to hacking by script kiddies, a term referring to inexperienced individuals using readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Despite the term "script kiddies" implying a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality of cellular networks. Given that GSM remains the main cellular technology in numerous countries, its susceptibility to malicious attacks is a threat that needs to be addressed.[30]
The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), which uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack.[31] The system supports multiple algorithms so operators may replace that cipher with a stronger one.
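For readers curious about the structure of these ciphers, the following is a minimal sketch of A5/1's irregular, majority-vote clocking, based on public descriptions of the cipher. The register lengths, tap positions and clocking bits are taken from those descriptions rather than from this article, and the key/frame-number loading and warm-up clocking are omitted, so this illustrates the mechanism rather than providing a usable implementation.

```python
# Illustrative sketch of A5/1's "majority vote" clocking (not a working cipher).

REGS = [  # (register length, feedback tap positions, clocking-bit position)
    (19, (18, 17, 16, 13), 8),
    (22, (21, 20), 10),
    (23, (22, 21, 20, 7), 10),
]

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def step(state):
    """Clock the three LFSRs by the majority rule; return (new state, keystream bit)."""
    clock_bits = [(s >> pos) & 1 for s, (_, _, pos) in zip(state, REGS)]
    maj = majority(*clock_bits)
    new_state = []
    for s, (length, taps, _), cb in zip(state, REGS, clock_bits):
        if cb == maj:  # a register advances only when its clocking bit agrees with the majority
            fb = 0
            for t in taps:
                fb ^= (s >> t) & 1
            s = ((s << 1) | fb) & ((1 << length) - 1)
        new_state.append(s)
    # The keystream bit is the XOR of the most significant bits of the three registers.
    out = 0
    for s, (length, _, _) in zip(new_state, REGS):
        out ^= (s >> (length - 1)) & 1
    return new_state, out

# Demo with an arbitrary, made-up internal state (not derived from a real key).
state = [0x5A5A5, 0x2AAAAA, 0x155555]
bits = []
for _ in range(16):
    state, b = step(state)
    bits.append(b)
print(bits)
```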
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known-plaintext attacks.[32] He said that it is possible to build "a full GSM interceptor... from open-source components" but that they had not done so because of legal concerns.[33] Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.[34]
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35]
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that, although the GEA-1 algorithm uses a 64-bit key, it actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have arisen unless it was intentional. This may have been done in order to satisfy European controls on the export of cryptographic programs.[36][37][38]
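To put the "40 bits of security" figure in perspective, here is a rough, purely illustrative comparison of search sizes; the guess rate below is an arbitrary assumption, not a figure from the cited research.

```python
# Rough arithmetic behind the 64-bit vs 40-bit comparison above.
guesses_per_second = 10**9     # assumed rate, purely for illustration
nominal_keyspace = 2**64       # what a 64-bit key would suggest
effective_keyspace = 2**40     # security GEA-1 actually provides, per the 2021 analysis

print(nominal_keyspace // effective_keyspace)          # 2**24: roughly 16.8 million times smaller search
print(effective_keyspace / guesses_per_second / 3600)  # roughly 0.3 hours at the assumed rate
```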
The GSM systems and services are described in a set of standards governed by ETSI, which maintains a full list.[39]
Several open-source software projects exist that provide certain GSM features,[40] such as a base transceiver station developed by OpenBTS and various parts of the stack provided by the Osmocom project.[41]
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time, which means they remain under patent protection for a number of years.[citation needed]
The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system, which was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. As of 2011[update], there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
|
https://en.wikipedia.org/wiki/GSM#Encryption
|
In statistics, econometrics and related fields, multidimensional analysis (MDA) is a data analysis process that groups data into two categories: data dimensions and measurements. For example, a data set consisting of the number of wins for a single football team at each of several years is a single-dimensional (in this case, longitudinal) data set. A data set consisting of the number of wins for several football teams in a single year is also a single-dimensional (in this case, cross-sectional) data set. A data set consisting of the number of wins for several football teams over several years is a two-dimensional data set.
In many disciplines, two-dimensional data sets are also called panel data.[1] While, strictly speaking, two- and higher-dimensional data sets are "multi-dimensional", the term "multidimensional" tends to be applied only to data sets with three or more dimensions.[2] For example, some forecast data sets provide forecasts for multiple target periods, conducted by multiple forecasters, and made at multiple horizons. The three dimensions provide more information than can be gleaned from two-dimensional panel data sets.
Computer software for MDA includes Online analytical processing (OLAP) for data in relational databases, pivot tables for data in spreadsheets, and Array DBMSs for general multi-dimensional data (such as raster data) in science, engineering, and business.
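As a small illustration of the football example (not part of the article; the team names and win counts below are made up), the same records can be viewed one-dimensionally or reshaped into a two-dimensional pivot table, sketched here with the pandas library:

```python
import pandas as pd

# Made-up win counts for two teams over two years.
records = pd.DataFrame({
    "team": ["A", "A", "B", "B"],
    "year": [2022, 2023, 2022, 2023],
    "wins": [10, 12, 8, 9],
})

# One team over several years (longitudinal) is a single-dimensional data set.
print(records[records.team == "A"].set_index("year")["wins"])

# Several teams over several years form a two-dimensional (panel) data set,
# laid out here as a pivot table with one data dimension per axis.
print(records.pivot_table(index="team", columns="year", values="wins"))
```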
|
https://en.wikipedia.org/wiki/Multidimensional_analysis
|
Website spoofing is the act of creating a website with the intention of misleading readers into believing that the website has been created by a different person or organization.
Normally, the spoof website will adopt the design of the target website, and it sometimes has a similar URL.[1] A more sophisticated attack results in an attacker creating a "shadow copy" of the World Wide Web by having all of the victim's traffic go through the attacker's machine, causing the attacker to obtain the victim's sensitive information.[2]
Another technique is to use a 'cloaked' URL.[3] By using domain forwarding, or inserting control characters, the URL can appear to be genuine while concealing the actual address of the malicious website. Punycode can also be used for this purpose. Punycode-based attacks exploit characters that look similar across different writing systems in common fonts. For example, in many fonts the Greek letter tau (τ) is similar in appearance to the Latin lowercase letter t. However, the Greek letter tau is represented in Punycode as 5xa, while the Latin lowercase letter is simply represented as t, since it is part of the ASCII character set. In 2017, a security researcher managed to register the domain xn--80ak6aa92e.com and have it show up on several mainstream browsers as apple.com. While the characters used did not belong to the Latin script, due to the default font on those browsers the end result was non-Latin characters that were indistinguishable from their Latin-script counterparts.[4][5]
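The encodings mentioned above can be reproduced directly. The short sketch below uses Python's built-in punycode codec; the Cyrillic lookalike string is reconstructed from public reports of the 2017 demonstration, not from this article, so treat it as an assumption.

```python
# Punycode forms of the homograph examples discussed above.

greek_tau = "\u03c4"                      # τ
print(greek_tau.encode("punycode"))       # b'5xa', matching the "5xa" form above

# Cyrillic letters that render like Latin "apple" in many default fonts
# (assumed composition of the 2017 proof-of-concept domain).
lookalike = "\u0430\u0440\u0440\u04cf\u0435"
print(lookalike == "apple")               # False: entirely different code points
print(b"xn--" + lookalike.encode("punycode"))  # ACE label of the lookalike domain
```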
The objective may be fraudulent, often associated with phishing or e-mail spoofing, or to lure potential victims to scams such as a get-rich-quick scheme, as in the case of fake news articles with sensational titles that purport to describe incidents involving popular celebrities, complete with a forged interview, and that lead victims to a cryptocurrency scam.[6][7] Because the purpose is often malicious, "spoof" (an expression whose base meaning is innocent parody) is a poor term for this activity, so more accountable organisations such as government departments and banks tend to avoid it, preferring more explicit descriptors such as "fraud", "counterfeit" or "phishing".[8][9]
A relatively more benign use of website spoofing is to criticize or make fun of the person or body whose website the spoofed site purports to represent. As an example of the use of this technique to parody an organisation, in November 2006 two spoof websites, www.msfirefox.com and www.msfirefox.net, were produced claiming that Microsoft had bought Firefox and released "Microsoft Firefox 2007."[10] A similar incident occurred in 2023 when the culture jamming collective Barbie Liberation Organization created a satirical parody page closely resembling the Mattel corporate website, using the URL mattel-corporate.com.[11] There they announced a fictitious line of Barbie dolls called "MyCelia EcoWarrior", alongside a series of hoax videos in which actress Daryl Hannah posed as a spokesperson for Mattel to lend further legitimacy to the nonexistent dolls, leveraging the publicity surrounding the 2023 live-action film.[12] The website's heavy resemblance to the legitimate Mattel corporate site led a number of news outlets to mistakenly report it as real; they eventually issued corrections and removed the articles in question.[13][12]
Spoofed websites are the main focus of efforts to develop anti-phishing software, though there are concerns about the effectiveness of such tools. A majority of efforts are focused on the PC market, leaving mobile devices lacking.[14]
DNS is the layer at which botnets control drones. In 2006, OpenDNS began offering a free service to prevent users from entering website spoofing sites. Essentially, OpenDNS has gathered a large database from various anti-phishing and anti-botnet organizations as well as its own data to compile a list of known website spoofing offenders. When a user attempts to access one of these bad websites, they are blocked at the DNS level. APWG statistics show that most phishing attacks use URLs, not domain names, so there would be a large amount of website spoofing that OpenDNS would be unable to track. At the time of release, OpenDNS was unable to prevent unnamed phishing exploits hosted on Yahoo, Google and the like.[15]
|
https://en.wikipedia.org/wiki/Website_spoofing
|
In mathematics, Gödel's speed-up theorem, proved by Gödel (1936), shows that there are theorems whose proofs can be drastically shortened by working in more powerful axiomatic systems.
Kurt Gödel showed how to find explicit examples of statements in formal systems that are provable in that system but whose shortest proof is unimaginably long. For example, the statement:

"This statement cannot be proved in Peano arithmetic in fewer than a googolplex symbols"
is provable in Peano arithmetic (PA) but the shortest proof has at least a googolplex symbols, by an argument similar to the proof of Gödel's first incompleteness theorem: if PA is consistent, then it cannot prove the statement in fewer than a googolplex symbols, because the existence of such a proof would itself be a theorem of PA, a contradiction. But simply enumerating all strings of length up to a googolplex and checking that each such string is not a proof (in PA) of the statement yields a proof of the statement (which is necessarily longer than a googolplex symbols).
The statement has a short proof in a more powerful system: in fact the proof given in the previous paragraph is a proof in the system of Peano arithmetic plus the statement "Peano arithmetic is consistent" (which, per the incompleteness theorem, cannot be proved in Peano arithmetic).
In this argument, Peano arithmetic can be replaced by any more powerful consistent system, and a googolplex can be replaced by any number that can be described concisely in the system.
Harvey Friedman found some explicit natural examples of this phenomenon, giving explicit statements in Peano arithmetic and other formal systems whose shortest proofs are extremely long (Smoryński 1982). For example, he gave a statement, a special case of Kruskal's theorem, that is provable in Peano arithmetic but whose shortest proof has length at least A(1000), where A(0) = 1 and A(n+1) = 2^A(n). The statement has a short proof in second-order arithmetic.
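To get a feel for how fast the bound A(1000) grows, here is a tiny, purely illustrative computation of the first few values of A; anything beyond A(5) is far too large to write down.

```python
def A(n):
    # A(0) = 1, A(n+1) = 2 ** A(n), as defined above.
    return 1 if n == 0 else 2 ** A(n - 1)

for n in range(6):
    v = A(n)
    print(n, v if v < 10**9 else f"a number with {len(str(v))} digits")
```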
If one takes Peano arithmetic together with the negation of the statement above, one obtains an inconsistent theory whose shortest known derivation of a contradiction is comparably long.
|
https://en.wikipedia.org/wiki/G%C3%B6del%27s_speed-up_theorem
|